Sex-specific expression, synthesis and localization of aromatase regulators in one-year-old Atlantic salmon ovaries and testes.
Transcripts for dax1, foxl2, mis and sf1 are co-expressed in the somatic companion cells of teleost germ cells. These regulatory factors function, in part, to modulate the transcription of aromatase, particularly cyp19a, the terminal enzyme of estrogen biosynthesis. At least two separate aromatase loci exist in teleost fish that encode distinct isoforms. The activity of two forms, cyp19a and cyp19b1, is predominantly associated with the ovary and the brain, respectively. We isolated sequences that compose the proximal promoters of cyp19a, cyp19b1 and foxl2a, to identify potential transcription factor binding motifs to define sex-specific regulatory profiles for each gene. We also provide evidence for the translation and immunological localization of DAX-1, FOXL2 and MIS to the endoplasmic reticulum and accumulation within secretory vesicles of the salmon oocyte. We found no evidence for the expression of CYP19A or CYP19B1 in the oocyte at the one-year-old stage. However, synthesis of both aromatases was localized to testicular germ and soma cells at this early stage of development. Production of these regulatory factors in the germ cells may serve to modulate the transcription and activity of endogenous aromatase and/or contribute to the differentiation of the neighbouring companion cells through secretory signaling. |
Mum Beats Off Trio Of Thugs With Furious Assault To Rescue Baby
Spanish police have revealed how a trio of brutal robbers who planned to kidnap a baby girl and use her to force her parents into opening a safe were chased off after a furious assault by the baby's mother.
The trio had expected most of the trouble would come from the two-month-old baby girl's father Denis Awolowo, 37, and as he climbed out of the car with his family they attacked him, stabbing him to the ground and shouting: "grab the girl".
But they reckoned without the fury of his wife Jamila, 29, who despite weighing just 7 stone launched herself at the robbers, scratching their faces, ripping out hair and biting them before scrambling back into the car and slamming the doors shut – and locking them.
Police spokesman Mampu Valadez said: "They were completely caught off guard by her fury, they thought after immobilising her husband it would be easy to grab the kid and get her to do what they wanted, but they reckoned without a mother's instinct to protect her child."
The young family had only just driven up to their home in the city of Almeria, in south-east Spain, when a gang who had been hiding pounced on them. "They suspected that the family had cash and valuables at home, and had tried to break in without success. When they saw the family arriving they came up with the idea of grabbing the kid, and using her to force the parents to do what they wanted."
The officer said that the young woman had used "enormous violence" on the men who after making a half-hearted effort to break the window to the car, had fled before police could arrive.
Three days later, Spanish police managed to identify one of the kidnappers, 41, a Spanish man, and after interviewing him have arrested his two accomplices, both aged 48, and from Morocco.
The father was not badly hurt from his stab wounds and is recovering in hospital. The family were originally from Morocco but settled in Spain to build a new life for themselves.
Kangaroo That Hopped It Spotted In Garden
Austrian police investigating reports of an escaped kangaroo have been given concrete proof after a local woman managed to photograph the animal in her back garden.
First Images Of Eurovision Stage
These are the first images of the design for the Eurovision Song Contest stage unveiled in the Austrian capital Vienna which is the home of last year's winner Conchita Wurst.
Penelope Cruz To Play The Ice Cream Killer
The memoirs of the woman dubbed the Ice Cream Killer after she shot dead two ex-lovers before hacking the bodies up with a chainsaw and telling neighbours the noise was a new ice cream machine are due to be turned into a blockbuster with Penelope Cruz playing the lead role.
Our ombudsman David Rogers will try and help solve some of the problems from lazy civil servants through to incompetent companies – and at the very least the worst transgressors will end up in our weekly special report. |
As resistance to high-stakes testing has grown across the country, some states have experimented with non-testing based and local models of accountability reform. Texas is one such state, which implemented the ‘Community ... |
Corbett The Baagh Spa & Resort
MICE
Plan your important meetings, family gatherings, corporate gatherings and more with us.
Corbett the Baagh Spa & Resort presents a best-in-class, well-equipped conference hall covering an area of 1,800 sq ft in Jim Corbett, suitable for big corporates as well as small businesses, families and friends.
If you are looking for a resort in Jim Corbett with a conference hall for your meetings, conferences, parties or any event, Corbett the Baagh can be the best option for you.
Apart from your corporate business meetings in Jim Corbett, celebrate your private parties and special events like birthdays, anniversaries and special dinners with your family and friends in the wilderness of Jim Corbett, only at Corbett the Baagh.
We provide facilities like an overhead LCD projector, laptop, audio system with speakers, slide projector, white/flipchart board, markers (black or coloured), microphones (cordless and collar) and more at no extra cost. Other facilities like water bottles, snacks, buffet, tea/coffee, refreshments and more can also be arranged.
Apart from this, you can also enjoy a wildlife movie show at our conference room. |
Quantifying pulmonary hypertension in ventilated infants with bronchiolitis: a pilot study.
To determine whether previously well infants ventilated for bronchiolitis have sufficiently elevated pulmonary artery pressures (PAP) to warrant a trial of inhaled nitric oxide (iNO) therapy. Consecutive infants mechanically ventilated for bronchiolitis were offered Doppler echocardiography between 24 and 72 h after intubation. Patients were divided into those with normal PAP, mild, moderate or severe pulmonary hypertension. Patients with at least moderate pulmonary hypertension (systolic PAP > 30 mmHg and > 50% of systemic systolic arterial pressure) were offered a 60 min trial of iNO therapy at a concentration of 20 ppm and repeat echocardiography. Six infants (four preterm, two term) were studied at a mean corrected age of 13 weeks (4, 24). Respiratory syncytial virus was confirmed on immunofluorescence of nasal secretions in five of six subjects (84%). Echocardiography was performed (mean, 5.5 days) (95%CI 3.8-7.3) after the onset of symptoms. All patients had structurally normal hearts. Four patients had mild pulmonary artery hypertension and two had normal pulmonary artery pressures. None of the patients qualified for iNO therapy. The mean (range) duration of intubation was 14 days (9-19) and the duration of hospitalization was 28 days (14-42). All patients recovered. Significant pulmonary hypertension should not be presumed in previously well preterm and term infants ventilated for bronchiolitis. |
Iniencephaly with cyclopia (a case report).
Iniencephaly is a rare neural tube defect. We report a rare association of iniencephaly with cyclopia, probably the third such report in the literature. |
Q:
Selected Spinner value not displayed in TextView after going back to the same page
I am facing an issue where the TextView does not retain the value selected from the Spinner's list of values. After navigating back to the same page, it keeps reverting to the default value instead of the value the user selected.
Here is the code that I have written. Thank you.
protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    setContentView(R.layout.activity_notifications);

    final Switch mySwitch = (Switch) findViewById(R.id.switchNot);
    final Spinner mySpin = (Spinner) findViewById(R.id.spinNot);
    final TextView tvNot = (TextView) findViewById(R.id.tvTime);

    mySwitch.setOnClickListener(new View.OnClickListener() {
        SharedPreferences.Editor editor = getSharedPreferences("mapp.com.sg.sadtrial", MODE_PRIVATE).edit();

        @Override
        public void onClick(View v) {
            if (mySwitch.isChecked()) {
                editor.putBoolean("Switch", true);
                editor.commit();
                editor.putBoolean("Spinner", true);
                editor.commit();
                mySpin.setEnabled(true);
            } else {
                editor.putBoolean("Switch", false);
                editor.commit();
                editor.putBoolean("Spinner", false);
                editor.commit();
                mySpin.setEnabled(false);
            }
        }
    });

    final SharedPreferences sharedPrefs =
            getSharedPreferences("mapp.com.sg.sadtrial", MODE_PRIVATE);
    mySwitch.setChecked(sharedPrefs.getBoolean("Switch", false));
    mySpin.setEnabled(sharedPrefs.getBoolean("Spinner", false));

    mySpin.setOnItemSelectedListener(new AdapterView.OnItemSelectedListener() {
        SharedPreferences.Editor editor = getSharedPreferences("mapp.com.sg.sadtrial", MODE_PRIVATE).edit();

        @Override
        public void onItemSelected(AdapterView<?> parent, View view, int position, long id) {
            switch (position) {
                case 0:
                    tvNot.setText(mySpin.getSelectedItem().toString());
                    editor.putString("Option", mySpin.getSelectedItem().toString());
                    editor.commit();
                    break;
                case 1:
                    tvNot.setText(mySpin.getSelectedItem().toString());
                    editor.putString("Option", mySpin.getSelectedItem().toString());
                    editor.commit();
                    break;
                case 2:
                    tvNot.setText(mySpin.getSelectedItem().toString());
                    editor.putString("Option", mySpin.getSelectedItem().toString());
                    editor.commit();
                    break;
                case 3:
                    tvNot.setText(mySpin.getSelectedItem().toString());
                    editor.putString("Option", mySpin.getSelectedItem().toString());
                    editor.commit();
                    break;
            }
        }

        @Override
        public void onNothingSelected(AdapterView<?> parent) {
        }
    });

    tvNot.setText(sharedPrefs.getString("Option", mySpin.getSelectedItem().toString()));
}
The picture on the left shows the value displayed in the TextView after the user has selected from the Spinner.
The picture on the right shows the TextView value returning to the default and not retaining the user's choice.
A:
Every time you return to this activity, the Spinner resets to its default position (0), so the TextView ends up showing the default item instead of the value the user saved. Instead, read the saved value back and restore the Spinner's selection, as shown below.
Instead of the line below
tvNot.setText(sharedPrefs.getString("Option", mySpin.getSelectedItem().toString()));
change it like this:
String selectedValue = sharedPrefs.getString("Option",
        mySpin.getSelectedItem().toString());
if (!TextUtils.isEmpty(selectedValue)) {   // TextUtils is android.text.TextUtils
    // Look up the saved value in the adapter and restore the selection.
    // setSelection() will then trigger onItemSelected(), which also updates tvNot.
    for (int i = 0; i < mySpin.getAdapter().getCount(); i++) {
        String value = (String) mySpin.getAdapter().getItem(i);
        if (selectedValue.equalsIgnoreCase(value)) {
            mySpin.setSelection(i);
            break;
        }
    }
}
|
1. Field of the Invention
The present invention relates to novel heat resistant polymers which are copolymerizates of a plurality of bis(maleimides), one of which is a bis(maleimide) containing a diorganopolysiloxane bridge in its molecular structure, aromatic diamines, and, optionally, other comonomers. This invention also relates to the preparation of such polymers.
2. Description of the Prior Art
It is known to this art (see French Patent FR-A-No. 1,555,564) that heat-resistant polymers may be prepared by reacting an N,N'-bis(imide) of an unsaturated dicarboxylic acid, such as, for example, an N,N'-bis(maleimide), with certain aromatic diprimary diamines. These polymers, which exhibit exceptional heat resistance, may be used for the manufacture of molded parts, laminates or shaped articles, with a view to the widest diversity of applications. |
The UK economy and society face well-known and long-standing challenges which threaten the health, wealth and well-being of UK citizens.
Climate change is possibly the most destructive rising force, but there are also the challenges raised by the current technological revolution (necessitating business evolution to avoid business extinctions), high household debt (the most likely cause of the next UK recession), the NHS only just surviving (due to low funding and ever-increasing demand for services), poor education (the UK having the 'most illiterate teenagers in the developed world'), rampant inequality and one of the worst measured qualities of life for citizens in Europe.
If we boil this down to the fundamentals (rather than fixing problems, the UK's political leaders choose to add more problems), it seems not unreasonable to suggest that they are providing idiotic leadership.
So why are UK politicians deliberately damaging the UK?
'The will of the people' is one recurring excuse (explored by this site here), but it originates in a very dishonest form of populism that both leaders have been peddling for a year, wherein they a) devalue the views of the half of the population who clearly disagree with them (who become persona non grata, irrelevant to the politicians' ambitions) and b) warp the EU referendum result into an outcome (leaving the single market) that evidence shows a strong majority of people do not want.
That's not in doubt; most UK people wish to stay in the single market. Right from the start of the EU 'debate', Brexiteers like Gove, Hannan and the official leave EU campaign sold Brexit on the basis of the UK staying in the single market; a ComRes/BBC poll found that 66% of the UK population want to stay in the single market, and NatCen research found that 90% believe the UK should stay. The public desire to stay in the single market is clear, and impact predictions show the people to be wise on this point (the harder the Brexit, the worse the outcomes for the UK).
So the Will of the people is clearly just an excuse, and we need to look elsewhere for each politician's motivations. Let's start with Mr Corbyn.
Corbyn's goals
Corbyn's main complaints about the EU seem to be that
EU workers come to the UK and work for lower wages than he would like UK workers to be paid,
that EU membership might stop his hopes of using public money to re-invigorate manufacturing (via state aid), and
that he may not be able to re-nationalise utilities and transport (as he said on Andrew Marr's programme, "I think we have to be quite careful about the powers we need as national governments" and "[Single Market Membership] has within it restrictions on state aid and state spending").
There's a few obvious problems with these goals.
Having a highly motivated, mobile workforce available for just-in-time, low-paid manual labour is a huge boon to any economy (in 2014, research estimated that EU workers "gave the economy a £4.4 billion boost" per annum, as reported in the Telegraph), and it is genuinely not clear which UK workers would replace the EU ones. The UK already has low unemployment, and it is very difficult (often disastrously so when week-to-week existence is hand to mouth) for unemployed UK citizens to stop benefits, work a few days, and then restart benefits.
So the business models of UK businesses that have been relying on EU migrant labour for decades (e.g. the recently reported 11% fall in EU workers going to Cornwall) are being destroyed (leading to businesses failing), and the hostile environment for EU workers is affecting more than low-skilled temporary staff, with firmly established, highly skilled UK-resident EU workers leaving at an alarming rate (one NHS-related example here).
Brexit is also damaging the sectors of the economy (services and consumer spending in particular) that the UK must rely on, until Mr Corbyn constructs his manufacturing nirvana, leaving less wriggle room for any such innovation.
All of which seems very likely to lead to lower productivity (already poor per UK worker, compared to leading EU countries) and lower tax take.
Technological progress since Corbyn entered politics (he has been an MP since the early 1980s) means that an isolated country can't perform as efficiently as one that is a member of a frictionless trade bloc: re-organising supply chains to be smooth and quick has been the engine - the obvious, recurring go-to 'quick win' - for profits in many sectors over the last few decades, and leaving the EU is likely to force inefficiencies onto supply chains (thereby disadvantaging the UK manufacturers that Labour is supposed to be helping: he might well simply be creating serious inefficiencies into which a Labour government would uselessly pump public money).
There's also a fear that Corbyn wants to create state monopolies (nationalisations do happen in EU member countries, but constructing state monopolies is much more difficult whilst a member, so it is possible monopolies are his motive for leaving) which, alongside increased union influence, would threaten the high-productivity tools that technology and globalisation have granted to businesses (agility, flexibility, just-in-time supply chains, gig-based employment: freedom to make profit, in other words, which might become threatened in a more controlled socialist UK). Needless to say, social justice is crucial (and arguably has been neglected under the Tories), but it has to co-exist with profit generation and efficiency.
A detailed study of the steel industry's issues and opportunities is out of scope for this article (there's a useful starter here), but it is clear that the industry could be helped, whilst the UK is inside the EU, using a variety of tools which do not breach EU rules. Labour therefore seem to be trying to use a sledgehammer (leaving the EU) to crack a nut explosively when much safer options exist.
On the whole, these types of goals seem to be unnecessarily risky - the damage being done by Brexit will make any national improvements that much harder to implement (like attempting to rescue a person from quicksand by first pushing them much deeper).
One other significant risk is that by co-operating with the Tories to enable Brexit*, Labour has opened the door to a hard right bonfire of regulation, rights, and responsible business principles.
* In case an explanation is needed: an actively campaigning pro-EU opposition leader would likely have created a pro-EU outcome in the referendum, and certainly wouldn't have recklessly supported Article 50.
It has been a huge gamble, with no visible signs that Corbyn understands the risks he's taken (although some of the unions have noticed, here for example), and the UK might end up swinging - at each change of parliament - from extreme left to extreme right and back again. EU membership provides stability by taking the edge off the possible extremes, which is good for businesses, families and the economy; Corbyn has helped move the UK much closer to chaos.
He does occasionally make more sensible noises than the Tories (he recently said the UK might stay in the single market), but the Labour party's apparent attitude to Brexit ('shhh, don't mention it or we will lose supporters') means it is very hard to judge when they are being honest. On the bright side, though, at least Corbyn's approach can be understood using a traditional political ideology (fairly extreme socialism).
May has proven much more resistant to analysis.
May's motivations
When May was appointed to the PM role, a feeling of relief was tangible in the UK. Some of the potential candidates for Tory leader seemed to be terrifyingly incompetent or media-baron lapdogs, but she had campaigned (slightly) for Remain, so maybe she would be a sensible administrator who would help the country calm down?
Not so much, it turned out. "Brexit means Brexit", she unhelpfully parroted, and then re-tasked the phrase "the will of the people" into an excuse for various attempts to subvert UK democracy through a range of badly implemented innovations. What could possibly have been motivating her chaotic approach?
It was at first possible to presume, as has been suggested by various commentators, that she was just the puppet of the half-hidden right wing forces who are cited as the driving forces behind Brexit (planning to profit from the chaos (money makes money, especially in times of change), to increase their own influence, or to avoid EU legislation on tax avoidance).
The strategic approach adopted to negotiations (bluffing with a hand that everyone knew was awful, angrily blaming the EU for anything that went wrong in the UK) meant the most logical conclusion was that May's government was genuinely going for no deal (to enable the hard right's zero tariff economy) with blame for the subsequent destruction being placed squarely - with the assistance of the most rabid tabloids - at the door of the EU.
That's still possible, but people with simple beliefs ("deliver zero tariffs, job done") are much more confident than May in debates. She argues weakly, if at all (relying largely, it seems, on memorised sound bites), with none of the (increasingly) deranged conviction of the more public spokespeople for this leave-at-any-cost approach, and when put under pressure (pre-2017 election) she responded with a scattergun approach of extreme right-wing policies alongside attempts to take the more left-leaning centre ground.
Those far-right forces are obviously a pressure on May (as are the unions on Corbyn), but there's another theory which might explain some of the more random or chaotic behaviours: is the UK Prime Minister, at heart, an old-fashioned little Englander, deep down motivated by the impossible task of restoring the 'old days' when everything, according to folk memory, was better?
Anthony Barnett has published interesting work in which he perceives an 'intellectual void' in May (particularly ideologically), and he suggests that the void has been filled with a world view based firmly on the very limited perspectives advocated by the tabloid press, particularly, he argues, that of the Daily Mail.
A conglomeration of those world views might lead to a (very generalised, from Barnett's work) personal belief that "Britain is inherently magnificent, British people unassailably superior to other nationalities, especially when the British people respect completely the British government, and if it weren't for outsiders trying to pull the UK down, the UK would definitely soar magnificently into global leadership and world domination, just like the good old days".
If you imagine Brexit is being led by someone influenced strongly by those views, which simply do not stand up to any evidence-based examination (meaning rational arguments for them in debates, as May might have discovered, simply cannot be formed), then the results you'd expect would be roughly what the UK government has achieved: with no discernible plan they issue vacuous threats, pursue abortive charm offensives, deploy banal assertions that everything will be fine and Brexit will be easy, and then quietly backtrack on every posture as reality dismantles each illusion that they construct.
Each person is a multifaceted jewel of course, composed of infinite influences, so this article is overgeneralising out of necessity. However, adopting some combination of those two factors - shadowy, influential, right-wing persuaders and a tabloid-based world view - seems to have value, as it moves us from a position of reactive surprise after each action by the UK Government (wherein every move they make seems to be satirical) to one where their actions become more comprehensible.
Conclusion
We should take time to acknowledge that both May and Corbyn are in complex positions for which neither of them is perfectly suited, working within a system (UK democracy) which has obviously become corrupted, and having been inside that system - between them for circa 60 years - for so long that they don't seem to notice how morally deficient (a not uncommon situation for UK politicians) and ill-advised their actions are. So we should remember to have compassion for them: to a significant degree, they are doing their best rather than deliberately trying to hurt the UK.
They are hurting it, though, exposing the UK to certain and unnecessary economic, cultural and societal damage through reckless negligence (collaborating to instigate Article 50), manipulation of public opinion (both of them claiming more mandate for change than the referendum could possibly have given), attempts to avoid any debate that would challenge their extreme perspectives on Europe (denying a Brexit vote at the Labour party conference, for example, or the Tory refusal to issue briefing documents) and attempts to line up fundamental ideological changes at the same time as invoking all of this Brexit damage.
And the national pain is at imminent risk - as uncertainty grows, the workforce leaves, investment and consumer spending stall and businesses relocate - of escalating with exponential speed.
If these negative impacts of Brexit are to be curtailed, as the key exit bill debate and vote looms, UK MPs of all parties are going to have to go outside the party-political box to seek sensible, balanced voices on Brexit. The alternative - MPs in slavish pursuit of their leaders' outdated political dogma (left and right) - seems likely to end in disaster.
|
Introduction {#sec1-1}
============
Online formative assessments (OFAs) have been increasingly recognised in medical education as resources that promote self-directed learning. With the shift from lecture-based to student-based instruction, there is a need to stimulate inquiry and actively engage the student in the learning process. Self-directed learning has been identified as a promising tool in preparing students for self-study and continuing professional education ([@ref1]). Self-directed learners are able to self-appraise their work, identify their strengths and weaknesses and seek, accept and use feedback from others in order to improve their performance ([@ref2]). Formative assessment can be defined as one form of self-assessment by the student, which intends to provide feedback to both the teacher and the student ([@ref3]). The faculty takes cognisance of the feedback to modify teaching and learning to meet the students' needs, and the students use it to identify their learning needs. As designers of medical curricula look for strategies to invigorate teaching and learning delivery methods, formative assessments are considered a means of ensuring deeper learning and understanding ([@ref4]). Bandura proposed that repeated exposure to successful testing experiences in students with increased anxiety will promote self-efficacy for subsequent tests ([@ref5]). The use of formative assessments, with no evaluation stress on the students, is the ideal exposure, which has been shown to increase the positive experience in future testing events by reducing cognitive stress and anxiety ([@ref6]).
Formative assessment can range from informal comments made at the end of a case presentation on a ward round to highly complex and formally structured computer-based learning tools ([@ref7]). Within the clinical context, these formative assessments are used to encourage appropriate professional behaviour, to develop clinical competence and to stimulate acquisition of knowledge and clinical reasoning. From the clinician's perspective, time constraints are likely to limit the ability to provide a comprehensive formative assessment task to complement the learning ([@ref8]). Considering the clinician's time, production of formative assessment materials for the online medium is an expensive exercise. However, if time, effort and money are to be spent in this direction, there must be evidence of its benefits and cost-effectiveness ([@ref9]).
There are studies demonstrating the benefits of web based formative assessment that students voluntarily take part in while preparing for a summative exam ([@ref10]-[@ref12]). The potential limitations with paper based formative assessments include time constraint for individualised feedback and the need for the students to be gathered at specific time and place to receive the feedback, which becomes a tedious task in the presence of large class size ([@ref13]). There is, therefore, an argument to move towards online formative assessments (OFAs).
The postulated advantages of online formative assessments (OFAs) include easy access and availability, utilising interactive features such as images, provision of immediate and individualized feedback, along with the scores allowing timely interventions ([@ref12],[@ref14]). The formative assessments are perceived to assist the students in terms of their extent of understanding the course material and therefore planning their subsequent learning activities ([@ref15]).
Many studies have investigated the effects of OFAs and the improvement of scores on the subsequent summative assessment. The mechanisms proposed are related to increasing student engagement, increasing time on task, preventing procrastination and identifying learning deficiencies through the formative feedback ([@ref12],[@ref14],[@ref15]). Although the literature shows that the students participating in the OFAs achieved more, often in the form of a grade, not all students tend to participate in such assessments ([@ref16],[@ref17]). This highlights the need to identify the reasons why some students do or do not use OFAs despite the demonstrated positive effects.
Our study aims to explore the educational value of OFAs in the department of Obstetrics and Gynaecology. Our hypothesis is that OFAs will have a positive impact on the summative examination scores of the students. Based on this research question, online formative assessments were created using Articulate Quiz maker software; students were provided with unrestricted access and elected to participate in these assessments voluntarily. It is anticipated that our study will identify the reasons why students do or do not participate in OFAs (OFA users and non-OFA users). Our particular interest is to identify whether there is any difference in the summative performance of the students between the OFA users and non-OFA users. Insights into these aspects will provide information on the mechanisms that explain the relation between the OFAs and the final summative examinations, and on the personal learning styles and learning preferences of students. This would assist in formulating guidelines on the design and implementation of OFAs aligned with the curricular learning outcomes and students' learning needs.
Methods {#sec1-2}
=======
*The context* {#sec2-1}
-------------
Obstetrics and Gynaecology is one of the major disciplines, with the students spending a total of 12 weeks in the posting: 7 weeks in year 4 and 5 weeks in year 5, the senior clerkship year. Currently, there are no online formative assessments in year 4. The summative/end-of-posting (EOP) examination in year 4 includes 30 one best answer (OBA) questions, 3 short answer questions (SAQ) and 3 objective structured practical examination (OSPE) questions. This evaluation is conducted at the end of the 7 weeks of the posting.
The participants in this study were semester 8 students (n=90) in their 7-week Obstetrics and Gynaecology posting. This is a cross-sectional study conducted among fourth-year students during their seven-week posting in Obstetrics and Gynaecology. A convenience sample was taken and the students' participation was voluntary. Five sets of online formative assessments (OFAs) in the format of one best answers (OBA), objective structured practical examination (OSPE) and short answer questions (SAQ) with feedback were delivered over five weeks through the online portal. The OFAs were prepared using the Articulate Quiz maker software by the faculty teaching the course, thereby establishing content validity. This also ensured that the style and difficulty of the questions were similar in both the formative and summative assessments. The questions in the formative assessments covered the core content of the course syllabus, which is assessed during summative examinations. The assessments were delivered through the university Moodle, to which every student has access with an individual username and password. The question format is similar to the summative examination and includes one best answer (OBA), OSPE (identifying a detail (hot spot) on an image or a drag-and-drop sequence) and SAQ items (OBA, n=40; OSPE, n=7; SAQ, n=3). Students were made aware of the OFA's through the course syllabus and class announcements. The students' participation was voluntary, and taking the OFA's was based on their own initiative, either in the computer lab or on their own personal computers.
The assessments were organized into five sets, with one (1/5) becoming available online each week starting from the second week of the posting, and each was open with unrestricted access 24 hours a day, 7 days a week. By the end of the 6^th^ week all five assessments (5/5) were available to the students. Automated feedback was given by the computer program after every attempt at answering a set of questions. The students' performance on the first attempt at each assessment was reported, and the mean marks of the 5 tests with the SD were computed.
This study was exempted from ethical clearance as the online formative assessments are an ongoing curriculum activity and this pilot project was designed as quality assurance of the curriculum, the results of which will be analysed for feasibility of implementation across the clinical specialities.
*Data analysis* {#sec2-2}
---------------
The data collected were tabulated and analysed using the Statistical Package for the Social Sciences (SPSS) version 17.0. In this study, a p\<0.05 was considered statistically significant. The effectiveness of the OFA's was assessed by comparing the summative examination scores of students who used the OFA's with those who did not ('non-OFA's') using Student's t-test. The relationship between the scores of students using the OFA's and the summative assessment was determined by Pearson's correlation coefficient.
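For readers who want to reproduce this kind of comparison outside SPSS, the two tests named above can be run with standard open-source tools. The snippet below is only an illustrative sketch: the score arrays are hypothetical placeholders, not the study's data, and the original analysis was done in SPSS.

```python
# Illustrative sketch of the two tests described above, using SciPy and
# hypothetical score arrays (not the study's actual data or SPSS output).
import numpy as np
from scipy import stats

# Hypothetical summative OBA scores (%) for OFA users and non-OFA users
ofa_users = np.array([64, 70, 58, 66, 72, 61, 67, 63])
non_ofa_users = np.array([66, 69, 60, 65, 71, 62, 68, 64])

# Independent-samples (Student's) t-test comparing the two groups
t_stat, p_val = stats.ttest_ind(ofa_users, non_ofa_users)
print(f"t = {t_stat:.2f}, p = {p_val:.3f}")  # p < 0.05 would be significant

# Pearson's correlation between formative (OFA) scores and summative scores
# for the OFA users (hypothetical paired scores)
formative_scores = np.array([50, 62, 45, 58, 66, 48, 60, 55])
r, p_corr = stats.pearsonr(formative_scores, ofa_users)
print(f"Pearson r = {r:.2f}, p = {p_corr:.3f}")
```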
The pattern and frequency of access to the modules were analysed by week and time-of-day usage. Satisfaction was surveyed at the end of the posting using a survey questionnaire. The survey questionnaire has eleven closed-ended items, with responses obtained on a five-point Likert scale. The reliability and internal consistency of the test items were measured by Cronbach's alpha. Factor analysis was performed to check the quality and integrity of the questionnaire and to see whether the factors fit together conceptually; it was based on the Kaiser-Meyer-Olkin (KMO) measure and Bartlett\'s test.
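Similarly, the internal-consistency measure mentioned above can be computed directly from the item covariance structure. The sketch below uses simulated Likert responses purely for illustration; the study's own item-level data are not reproduced here.

```python
# Illustrative computation of Cronbach's alpha for an 11-item Likert survey.
# Responses are simulated; this is not the study's data.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: array of shape (n_respondents, n_items)."""
    k = items.shape[1]                                # number of items
    item_variances = items.var(axis=0, ddof=1)        # per-item variance
    total_variance = items.sum(axis=1).var(ddof=1)    # variance of total score
    return (k / (k - 1)) * (1.0 - item_variances.sum() / total_variance)

# Simulate 45 respondents answering 11 items on a 1-5 Likert scale
rng = np.random.default_rng(42)
base = rng.integers(2, 5, size=(45, 1))               # each respondent's tendency
responses = np.clip(base + rng.integers(-1, 2, size=(45, 11)), 1, 5)

print(f"Cronbach's alpha = {cronbach_alpha(responses):.2f}")
```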
Results {#sec1-3}
=======
*Comparing the summative scores of the OFAs and Non OFAs users* {#sec3-1}
---------------------------------------------------------------
Of the total of 92 students, 48.9% (n=45) participated in the assessments (OFA users) and 51.08% (n=47) did not (Non OFA users). The end-of-posting summative examination marks for OFA users and Non OFA users showed no significant difference, with mean scores of 64% & 66% in the OBA (p=0.902), 54% & 50% in the OSPE (p=0.633) and 58% & 57% in the SAQ (p=0.248). The Non OFA users demonstrated similar performance to the OFA users in all components of the summative examinations ([Table 1](#T1){ref-type="table"}).
######
Summative examination scores for students who used OFA's and did not use ('Non-OFA's') compared by independent student t-test
OFA/Non OFA users EOP/Summative OBA Mean±SD EOP/Summative OSPE Mean±SD EOP/Summative SAQ Mean±SD
------------------------------ --------------------------- ---------------------------- ---------------------------
OFA users (n=45, 48.9%) 64.66±9.5 54.81±12 58.25±9.3
Non OFA users (n=47, 51.08%) 66.38±8.9 50.24±13 57.05±7.6
p 0.902 0.633 0.248
However, there was a moderate positive correlation (r=0.46) between the mean scores of the OFA users and the end-of-posting summative examination in the one best answer (OBA) component (p\<0.001). On testing the summative performance on six similar-concept questions of the formative assessments with different content, the OFA users performed better, with 85.4% correct responses compared to 46.8% in the formative assessments. Among the Non OFA users, 62% had correct responses to these questions ([Table 2](#T2){ref-type="table"}).
######
Total number of OFA users who answered correct/ Missed of similar concept questions in summative assessments
Concepts tested Missed in formative assessment (OFA users n=45) 46.8% Correct in summative assessment (OFA users n=45) 85.4% Missed in both formative and summative assessment (OFA users n=45)
--------------------------------------- ------------------------------------------------------- -------------------------------------------------------- --------------------------------------------------------------------
1\. Induction of labour protocol 31 (68.8%) 42 (93.3%) 3 (6.6%)
2\. Heart disease risk stratification 12 (26.6%) 40 (88.8%) 5 (11%)
3\. IUGR monitoring 24 (53.3%) 36 (80%) 9 (20%)
4\. Diagnosis of pre-eclampsia 12 (26.6%) 41 (91.1%) 3 (6.6%)
*Pattern and frequency of usage of OFA's* {#sec3-2}
-----------------------------------------
The five assessments were viewed a total of 247 times by 45 students (48.9%) in 5 weeks' time. The maximum views were in the fourth week of the posting. Mean time taken for completion was 11:13 minutes. The weekends, Friday (37.3%) and Saturday (48.4%) were the days of maximum completion of the assessments. There was a significant after hour use of the assessments (8:00pm-11:00pm) and an increased access close to summative examination.
*Perception of the intervention by survey questionnaire* {#sec3-3}
--------------------------------------------------------
### *Quantitative analysis of the survey* {#sec3-3-1}
Cronbach's alpha (an estimate of internal consistency) was 0.96 for the present study. A factor analysis of the 11 items was performed. The Kaiser-Meyer-Olkin measure of sampling adequacy was 0.83, above the recommended value of 0.6, and Bartlett's test of sphericity was significant (χ^2^=615.64, p\<0.001). This suggested that each item shared some common variance with the other items and therefore reliability was achieved ([Table 3](#T3){ref-type="table"}).
######
Perception of the OFA assessment structure and content
Statement SA/A Uncertain SDA/DA
-------------------------------------------------------------------------------------------- ------------ ------------ ----------
The OFA's appropriately tested the intended learning objectives of the task based learning 26 (57.8%) 15 (33.3%) 4 (8.9%)
The OFA'S feedback provided is timely and relevant 25 (55.6%) 16 (35.6%) 4 (8.9%)
The OFA's feedback presents new knowledge in the content areas 27 (60%) 16 (35.6%) 2 (4.4%)
The OFA's helped me to identify my area of weakness 27 (60%) 16 (35.6) 2 (4.4%)
The OFA's are effective learning tools 27 (60%) 16 (35.6%) 2 (4.4%)
The OFA's motivated me to study 28 (62.2%) 15 (33.3%) 2 (4.4%)
The OFA's has Improved my ability of self-assessment of performance 28 (62.2%) 15 (33.3%) 2 (4.4%)
I could complete the OFA's on time 26 (57.8%) 15 (33.3%) 4 (8.9%)
There was no problem with log in and access to the OFA's 31 (68.1%) 13 (28.1%) 1 (2.2%)
I was able to navigate through the OFA's effectively 30 (66.7%) 14 (33.1%) 2 (4.4%)
The OBA (One best answer) images of the OFA's are clear 27 (60%) 16 (35.6%) 2 (4.4%)
Overall, 44.4% of students rated the online formative assessments as good, 31.1% as very good and 24.4% as satisfactory. About 64.2% felt that the OFA's fulfilled its stated aims and objectives and 77.1% felt that they would persuade their peers to participate in the OFA's. About 62.2% perceived that participation in the assessments had improved their ability of self-assessment of performance. About 60% strongly agreed that OFA's had presented new knowledge to the content ([Figure 1](#JAMP-6-51-g001.tif){ref-type="fig"}).
![Perception of the intervention, Quantitative analysis by survey questionnaire](JAMP-6-51-g001){#JAMP-6-51-g001.tif}
Discussion {#sec1-4}
==========
In the pursuit of new strategies to deliver the curriculum, our shift is towards meaningful teaching and learning tools that can challenge students to 'stretch further than they think they can' ([@ref18]). Assessments are one way of challenging students that enhances engagement, as long as the challenges are associated with swift, focused feedback. Formative assessments fit this model well, having the ability to foster student engagement and deliver purposeful learning ([@ref19]). However, the emphasis must also be on assessing and providing evidence of the educational gains of such creative strategies. The primary purpose of our study was to determine whether the OFAs would improve summative exam scores. There are studies reporting that formative quizzes enhance summative exam performance in undergraduate medical and dental students ([@ref20]-[@ref23]). However, there are also a few research findings suggesting that formative assessments do not enhance summative exam scores ([@ref24]-[@ref26]).
The results of our study showed that mean summative exam scores for OFA users and non-OFA users were almost equal. The non-OFA users demonstrated similar performance as the OFA users in all the components of the summative examinations. There was a moderate correlation of mean scores of the one best answers component (OBA) of the formative assessments with the summative assessments. Our study did not demonstrate statistically significant improved performance of the summative assessments between OFA users and non-OFA users. These findings contradict the concept that retrieval practice, quizzing in particular, directly boosts exam outcomes, and overall academic performance ([@ref27]). However, the OFA users performed better on similar concepts questions compared to the non-OFA users in the summative assessments. The format and content of the OFA's material is similar although they did not mirror the summative assessments, so rote learning is not a reasonable conclusion to draw.
In our study only about half of the students (48.9%) participated in the OFA's. The low participation is perhaps related to the students' motivation. The key motivating factors for assessments are the perceived relevance of these assessments to preparing them for high-stakes exams, peer influences and teacher enthusiasm ([@ref28]). While the first two items are within the realm of students' strategic considerations of what to learn that would benefit them, the third item is an external influence related to teacher factors: teachers, due to busy clinical duties, may not find the necessary time to build the enthusiasm and support system. The students who participated may have been more curious and motivated, with better study habits; these traits, rather than the effect of the resource itself, could have contributed to the satisfactory performance outcomes.
Nonetheless, one study reported there was no effect of online formative assessments on the students' final assessments as measured by their entry grade point average (GPA) ([@ref29]). However, the design of our study could not account for such a causal relationship, which would mandate randomisation of students into OFA and non-OFA groups and would deprive one group of the opportunity to take formative assessments during the trial period. While it could be argued that the aim of formative assessment is not so much to raise the standards of attainment as to foster the spirit of learning, it appears that OFA's are perceived as a useful resource that enables students to self-regulate their learning process ([@ref9]). According to Sadler, and to Hattie and Timperley, the three factors that motivate students to use formative assessments are (a) feed up, (b) feedback and (c) feed forward ([@ref30],[@ref31]). Our study confirms the finding that students consider the feedback function to be an important reason to use the OFA's, as it helps them to check their understanding and guide their future learning directions, which is again the feed-forward function of the OFA's. Furthermore, the students perceived that the OFA's gave them an idea about what is expected of them in the summative assessment in terms of both content and form. Regarding the pattern of usage of the OFA's, there is a non-uniform temporal fluctuation in usage, which is more pronounced close to the summative examination dates, supporting the notion that students viewed the OFA's as learning tools that would prepare them for the high-stakes exams. This supports the feed-up function of OFA's ([@ref31],[@ref32]). However, with our study findings suggesting that multiple attempts on OFAs have not resulted in increased performance in the summative exams, it can be proposed that summative examination performance is primarily influenced by the inherent properties of the students, rather than by the salient effects of formative assessment with feedback itself.
Implications for future research and practice {#sec4-1}
---------------------------------------------
The evidence from the students' feedback is encouraging enough to regard OFA's as valuable learning tools. While considerable time and effort were directed at generating these OFA's, the limited uptake by the students calls for reflection on the teacher's role in stimulating motivation and direction for student-centred learning. The findings of the study have implications for postgraduates and specialist trainees, who have formidable time constraints; well-designed OFA's will be of considerable benefit for non-threatening feedback on their knowledge and clinical decision-making. From a future research perspective, it is worth determining whether the learning benefits of OFA's for junior medical students (semester 8), which we have demonstrated, persist into the senior clerkship of medical programs.
Limitations {#sec4-2}
-----------
One of the limitations of the study is that we did not explore the reasons why students elected not to participate in the assessments. The small sample size precludes the generalisation of our results. Ours was a convenience sample of all students in the Obstetrics & Gynaecology rotation in year 4. It is also possible that the observed effects are substantially influenced by the speciality, and there is a need to examine the effects in larger samples and across other specialities before we conclude whether the OFA's are robust enough as educational interventions.
Conclusions {#sec1-5}
===========
Our experience with online formative assessments is a demonstration of utilising technology to supplement traditional assessments and provide an additional learning platform for the students. Although we cannot conclusively provide evidence that the OFA's improve the final summative scores, there is a moderate correlation of the mean scores of the one best answers component of the formative assessments with the summative assessments. The OFA users performed better than the non-OFA users on similar-concept questions with different content. This reinforces the idea that repeated assessment with questions related to those of exams, but focussing on various aspects of content, can produce consistent improvement in exam performance. The students perceived that the assessments improved their knowledge and their self-assessment ability to tailor learning to their individual learning needs and style. The usage pattern demonstrates the flexibility of an e-learning technology that allows the students to access the resource materials at a time convenient to them. The significantly higher use of the assessments towards the final weeks of the posting implies that the students were using the formative assessments as preparation for the exams rather than to facilitate the learning process. Our future efforts will be directed towards improving the assessment content to improve the students' overall learning experience, and towards including them in the development of online resources to optimize their future usage.
The authors would like to thank the International Medical University for their support in conducting this study, and Ms Aida, Senior Executive, E-learning, for her assistance in constructing and designing the online formative assessments.
**Conflict of Interest:**This work was supported by the FAIMER Fellowship, Mumbai. The authors disclose no other financial support or conflict of interest.
|
Charge-density distribution and electrostatic flexibility of ZIF-8 based on high-resolution X-ray diffraction data and periodic calculations.
The electron-density distribution in a prototypical porous coordination polymer ZIF-8 has been obtained in an approach combining high-resolution X-ray diffraction data and Invariom refinement. In addition, the periodic quantum-chemical calculation has been used to describe the theoretical density features of ZIF-8 in the same geometry (m1) and also in a "high-pressure" form of ZIF-8 (m2) characterized by conformational change with respect to the methylimidazolate linker. A thorough comparison of the electronic and electrostatic properties in two limiting structural forms of ZIF-8 proposes additional aspects on diffusion and adsorption processes occurring within the framework. The dimensions of the four-membered (FM) and six-membered (SM) apertures of the β cage are reliably determined from the total electron-density distribution. The analysis shows that FM in m2 becomes competitive in size to the SM aperture and should be considered for the diffusion of small molecules and cations. Bader's topological analysis (quantum theory of atoms in molecules) shows similar properties of both ZIF-8 forms. On the other hand, analysis of their electrostatic properties reveals tremendous differences. The study suggests exceptional electrostatic flexibility of the ZIF-8 framework, where small conformational changes lead to a significantly different electrostatic potential (EP) distribution, a feature that could be important for the function and dynamics of the ZIF-8 framework. The cavity surface in m1 contains 38 distinct regions with moderately positive, negative, or neutral EP and weakly positive EP in the cavity volume. In contrast to m1, the m2 form displays only two regions of different EP, with the positive one taking the whole cavity surface and the strong negative one localized entirely in the FM apertures. The EP in the cavity volume is also more positive than that in m1. A pronounced influence of the linker reorientation on the EP of the ZIF-8 forms is related to the high symmetry of the system and to an amplification of the electrostatic properties by cooperative effects of the proximally arranged structural fragments. |
Q:
Virtual Attribute in Rails 4
I have a Product model and I need to enter, in the _form view, the number of the product that an admin wants to insert.
I have another table, Supply, which holds the number of products,
so in my product table I don't have a quantity attribute; I just have the supply_id (which links my two tables, products and supplies).
Since I don't have the quantity in my product table, I used a virtual attribute on Product.
I had to change the new and edit views for products,
because in the new view I want the quantity field but in the edit view I don't (I use another view for that).
So I deleted the partial _form and created separate views.
I also had to arrange, in the products controller, for a set_quantita callback to run when I update a product, because I have to insert a "fake" value to fill params[:product][:quantita]. This is because I set a presence: true validation on the quantity virtual field in the Product model. I want to know whether this whole approach is right (it works, but I would like a suggestion about the design, because I don't like the fact that I give a fake value to fill the quantity field when I have to update a product).
Controller:
class ProductsController < ApplicationController
  include SavePicture

  before_action :set_product, only: [:show, :edit, :update, :destroy]
  before_action :set_quantita, only: [:update]

  ....

  def set_quantita
    params[:product][:quantita] = 2 # fake value for the update action
  end

  ....
end
Model:
class Product < ActiveRecord::Base
  belongs_to :supply, dependent: :destroy

  attr_accessor :quantita
  validates :quantita, presence: true
end
Can you tell me if there is a better way to fill params[:product][:quantita] for the update action? Because I don't like the fact that I give it the value of 2. Thank you.
A:
Instead of using attr_accessor you could create custom getter/setter methods on your Product model. Note that these are not backed by a regular instance attribute.
Also you can add a validation on the supply association instead of your virtual attribute.
class Product < ActiveRecord::Base
  belongs_to :supply, dependent: :destroy

  # Validate the presence of the associated Supply instead of the virtual attribute
  validates :supply, presence: true

  # getter method - expose the supply's stored number as the product quantity
  def quantita
    supply && supply.value
  end

  # setter method - create or update the associated Supply record
  def quantita=(val)
    if supply
      supply.update_attributes(value: val)
    else
      self.supply = Supply.create(value: val) # self. is needed, otherwise a local variable is assigned
    end
  end
end
end
In Ruby, attribute assignment like this is actually done by message passing:
product.quantita = 1
Will call product#quantita=, with 1 as the argument.
Another alternative is to use nested attributes for the supply.
class Product < ActiveRecord::Base
  belongs_to :supply, dependent: :destroy

  validates :supply, presence: true
  accepts_nested_attributes_for :supply
end
This means that Product accepts supply_attributes - a hash of attributes.
class ProductsController < ApplicationController
  # ...
  before_action :set_product, only: [:show, :edit, :update, :destroy]

  def create
    # will create both a Product and a Supply
    @product = Product.create(product_params)
  end

  def update
    # will update both Product and Supply
    @product.update(product_params)
  end

  private

  def product_params
    # Remember to whitelist (permit) the nested parameters!
    params.require(:product)
          .permit(:foo, supply_attributes: [:foo, :bar])
  end

  # ...
end
|
We can offer various dominoes game sets, with sizes ranging from 2,805 to 5,211. Different colors are available: white, red, blue, black, and ivory. We can manufacture products with customers' logos on them.
... ) PHTHALATE free; 3) BSCI approved. Wooden dominoes/giant dominoes/kids' game/wooden toy. 28 pcs dominoes packed in a black carry bag with colour ... It is fun for all ages with these giant-sized dominoes, which can be used both indoors and outdoors. To win ... score 100 points. One set includes 28 pcs wooden dominoes in a wooden box. Place all the dominoes face down on the table and ...
Test the water amount in petroleum products; the result is shown in percent. Technical Parameters: Volume of distilling flask: 500ml Heating power: 1000W Length of straight condensation tube: 250-3000mm |
---
abstract: 'We present late-time optical images and spectra of the Type IIn supernova SN 1986J. [*HST*]{} ACS/WFC images obtained in February 2003 show it to be still relatively bright with m$_{F606W}$ = 21.4 and m$_{F814W}$ = 20.0 mag. Compared against December 1994 [*HST*]{} WFPC2 images, SN 1986J shows a decline of only $\la$1 mag in brightness over eight years. Ground-based spectra taken in 1989, 1991 and 2007 show a 50% decline in H$\alpha$ emission between $1989-1991$ and an order of magnitude drop between $1991-2007$, along with the disappearance of He I line emissions during the period $1991-2007$. The object’s \[O I\] $\lambda\lambda$6300, 6364, \[O II\] $\lambda\lambda$7319, 7330 and \[O III\] $\lambda\lambda$4959, 5007 emission lines show two prominent peaks near $-$1000 and $-$3500 km s$^{-1}$, with the more blueshifted component declining significantly in strength between 1991 and 2007. The observed spectral evolution suggests two different origins for SN 1986J’s late-time optical emission: dense, shock-heated circumstellar material which gave rise to the initially bright H$\alpha$, He I, and \[N II\] $\lambda$5755 lines, and reverse-shock heated O-rich ejecta on the facing expanding hemisphere dominated by two large clumps generating two blueshifted emission peaks of \[O I\], \[O II\], and \[O III\] lines.'
author:
- 'Dan Milisavljevic, Robert A. Fesen, Bruno Leibundgut, and Robert P. Kirshner'
title: 'The Evolution of Late-time Optical Emission from SN 1986J'
---
Introduction
============
With a peak flux density of 128 mJy at 5 GHz, SN 1986J is one of the most radio-luminous supernovae ever detected [@Weiler90]. The supernova probably occurred early in $1983$ in the edge-on galaxy NGC 891, more than three years before its August 1986 discovery in the radio [@vanGorkom86; @Rupen87; @Bietenholz02]. With its optical outburst going unnoticed, the earliest optical detection showed the supernova at a magnitude of 18.4 in $R$ in January of 1984 [@Rupen87; @Kent87].
SN 1986J is classified as a Type IIn (see @Schlegel90) and has been compared with other luminous SNe IIn like SN 1988Z and SN 1995N. Optical spectra of SN 1986J obtained in 1986 showed prominent and narrow ($\Delta v
\la$ 700 km s$^{-1}$) H$\alpha$ emission with no broad component [@Rupen87]. Emission lines of \[O I\], \[O II\], and \[O III\] had somewhat larger widths of 1000 $< \Delta v <$ 2000 km s$^{-1}$. Many narrow and weak emission lines including those from helium were also observed. Spectra taken three years later in 1989 showed that the dominant narrow H$\alpha$ emission had diminished in strength, with the forbidden oxygen emission lines relatively unchanged (@Leibundgut91; hereafter L91).
Early very long baseline interferometry (VLBI) revealed an aspherical source with marginal indication of an expanding shell [@Bartel89; @Bartel91]. Subsequent VLBI observations show a distorted shell and a current expansion velocity of $\sim$6000 km s$^{-1}$, considerably less than an extrapolated initial velocity of $\sim$20 000 km s$^{-1}$ [@Bietenholz02].
In this Letter, we present Hubble Space Telescope ([*HST*]{}) images of SN 1986J showing it to still be relatively luminous optically more than two decades after outburst. We also present ground-based optical spectra obtained at three epochs spanning 18 yr to follow its late-time emission evolution.
Observations
============
Images of NGC 891 in the region around SN 1986J obtained by the Advanced Camera for Surveys (ACS) system aboard the [*HST*]{} using the Wide Field Channel (WFC) on 2003 Feb 18 and 20 were retrieved from STScI archives (GO 9414; PI:R.de Grijs). The images were taken using filters F606W and F814W. Standard IRAF/STSDAS data reduction was done including debiasing, flat-fielding, geometric distortion corrections, photometric calibrations, cosmic-ray and hot pixel removal, with the STSDAS `drizzle` task used to combine exposures.
Low-dispersion optical spectra of SN 1986J were obtained with the MMT on Mount Hopkins using the Red Channel spectrograph with a TI $800 \times 800$ CCD on 1989 September 5 and again on 1991 October 14. For both observations, a 2$\arcsec$ wide slit and a 150 lines mm$^{-1}$ grating were used to obtain spectra spanning 4000–8000 Å, with a resolution of $\sim$30 Å. Total exposure time for each spectrum was 9000 s.
Spectra of SN 1986J were also obtained on 2007 September 11 at MDM Observatory using the 2.4 m Hiltner telescope with the Boller & Chivens CCD spectrograph (CCDS). A north–south 1.5$\arcsec \times$ 5$\arcmin$ slit and a 150 lines mm$^{-1}$ 4700 Å blaze grating were used to obtain two sets of spectra, one consisting of $3 \times 1800$ s exposures spanning 4000–7100 Å, and another of $2 \times 1800$ s exposures covering 6900–9000 Å using an LG 505 order separation filter. Both spectra had a resolution of $\sim$10 Å. Another spectrum with this same setup consisting of $2 \times 3000$ s exposures was taken on 2007 Dec 20 spanning 6100–9000 Å. Seeing was around 1$\arcsec$ for all spectra. The spectra were processed using standard procedures in IRAF[^1] using standard stars from @Strom77.
Results
=======
Late-Time Optical Photometry
----------------------------
The left panel of Figure 1 shows the blue DSS2 image of NGC 891 with the location of SN 1986J marked, and the two right panels show [*HST*]{} ACS/WFC images of NGC 891 centered on the region around the supernova. With 2003 epoch VEGAMAG apparent magnitudes of m$_{F606W}$ = 21.4 and m$_{F814W}$ = 20.0 mag, these images indicate that SN 1986J has remained relatively bright nearly two decades after the estimated 1983 optical outburst.
The SN 1986J site in NGC 891 was also imaged some eight years earlier on 1994 December 1 by [*HST*]{} with the Wide Field Planetary Camera 2 (WFPC2) using the F606W filter (see @vanDyk99). Reduction of these 1994 data show m$_{F606W} =$ 21.3 mag. Accounting for differences in instrumental response between the WFPC2 and ACS, these observations suggest a decline of only $\la$ 1 mag in the F606W filter over the eight years separating the observations.
Emission Line Changes Since 1989
--------------------------------
In Figure 2 we present optical spectra of SN 1986J at three epochs spanning 18 years: 1989.7 (published by L91), 1991.8, and 2007. The 2007 spectrum is an average of the three spectra obtained in September and December of 2007, with the combined relative fluxes believed accurate to within $\pm$20%.
Between the three epochs, SN 1986J’s optical emission shows several significant changes. The greatest change is the decline in H$\alpha$ emission. As of 2007, the H$\alpha$ line observed centered around 6564 Å has a flux of 4 $\times 10^{-16}$ erg s$^{-1}$ cm$^{-2}$, down some 20 times from 9 $\times
10^{-15}$ erg s$^{-1}$ cm$^{-2}$ observed in 1989. The \[N II\] $\lambda\lambda$6548, 6583 emission lines are not resolved, and we measure the H$\alpha$ emission in 2007 assuming nitrogen contributes approximately 1/4 of the total integrated flux around the line. We estimate that H$\alpha$ emission declined by a factor of 2 between 1989 and 1991 and by a factor of 10 between 1991 and 2007.
The line center for SN 1986J’s H$\alpha$ emission also showed an increasing blueshift between 1989 and 2007. In 1986, the line center roughly matched NGC 891’s redshift of +528 km s$^{-1}$ [@Rupen87; @deVaucouleurs91]. However, by 1989 the shift had decreased to +330 km s$^{-1}$ (L91), and in 2007 the center of the observed H$\alpha$ emission shifted still more to the blue, virtually negating the galaxy’s systemic velocity and appearing practically unredshifted.
Other changes in the late-time spectra of SN 1986J include the fading beyond detectability of the H$\beta$ and He I $\lambda$5876 and $\lambda$7065 emission lines in the 2007 spectrum, which were present in 1989 and 1991. Also considerably diminished in 2007 is the \[O III\] $\lambda\lambda$4959, 5007 emission observed around 4980 Å.
Changes in the profiles of some emission lines are evident. Broad emission around 7300 Å, consisting of two prominent emission peaks at $\simeq$7250 Å and 7320 Å seen in both the 1989 and 1991 spectra, exhibits a significant diminishment along its blue side in the 2007 spectrum (see Fig. 2). We identify both emission peaks with \[O II\] $\lambda\lambda$7319, 7330 line emission. Although some contribution from \[Ca II\] $\lambda\lambda$7291, 7324 is possible, \[O II\] emission likely dominates the broad $7200-7350$ Å emission.
The bluer emission peak at 7250 Å, prominently visible in the 1989 and 1991 spectra, faded significantly by 2007, evolving into a weak, broad emission feature blueward of 7300 Å. The other emission peak at 7320 Å showed a smaller intensity decline and a shift to the red by $\sim$10 Å relative to its appearance in 1989. Lastly, faint redshifted \[O II\] emission extending from 7380 to 7450 Å visible in the 1989 spectrum gradually weakened and decreased in velocity in the 1991 and 2007 spectra.
Other features show only minor changes in strength and/or profile. The broad emission centered around 6295 Å identified with the \[O I\] $\lambda$6300 line has a 2007.8 epoch flux of 1.4 $\times 10^{-15}$ erg s$^{-1}$ cm$^{-2}$ assuming a ratio of 3:1 for the 6300 Å and 6364 Å lines. This is roughly the same as the combined flux of 2.7 $\times 10^{-15}$ erg s$^{-1}$ cm$^{-2}$ for the two lines measured in 1989 by L91.
Discussion
==========
A steady drop in H$\alpha$ emission strength together with smaller declines in forbidden oxygen emissions are consistent with our estimated m$_{F606W} \la$ 1 mag decline between 1994 and 2003. To investigate possible changes in emission since then, we used `synphot` to compare the count rate of the 2003 F606W image against the expected count rate of the ACS/WFC F606W given the 2007 spectrum as input. The rates are marginally different and within the uncertainties associated with the relative flux of the spectra and light loss from the slit. This suggests that SN 1986J’s optical flux has not deviated appreciably from a continued, slow decline over the last four years.
Emission-Line Profiles from O-Rich SN Ejecta
--------------------------------------------
While a blueshifted, double-peak emission profile is most apparent in the \[O II\] $\lambda\lambda$7319, 7330 lines in the 1989 and 1991 spectra, in fact all of SN 1986J’s oxygen emissions display a similar double-peak emission profile blueshifted with respect to the host galaxy’s rest frame. Figure 3 presents an overlay of \[O I\], \[O II\], and \[O III\] line profiles plotted in velocity space. This figure shows good agreement for both the line profiles and emission peaks near $-$1000 and $-$3500 km s$^{-1}$.
Added support for double-peak oxygen line profiles comes from faint emission near 4340 Å present in both the 1989 and 1991 spectra which we interpret as \[O III\] $\lambda$4363 line emission (see Fig. $2$ in L91). When corrected for NGC 891’s redshift, the positions and widths of the peaks observed at 4313 Å and 4338 Å match the $-$1000 and $-$3500 km s$^{-1}$ emission peaks observed in the other oxygen profiles. After correcting for foreground reddening of $A_{V} = 1.5$ mag [@Rupen87], the observed \[O III\] I(4959+5007)/I(4363) line ratio $\simeq2$ suggests an electron density for the \[O III\] emitting region of $(3-5) \times 10^{6}$ cm$^{-3}$ assuming an electron temperature of $(2.5-5.0) \times 10^4$ K like that found in shock-heated O-rich ejecta seen in young supernova remnants [@HF96; @Blair00].
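As a rough consistency check (an illustrative sketch using the standard textbook \[O III\] diagnostic of Osterbrock & Ferland, not a calculation taken from this paper), the nebular-to-auroral ratio can be written as
$$\frac{I(\lambda 4959)+I(\lambda 5007)}{I(\lambda 4363)} \simeq \frac{7.90\,\exp\left(3.29\times10^{4}/T_{e}\right)}{1+4.5\times10^{-4}\,n_{e}\,T_{e}^{-1/2}},$$
so a dereddened ratio of $\simeq$2 with $T_{e} = 3\times10^{4}$ K gives a numerator of $\approx$24 and hence $n_{e} \approx 4\times10^{6}$ cm$^{-3}$, squarely within the $(3-5)\times10^{6}$ cm$^{-3}$ range quoted above.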
An interpretation of a spectrum dominated by two blueshifted, O-rich ejecta clumps is a very different one from that proposed by L91 for explaining the box-like \[O III\] $\lambda\lambda$4959, 5007 profile. They suggested that the observed shape was due to the $\lambda$5007/$\lambda$4959 line ratio being close to 1:1 instead of the optically thin ratio of 3:1 typically observed in low density nebulae. A ratio close to unity for both the \[O I\] and \[O III\] line doublets was interpreted as caused by emission originating from regions with electron densities of $n_{e} \sim 10^{9}$ cm$^{-3}$. However, in light of the strong similarity of all oxygen emission profiles, such high density estimates appear no longer valid.
Origin of the Late-Time Optical Emission
----------------------------------------
Our interpretation of line emission profiles together with the observed spectral evolution over the last two decades suggests two separate sites for SN 1986J’s late-time optical emission. The decline of SN 1986J’s H$\alpha$ emission and its relatively low expansion velocity ($< 700$ km s$^{-1}$) suggest this emission comes from shock-heated circumstellar material (CSM). Early spectra showing an initially very bright H$\alpha$ emission along with fainter emissions from He I and \[N II\] are consistent with an emission nebula generated by a $\sim 1.5 \times 10^{4}$ km s$^{-1}$ blast wave overrunning a dense CSM environment rich in CNO-processed material [@Rupen87]. The apparent blueshift in the line center of H$\alpha$ over the past 20+ years is likely due to increasing extinction of the receding hemisphere, possibly due to dust formation in the SN ejecta.
@Chugai94 suggested SN 1986J’s late-time optical emission originates from shocked dense clouds of circumstellar gas in the progenitor star’s clumpy pre-SN wind. The presence of \[N II\] $\lambda$5755 line emission and the lack of strong \[N II\] $\lambda\lambda$6548, 6583 emission [@Rupen87] support this scenario, suggesting relatively high densities ($n_{e}$ $\sim
10^{6}$ cm$^{-3}$) similar to that seen in the circumstellar ring around SN 1987A.
![SN 1986J’s 1989 oxygen line profiles. Velocities shown are with respect to 6300, 7325, and 5007 Å in the rest frame of NGC 891. Vertical dashed lines are positioned at $-$1000 km s$^{-1}$ and $-$3500 km s$^{-1}$.](f3.eps){width="7.5cm"}
The interaction of the SN’s outward-moving blast wave with dense and clumpy CSM will generate a strong reverse shock into the slower expanding SN ejecta, leading to the observed forbidden oxygen line emissions. The presence of two prominent blueshifted emission peaks across three ionization stages implies this component of SN 1986J’s optical emission comes mainly from two large patches of reverse shock-heated, O-rich ejecta on the facing expanding hemisphere having radial velocity components in our line of sight around $-1000$ and $-3500$ km s$^{-1}$.
The gradual redward shift of the $-1000$ km s$^{-1}$ emission component toward smaller blueshifted velocities, seen most clearly in the \[O II\] $\lambda\lambda$7319, 7330 profile between 1989 and 2007, may signal the progression of reverse shock emission coming from inner, slower moving O-rich ejecta during the intervening two decades. Additionally, weak emission seen redward of 7330 Å together with weak emission near 5050 Å possibly associated with \[O III\] might indicate highly reddened O-rich ejecta located on the rear expanding hemisphere with radial velocities up to $3500$ km s$^{-1}$.
Finally, mention should be made concerning the possibility for photoionization of SN 1986J’s ejecta by its bright central compact source [@Chevalier87]. Early optical and radio observations of SN 1986J suggested that it may be a very young Crab Nebula-like remnant [@Chevalier87; @Weiler90] and such a connection has been strengthened by recent VLBI observations showing a bright, compact radio component with an inverted spectrum near the center of the expanding shell [@Bietenholz04; @Bietenholz08]. This central source is thought to be either emission from a young, energetic neutron star or accretion onto a black hole. The optical filaments in the Crab Nebula are mainly photoionized by its pulsar. With SN 1986J’s central component some 200 times more luminous than the Crab Nebula between 14 and 43 GHz [@Bietenholz08], this raises the possibility of photoexcitation of SN 1986J’s ejecta.
However, during its first decade of evolution, SN 1986J’s strong \[O III\] $\lambda$4363 line suggested temperatures more indicative of shock heating ($T
\ga 25 000$ K) rather than photoionization ($T \leq 15 000$ K). Additionally, the high densities of the O-rich ejecta and/or formation of dust in the ejecta could limit the importance of photoionization by the central source. Indeed, the large Balmer decrement ratio of H$\alpha$/H$\beta$ $\sim 45$ (1986.8 epoch; L91) observed in early spectra may be an indication of high internal extinction.
At its current age ($\sim25$ yr), the importance of photoexcitation from the central source, quite possibly a young Crab-like neutron star, is less clear. Recent observations with [*XMM-Newton*]{} and [*Chandra*]{} have shown a sharp decline in X-ray luminosity, perhaps signaling a diminishing role of shock-heating in SN 1986J’s late-time optical emission [@Temple05]. Future increased contribution from photoionization could be reflected as broadening in optical emission line widths like that predicted by @Chevalier94.
In view of SN 1986J’s strong oxygen line emissions, a better comparison than the Crab might be the $\simeq 1000$ yr old LMC remnant 0540-69.3. This remnant has a bright pulsar wind nebula surrounding a 50 ms pulsar and shock-heated, O-rich ejecta expanding at velocities $\sim2000$ km s$^{-1}$ [@Morse06] much more in line with what is observed in SN 1986J.
We thank R. Chevalier for helpful comments on an earlier draft. This research was supported in part by a Canadian NSERC award to DM.
Bartel, N., Shapiro, I. I., & Rupen, M. P. 1989, , 337, L85
Bartel, N., Rupen, M. P., Shapiro, I. I., Preston, R. A., & Rius, A. 1991, Nature, 350, 212
Bietenholz, M. F., & Bartel, N. 2008, Adv. in Space Res., 41, 424
Bietenholz, M. F., Bartel, N., & Rupen, M. P. 2004, Science, 304, 1947
Bietenholz, M. F., Bartel, N., & Rupen, M. P. 2002, , 581, 1132
Blair, W. P., et al. 2000, , 537, 667
Chevalier, R. A., & Fransson, C. 1994, , 420, 268
Chevalier, R. A. 1987, Nature, 329, 611
Chugai, N. N., & Danziger, I. J. 1994, , 268, 173
de Vaucouleurs, G., et al. 1991, Third Reference Catalogue of Bright Galaxies, v9.
Hurford, A. P., & Fesen, R. A. 1996, , 469, 246
Kent, S., & Schild, R. 1987, IAU Circ. No. 4423
Leibundgut, B., et al. 1991, , 372, 531 (L91)
Morse, J. A., Smith, N., Blair, W. P., Kirshner, R. P., Winkler, P. F., & Hughes, J. P. 2006, , 644, 188
Rupen, M. P., van Gorkom, J. H., Knapp, G. R., Gunn, J. E., & Schneider, D. P. 1987, , 94, 61
Schlegel, E. 1990, , 244, 269
Strom, K. M. 1977, Kitt Peak National Observatory Memorandum, “Standard Stars for Intensified Image Dissector Scanner Observations”.
Temple, R. F., Raychaudhury, S., & Stevens, I. R. 2005, , 362, 581
van Gorkom, J., Rupen, M., Knapp, G., Gunn, J., Neugebauer, G., & Matthews, K. 1986, IAU Circ., No. 4248
Weiler, K. W., Panagia, N., & Sramek, R. A. 1990, , 364, 611
van Dyk, S. D., Peng, C. Y., Barth, A. J., & Filippenko, A. V. 1999, , 118, 2331
[^1]: The Image Reduction and Analysis Facility is distributed by the National Optical Astronomy Observatories, which are operated by the Association of Universities for Research in Astronomy, Inc., under cooperative agreement with the National Science Foundation.
|
The spread of the coronavirus epidemic in Italy has been intensifying sharply for a fifth straight day, and Thursday's figures set a new record: after Wednesday's 4,207 new infections and 475 new deaths, 5,322 new cases and 427 new deaths were reported on Thursday.
The 5,322 new cases are a record for Italy during the epidemic; only Monday brought any break in the continuous, steep rise in the numbers, and in another grim statistic, Italy's death toll has now overtaken that of China, where the outbreak began.
In total, 41,035 patients and 3,405 deaths have been reported from Italian clinics, and the number of people who have recovered stood at 4,440 as of Thursday. (Official figures put the death toll in China at 3,245.)
The country keeps trying to defend itself against the epidemic: after Italy was placed under quarantine, every establishment that is not a pharmacy or a grocery store was closed last week. Even so, epidemiologists at the University of Genoa expect infections in Italy to peak around March 23-25, although in their view this also depends heavily on how Italians behave, so nothing is certain.
Italians are suffering between their four walls
People breaking the curfew remain a major problem in Italy. Locals have also told Index about the difficulties of being shut in, and MTI reports that Italians are slipping out of home quarantine to go fishing or to sunbathe on the beach.
"I was off to place a football-pools bet," was how one man explained being on the street when police asked why he was walking around for no apparent reason. A young man's answer was more imaginative: his girlfriend had just thrown him out. Another explained that he had to walk a little, because he cannot fall asleep without an evening stroll. A man stopped at three in the morning said his mobile phone had broken and he was taking it to be repaired, according to the MTI summary.
Many people around the country do not carry the form that can be downloaded from the interior ministry's website, but the permit justifying one's movement can also be requested from the police at the moment of a check and filled out in front of them. Experience shows that many have not realized that giving a false reason is also punishable.
In the Sicilian city of Palermo, three men were fined after saying they had set out by car to go jogging: one was in a suit and tie, another had his elderly mother sitting beside him, and the third was taking his children somewhere. In Bari, also in the south, residents travel kilometres to shop at cheaper stores.
In Tortona, in the north, a man said he was going to shop on the other side of town because he collects loyalty points at a supermarket there. In Rimini a pensioner went out to look at a construction site; in Rome a barbecue party was held in a schoolyard, and the participants were surprised when the police broke it up. Others told the police, while wearing fishing gear, that they were on their way to work. One person was looking for a sex shop, another was sunbathing on the beach. In one of the tobacco shops allowed to stay open, the owner set up a café to replace the shuttered espresso bar on the corner, hoping for extra income.
According to data from the telephone companies, 43 percent of Milan's residents are still covering "too great a distance", meaning they are not just going to the corner shop. Footage from Rome's partially closed Leonardo da Vinci airport likewise showed passengers packed together instead of keeping the mandated safety distance of at least one metre.
Since the "curfew" introduced on March 8, more than a million Italians have been checked on streets and roads. Complaints have been filed against more than 50,000 people for being in public without reason, that is, for leaving home for something other than work, health care or another serious cause. The data show that a large share of those stopped are elderly people going out for a short walk so as not to spend the whole day at home. Under the rules in force, the fine can exceed 200 euros, and for more serious violations a person out on the street unnecessarily risks several months in prison.
|
# -*- coding: utf-8 -*-
#
# Copyright (c) 2014 SUSE LLC
#
# This software is licensed to you under the GNU General Public License,
# version 2 (GPLv2). There is NO WARRANTY for this software, express or
# implied, including the implied warranties of MERCHANTABILITY or FITNESS
# FOR A PARTICULAR PURPOSE. You should have received a copy of GPLv2
# along with this software; if not, see
# http://www.gnu.org/licenses/old-licenses/gpl-2.0.txt.
#
import hashlib
from spacewalk.common.rhnException import rhnFault
from uyuni.common.stringutils import to_string
from spacewalk.server import rhnSQL
def find_or_create_eula(eula):
    """Return the id of the eula inside of the suseEula table.

    A new entry inside of the suseEula table is added only when needed.
    """
    _query_find = """
        SELECT id
          FROM suseEula
         WHERE checksum = :checksum
    """
    checksum = hashlib.new("sha256", eula.encode('utf-8', 'ignore')).hexdigest()
    h = rhnSQL.prepare(_query_find)
    h.execute(checksum=checksum)
    ret = h.fetchone_dict()
    if ret:
        return ret['id']
    else:
        _query_create_eula_id = """
            SELECT sequence_nextval('suse_eula_id_seq') AS id
              FROM dual
        """
        h = rhnSQL.prepare(_query_create_eula_id)
        h.execute()
        ret = h.fetchone_dict()

        id = None
        if ret:
            id = ret['id']
        else:
            raise rhnFault(50, "Unable to add new EULA to the database", explain=0)

        blob_map = {'text': 'text'}
        h = rhnSQL.prepare("""
            INSERT INTO suseEula (id, text, checksum)
            VALUES (:id, :text, :checksum)
        """, blob_map=blob_map)
        h.execute(id=id, text=to_string(eula), checksum=checksum)

        return id


def get_eula_by_id(id):
    """ Return the text of the EULA, None if the EULA is not found """
    h = rhnSQL.prepare("SELECT text from suseEula WHERE id = :id")
    h.execute(id=id)
    match = h.fetchone_dict()
    if match:
        return str(match['text'])
    else:
        return None


def get_eula_by_checksum(checksum):
    """ Return the text of the EULA, None if the EULA is not found """
    h = rhnSQL.prepare("SELECT text from suseEula WHERE checksum = :checksum")
    h.execute(checksum=checksum)
    match = h.fetchone_dict()
    if match:
        return str(match['text'])
    else:
        return None
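For orientation, a minimal sketch of how these helpers might be exercised from other backend code. It assumes rhnSQL has already been initialized against the server database (settings picked up from the usual server configuration) and that the caller handles the commit; the EULA text itself is just a placeholder:

if __name__ == '__main__':
    rhnSQL.initDB()  # assumption: DB settings come from the server configuration

    sample_text = "Example End User License Agreement text ..."
    eula_id = find_or_create_eula(sample_text)  # inserts only if this checksum is new
    print(get_eula_by_id(eula_id))              # prints the stored EULA text back
    rhnSQL.commit()                             # caller is responsible for the commit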
|
Okay, I am quite curious; I want to know how practical it would be to make a brand new OS capable of interfacing with a graphics card, using a driver which may be designed for other circumstances (a different OS). Honestly, I don't know anything about video-device drivers...
1. Is there an efficient method?
2. How hard would it be to do? (e.g. making an OS compatible with another OS's video-device drivers)
3. Is it legal? Would it require reverse engineering? Doesn't Microsoft provide plenty of information about Windows?
My reason for wanting this: I want to develop an OS which can support accelerated graphics and handle its GUI flawlessly (being more careful with the painter's algorithm, etc.); however, rather than Windows or Linux, it will use a philosophy much different from the concept of "files" and "folders" (entirely different). I also want the experience and bragging rights, you know.
Some video card manufacturers release specs for their cards allowing device driver development for those cards to support hardware acceleration. Thus an alternative solution is to simply write your own drivers.
1) There is no efficient method. Windows and Linux provide driver APIs that device drivers use, along with MMIO and PIO hardware access. In order for these drivers to function under a different OS, not only do you need to provide the support for it in a driver API, but you will need to maintain binary compatibility.
2) Very hard. Not impossible however.
3) It might not be legal - probably depends on the licensing terms for the driver program.
An alternative solution would be to implement a standard, such as UDI. By supporting UDI completely your OS will be capable of running any UDI compliant device driver regardless of OS, in binary form.
1) And also play "guess what cryptic assumptions the driver makes about memory layout etc" which is arguably an order of magnitude harder than the already challenging "write an OS shim" so the driver thinks it is running on 1 operating system while it is actually running on another.
2) Harder than trying to reverse engineer the current driver and see how it prods the hardware to make things happen.
3) Almost certainly not legal under the licencing terms vendors tend to ship with. [1]
~Andrew
[1] For comedic value, do you realise that you are explicitly forbidden from using any Apple software to develop, design, produce or manufacture any form of nuclear missiles, chemical or biological weaponry.
“I kind of feel for what he’s going through,” Tkachuk was saying from his home in St. Louis, Wednesday. “It’s definitely overwhelming.”
Tkachuk has been reading about the attention Kane has attracted since moving here with the Atlanta Thrashers. And if he sees some similarities to his own days as a Jet, well, it’s because there are a few.
A thick, power forward with a heavy shot and soft hands, Tkachuk joined the Jets from Boston University just before he turned 20, in 1992.
It didn’t take long before the goals and the money were piling up — along with the rumours.
“Being in the spotlight, I definitely wasn’t ready for that,” Tkachuk said. “And sometimes I took advantage of it. But you forget that all eyes are on you.”
Simply put, Tkachuk had a reputation as a guy who didn’t shy away from a party.
“Sure I liked to have a good time,” he said, chuckling. “I just didn’t know any better at the time, and certainly made some bad decisions that you look back now ... that’s part of the growing up process.
“I had my share of fun. But I’ve learned that if you make some poor decisions, it’s going to catch up to you. If you want to play hockey, you’ve got to do the right things and take care of yourself.”
Tkachuk took good enough care of himself to play 17 seasons, score 538 regular-season goals and make tens of millions of dollars.
At this point, Kane can only dream of a career like that.
At 20, and already earning $3 million per season, Kane was on pace for a 30-goal campaign before a 10-game slump, then a concussion, slowed him down.
He’s also on pace to be buried under a mountain of rumours. How the concussion happened in a bar fight, how he’s walked out on restaurant bills — the list goes on.
“I don’t believe any of that,” Tkachuk said. “If you’re Evander, or whoever it is, and you get that negative stuff, you don’t blame him to be unhappy. That’s just unfair for people to do that. It’s a lot of jealousy from people.
“I’m a big Evander Kane fan. Leave the kid alone, let him play. Let him enjoy the city. It’s just a few people who like to tear down other people.”
And it could affect whether or not Kane wants to stay in Winnipeg, Tkachuk warned.
“The people who work in these restaurants and the fans should be very careful,” he said. “Look at what the Jets have done for the economy. I’d be very careful of doing that to players.”
That’s not to say Kane and his teammates don’t have to be careful, too.
After all, today’s mistake can turn into tomorrow’s headline. You only have to look at Dustin Byfuglien’s impaired boating charge to see what a little fun can become.
Tkachuk may have avoided that kind of headline. But who knows how many he may have generated if they’d had camera phones and Twitter in the early-1990s.
“I’m glad there wasn’t,” he said. “It’s a different world, now. You have to adjust to it.”
Soon to be 40, and with three kids of his own, including a couple of hockey-playing boys, 14 and 12, Tkachuk acknowledges today’s Jets probably aren’t as footloose and fancy-free as they were in his day.
But he has some advice for them, just the same.
“You just rely on your teammates and the organization, friends, and make sure you take care of business on the ice,” he said. “And just be careful what you do. Be very, very careful. Put yourself in good positions, responsible positions, and don’t add fuel to the fire.” |
Q:
Is there a more complete VMWare PowerCLI reference?
I'm trying to use the VMWare PowerCLI v6.0 to do some automated things. I have found the installed and online version of the cmdlet documentation and for the most part it tells you very simple information about the commands, like the parameters, return types and what the cmdlet does.
I'm trying to find more complete documentation on this because the online documentation provided by VMWare doesn't list the exceptions that a particular cmdlet might throw and definitely doesn't properly describe the types and their properties. For example:
$org = Get-Org -Name "test"
$leases = $org.ExtensionData.Settings.GetVAppLeaseSettings()
$leases.DeploymentLeaseSeconds = 0
$leases.StorageLeaseSeconds = 0
$leases.DeleteOnStorageLeaseExpiration = $False
$leases.UpdateServerData()
The example code can be found all over the internet but there's no details on it at all, just a vague "This is how you X". I've searched and searched but I can't find any documentation on what type ExtensionData returns and absolutely no documentation on the method GetVAppLeaseSettings. It seems like as far as VMWare and their documentation is concerned, this function doesn't exist.
Does anyone know where I can find documentation that lists thrown exceptions for each cmdlet and what CLR types are returned in the ExtensionData properties?
UPDATE
I watched a Pluralsight video on PowerCLI and found that you can display the ExtensionData object type and properties by simply running
$obj.ExtensionData
You can also see all the methods available for that object by running
$obj.ExtensionData | Get-Member -MemberType method
The problem with this is that you need to be connected to an existing vCloud server and even though this lists the available properties and methods, it does not show any documentation for those properties or methods. Not to mention you would need to actually have an object created to be able to query these values, for example:
$org = Get-Org -Name "test"
$org.ExtensionData | Get-Member -MemberType method
In the above example, I need to be connected to the server and already have an organization created to be able to view its properties and methods.
I'm looking for the documentation on those properties and methods and it doesn't seem like that exists anywhere that I've searched.
EDIT
If you are down voting or voting to close, please provide me with feedback. This is a serious question and I have done a lot research into answering this myself before I posted it here.
A:
With the help of Mathias in the comments, I've determined that there is no official documentation for this portion of the PowerCLI. The only way to get any kind of documentation is to use a tool like ILSpy, or to explore from the PowerCLI terminal itself by means of commands like GetType() and Get-Member.
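As a rough workflow for anyone landing here later, the inspection steps above can be chained together once connected (the server name below is a placeholder, and the output still carries no descriptions, only names and signatures):
Connect-CIServer -Server vcloud.example.com            # hypothetical vCloud Director endpoint
$org = Get-Org -Name "test"
$org.ExtensionData.GetType().FullName                  # reveals the underlying CLR type name
$org.ExtensionData | Get-Member -MemberType Property   # properties of that type
$org.ExtensionData | Get-Member -MemberType Method     # methods, which should include GetVAppLeaseSettings()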
|
[Cervical manifestation of malignant schwannoma].
Malignant schwannomas are rare malignant mesenchymal tumors often associated with neurofibromatosis. They occur less frequently in the head and neck than in other regions. A case history of a primary malignant schwannoma of the head and neck area in a 27-year-old man is reported. The tumor was located in the left submandibular region. The patient did not have any functional deficits. The tumor was totally removed. There have been no signs of either recurrence or metastasis within the two years following diagnosis and surgery. The microscopic and immunohistochemical findings are presented, and the importance and therapy of this very rare malignant tumor of the head and neck area are discussed. Malignant schwannoma in the head and neck region is rare. Radical resection is the treatment of choice. |
Q:
Magento: redirect to the home page from a template?
I am in catalog\product\compare\list.phtml and I want to redirect to the homepage, like this:
if($someThing) {
// redirect to homepage
$this->_redirect('home'); // --> doesn't work
$this->_redirect(Mage::getBaseUrl()); // --> doesn't work
}
I tried several things, none worked so far. How do I do it right?
Thanks!
A:
As far as I know the $this->_redirect can only be used in a controller.
Try this instead:
Mage::app()->getFrontController()->getResponse()->setRedirect($this->getBaseUrl());
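One caveat worth adding, as a sketch of common Magento 1 practice rather than anything from the accepted answer: setting the redirect header from a template does not stop the rest of the page from rendering, so the response is usually sent and execution halted explicitly, roughly like this:
if ($someThing) {
    Mage::app()->getFrontController()->getResponse()
        ->setRedirect(Mage::getBaseUrl())
        ->sendResponse();
    exit; // stop rendering the compare list once the redirect header has been sent
}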
|
An inhibitory effect of thymidine on its own conversion to nucleotide in Escherichia coli.
The cellular levels of thymidine nucleotide pool in thymine-requiring mutants of Escherichia coli were followed. The pool levels of dTDP, dTTP and compound-X reached maximal values in much lower concentration of thymidine than those in thymine. In higher concentrations of thymidine, an inhibitory effect on its own conversion to nucleotide was observed. The inhibited step was suggested to be the conversion of dTMP to dTDP. |
Q:
Ignoring parameters in C functions
I'm integrating a C library into an Objective-C app for iOS. I'm not very C-savvy... I know just enough to be dangerous.
The C function declaration is this:
extern int ZEXPORT unzGetCurrentFileInfo OF((unzFile file,
unz_file_info *pfile_info,
char *szFileName,
uLong fileNameBufferSize,
void *extraField,
uLong extraFieldBufferSize,
char *szComment,
uLong commentBufferSize));
I am really only interested in the szFileName. I know I can ignore the void* and char* by passing NULL. Can I pass some sort of NULL-equivalent to the uLong params? I'm getting a compiler warning about converting to pointer without a cast.
Here's my loop for calling this function, in case anyone wants to comment on that. Did I do the malloc/free correctly? I'm not used to the low-level C stuff. I know people complain about Objective-C reference counting but in comparison it's pretty luxurious :)
unz_file_info pfile_info;
char *szFileName = malloc(1024);
uLong fileNameBufferSize;
uLong commentBufferSize;
uLong extraFieldBufferSize;
do {
int ret = unzGetCurrentFileInfo(zipFile, &pfile_info, szFileName, fileNameBufferSize, NULL, extraFieldBufferSize, NULL, commentBufferSize);
NSLog(@"get info ret %i filename size %lu, filename %s", ret, pfile_info.size_filename, szFileName);
// do other stuff here with the filename
}
while (unzGoToNextFile(zipFile) == UNZ_OK);
free(szFileName);
A:
Can I pass some sort of NULL-equivalent to the uLong params? I'm getting a compiler warning about converting to pointer without a cast.
Not in general; the permitted values for parameters should be listed in the manual of the library you're embedding (zlib, by the likes of it). Don't pass NULL to a function that expects a long, that's invalid.
Worse: you are passing the values of uninitialized variables fileNameBufferSize, extraFieldBufferSize and commentBufferSize to the function. Your program has undefined behavior. Set these variables appropriately, or use literals/expressions for the arguments.
Did I do the malloc/free correctly?
You forgot to check the return value from malloc. Always check for NULL. Even better: since you're allocating a constant amount of memory, just do so on the stack:
char szFileName[1024];
No need for malloc or free. (And you might want to use PATH_MAX instead of the arbitrary 1024. What if pathnames may be longer than that on your platform?)
Edit: never mind about the PATH_MAX part; the max. length of this string should be documented in the zlib docs, since it's not the max. length of a part on your system but the max. length zlib is willing to store.
A:
The ulong parameters are the sizes of the buffer arguments. This is so the function knows how big those buffers are so it doesn't overflow them.
If you need the fileName argument, you must supply a correct fileNameBufferSize.
As for whether you actually can pass in a NULL pointer to the pointer arguments, only the documentation(or source code) for this function can tell you. Or if the documentation doesn't tell you, you'll have to do some basic science experiments on the function and see how it behaves.
Assuming the function accepts NULL pointers for parameters you don't want filled in, you'd likely pass 0 as the value for the ulong parameters.
You'll have to do:
unz_file_info pfile_info;
char *szFileName = malloc(1024);
uLong fileNameBufferSize = 1024;
if( szFileName == NULL) {
//handle error
return;
}
do {
int ret = unzGetCurrentFileInfo(zipFile, &pfile_info, szFileName, fileNameBufferSize, NULL, 0, NULL, 0);
NSLog(@"get info ret %i filename size %lu, filename %s", ret, pfile_info.size_filename, szFileName);
// do other stuff here with the filename
}
while (unzGoToNextFile(zipFile) == UNZ_OK);
free(szFileName);
You should also investigate the meaning of the return value of unzGetCurrentFileInfo. If it fails, it's unlikely you can use szFileName or any of the other arguments to the function - so don't call NSLog with those variables if the function fails.
In this case, malloc seems unnecessary. Just use a local array, and drop the free() call.
char szFileName[1024];
uLong fileNameBufferSize = sizeof szFileName;
|
Nike Hyperfuse – Metallic Silver – Black – Sport Red
The Nike Hyperfuse is so innovative and well-designed, it seems like every single colorway is a 10. With this latest pair pictured above, it’s not that we’ve never seen a silver, black and red shoe; in fact, those colors are among the most common on a basketball shoe. But the way the silver side panels give way to Fuse ventilation, and how it sweeps under the black ankle collar — put simply, we’ve never seen anything like this so every pair feels like a breath of fresh air. Check out more images of the former Air Flight Ballistic and check out the sample available from cigar0330 on eBay.
package com.huawei.g11n.tmr.datetime.data;
import java.util.HashMap;
public class LocaleParamGet_ne {
public HashMap<String, String> date = new HashMap<String, String>() {
/* class com.huawei.g11n.tmr.datetime.data.LocaleParamGet_ne.AnonymousClass1 */
{
put("param_tmark", ":");
put("param_am", "बिहान|पूर्वाह्न");
put("param_pm", "राति|बजेसम्म|दिउँसो|रातिको|मध्याह्न|बेलुका|अपराह्न");
put("param_MMM", "जनवरी|फेब्रुअरी|मार्च|अप्रिल|मे|जुन|जुलाई|अगस्ट|सेप्टेम्बर|अक्टुबर|नोभेम्बर|डिसेम्बर");
put("param_MMMM", "जनवरी|फेब्रुअरी|मार्च|अप्रिल|मई|जुन|जुलाई|अगस्ट|सेप्टेम्बर|अक्टोबर|नोभेम्बर|डिसेम्बर");
put("param_E", "आइत|सोम|मङ्गल|बुध|बिही|शुक्र|शनि");
put("param_E2", "आइत|सोम|मङ्गल|बुध|बिही|शुक्र|शनि");
put("param_EEEE", "आइतबार|सोमबार|मङ्गलबार|बुधबार|बिहीबार|शुक्रबार|शनिबार");
put("param_days", "आज|भोलि|पर्सि");
put("param_thisweek", "यो\\s+आइतबार|यो\\s+सोमबार|यो\\s+मंगलबार|यो\\s+बुधबार|यो\\s+बिहिबार|यो\\s+शुक्रबार|यो\\s+शनिबार");
put("param_nextweek", "अर्को\\s+आइतबार|अर्को\\s+सोमबार|अर्को\\s+मंगलबार|अर्को\\s+बुधबार|अर्को\\s+बिहिबार|अर्को\\s+शुक्रबार|अर्को\\s+शनिबार");
put("mark_ShortDateLevel", "mdy");
}
};
}
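A small, self-contained sketch of how this data class might be read; the demo class name and the sample string are invented for illustration, and the library's real consumers live elsewhere. The values stored in the map are regular-expression alternations keyed by parameter name:
package com.huawei.g11n.tmr.datetime.data;

import java.util.regex.Pattern;

// Hypothetical demo class, not part of the library; placed in the same package purely for illustration.
public class LocaleParamGetNeDemo {
    public static void main(String[] args) {
        LocaleParamGet_ne params = new LocaleParamGet_ne();
        // "param_MMM" holds an alternation of abbreviated Nepali month names
        String monthPattern = params.date.get("param_MMM");
        boolean hasMonth = Pattern.compile(monthPattern).matcher("15 जनवरी 2014").find();
        System.out.println(hasMonth); // true, because the sample string contains a listed month name
    }
}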
|
William Morris famously said “Have nothing in your house that you do not know to be useful, or believe to be beautiful” and we think our multiple watch winder boxes and cases meet those criteria.
Not only do they provide regular motion to keep your automatic watch in peak working condition, but they also look extremely beautiful due to luxurious materials, attention to detail and quality craftsmanship. If you enjoy collecting automatic watches, the benefits of owning a multiple watch winder cannot be overestimated. Take your time viewing our selection of watch winders, which we stock for three, four and nine watches. If you have any questions, simply email or call us and we’ll be happy to assist you. |
For its fourth launch of the year, Arianespace will orbit four more satellites (satellites 23 to 26) for the Galileo constellation. This mission is being performed on behalf of the European Commission under a contract with the European Space Agency (ESA).
For the third time, an Ariane 5 ES version will be used to orbit satellites in Europe’s own satellite navigation system; with all Galileo spacecraft having been launched to date by Arianespace. Ariane 6 will take over from 2020.
Arianespace is proud to mobilize its entire family of launch vehicles for the benefit of Europe’s ambitions and its independent access to space. |
Q:
Is the trio's Polyjuice escapade in Chamber of Secrets ever discovered?
In HP and the Chamber of Secrets, Harry, Ron, and Hermione "break about fifty school rules" by brewing Polyjuice Potion and drugging and impersonating other students, just in order to get into the Slytherin common room and probe Malfoy for information about the Chamber of Secrets.
Is this little adventure ever discovered by Those in Charge (teachers)? Or even by Malfoy?
If so, did they ever get punished? I can't see how they could fairly have avoided it (other than they're the Golden Trio and always do). In the end they don't actually get any useful information, except that Malfoy isn't the culprit. This escapade doesn't help them at all in their eventual discovery of the culprit and destruction of the Basilisk. It was just motivated by their suspicion of Malfoy, which was essentially based on "he's Slytherin and we don't like him" and turned out to be unfounded.
If not, how could it remain undetected? Hermione's skill with Polyjuice Potion is eventually common knowledge ... more importantly, why didn't Crabbe and Goyle run to tell someone when they woke up and found they'd been drugged? Did the topic never come up between Malfoy and the pair of them, with Malfoy realising he'd been talking to impostors? What did they tell Madam Pomfrey about Hermione's semifelinity after her failed attempt to Polyjuice herself into Millicent Bulstrode?
A:
Do the teachers find out?
Snape works out part of their crime. Others might work it out as well.
When they steal the Polyjuice ingredients, they let off a firework in his classroom. He's bound to realise which ingredients were stolen, and what they were likely used for. Given that Hermione's feline appearance was public knowledge:
So many students filed past the hospital wing trying to catch a glimpse of [Hermione] that Madam Pomfrey took out her curtains again and placed them around Hermione’s bed, to spare her the shame of being seen with a furry face.
— Chamber of Secrets, chapter 13 (The Very Secret Diary)
I'm sure Snape is able to put it all together – somebody has been trying to make Polyjuice, Hermione has had a botched transformation, Hermione probably tried to make Polyjuice. And since the trio operate as a single unit, Harry and Ron were probably involved as well.
In Harry's fourth year, we get confirmation that Snape suspected something funny, but was apparently unable to prove it:
Harry stared back at Snape, determined not to blink or to look guilty. In truth, he hadn’t stolen either [Boomslang skin or Gillyweed] from Snape. Hermione had taken the boomslang skin back in their second year — they had needed it for the Polyjuice Potion — and while Snape had suspected Harry at the time, he had never been able to prove it.
— Goblet of Fire, chapter 27 (Padfoot Returns)
Unless other teachers knew that Snape's stores had been raided, and what exactly had been stolen, it would be much harder for them to work it out unless he told them.
However, I don't recall any indication that Snape knew why they were making Polyjuice. For example, he never seems to suspect they were infiltrating the Slytherin common room.
Were they ever punished? If not, why not?
I don't recall any punishments.
The most serious (known) crime would be stealing from Snape's stores, something he was never able to prove. McGonagall would object if Snape inflicted major punishment based on an unproven crime.
And even if he can prove that Hermione has taken Polyjuice, she could have obtained the ingredients and/or potion itself from elsewhere. (A plausible line of thinking is that Hermione ordered dodgy Polyjuice via mail order, and that's why she looks like a cat.)
Apparently Madam Pomfrey isn't a source of probing questions:
“It’s okay, Hermione,” said Harry quickly. “We’ll take you up to the hospital wing. Madam Pomfrey never asks too many questions….”
— Chamber of Secrets, chapter 12 (The Polyjuice Potion)
She probably knows that Hermione was using Polyjuice – so that she can treat her – but again, not why or how.
Given that Hermione is ridiculed by her peers and looks like a cat for a week, some may also consider her sufficiently chastised that she's unlikely to try experimental potions again in a hurry. Why punish her further?
Did Malfoy ever work it out?
Seems unlikely. When Harry and Ron are chatting with him, he doesn't seem to rate Crabbe or Goyle's mental prowess:
“What’s the matter with you two?”
Far too late, Harry and Ron forced themselves to laugh, but Malfoy seemed satisfied; perhaps Crabbe and Goyle were always slow on the uptake.
— Chamber of Secrets, chapter 12 (The Polyjuice Potion)
The whole episode could be written off as extreme dopiness. If they stumble in later claiming they were drugged, what's more likely – they were actually drugged, or just having another moment? There are plenty of other plausible reasons they could be knocked out for a few hours – Weasley twins, Peeves, bumbling incompetence – that don’t involve foul play. Hanlon’s razor, etc. I don’t think they ever realised something had happened.
Even if they noticed, it would be hard for them to say something which would get Malfoy really suspicious. And by the time he learns about Polyjuice in years to come, I don't think this episode would stand out enough to be unusual.
If Malfoy ever suspected impostors in the Slytherin common room (especially the trio), he would raise merry hell about it, whether or not he could prove it. The fact that he never mentions it seems suspicious.
A:
Snape always suspected, and actually accuses Harry of stealing Polyjuice supplies from his storeroom in Goblet of Fire (falsely, this time; Moody was the one stealing the supplies):
I give you fair warning, Potter," Snape continued in a softer and more dangerous voice, "pint-sized celebrity or not - if I catch you breaking into my office one more time -"
"I haven't been anywhere near your office!" said Harry angrily, forgetting his feigned deafness.
"Don't lie to me," Snape hissed, his fathomless black eyes boring into Harry's. "Boomslang skin. Gillyweed. Both come from my private stores, and I know who stole them."
Harry stared back at Snape, determined not to blink or to look guilty. In truth, he hadn’t stolen either of these things from Snape. Hermione had taken the boomslang skin back in their second year - they had needed it for the Polyjuice Potion - and while Snape had suspected Harry at the time, he had never been able to prove it.
Goblet of Fire Chapter 27: "Padfoot Returns"
However, he apparently could never prove anything; the incident is never brought up by any other authority figures over the course of the series, and they're never even threatened with punishment for it (except by Snape).
As to how it was never discovered; there doesn't actually appear to be any reason to suspect it could be discovered:
As far as we know Hermione never makes Polyjuice again (she steals the stuff they use in Deathly Hallows). Although she's able to identify it in Slughorn's potions class in Half-Blood Prince, that's no reason for anyone to be suspicious; knowing things above her grade level is Hermione's defining character trait
Crabbe and Goyle have no reason to believe they weren't simply the victim of a prank; with the Weasley Twins running amok, the school is most likely used to them by now
We don't know how long they were awake in the cupboard Harry and Ron locked them into, or how long they were stuck in there before they got out; all we get is:
Harry could feel his feet slipping around in Goyle's huge shoes and had to hoist up his robes as he shrank; they crashed up the steps into the dark entrance hall, which was full of a muffled pounding coming from the closet where they'd locked Crabbe and Goyle.
Chamber of Secrets Chapter 12: "The Polyjuice Potion"
It seems unlikely that they'd been awake long; if you're going to impersonate someone with a potion you know lasts for exactly an hour, why would you render them unconscious for any period of time shorter than an hour? With the amount of detail that went into this plan, that seems like an improbable oversight.
What's more, they obviously can't get out on their own, or not easily. If neither of them noticed their watches suddenly jump an hour, they and Malfoy would have no reason to be suspicious
If one of them did notice the time difference, is there any reason to think Malfoy would believe them? It's pretty clear from Harry and Ron's conversation with him that Malfoy doesn't think highly of his "friends'" intelligence; it could easily be rationalized as them getting confused, or distracted, or losing track of time in the Great Hall
Finally, even if either Crabbe or Goyle noticed the time gap, and even if Malfoy believed them and they went to Snape (who had his own suspicions), what connects the incident to Harry, Ron, and Hermione? Crabbe and Goyle never saw Harry or Ron, and Snape has no proof they were involved in stealing from his stores. The only evidence they have is that Hermione (entirely legitimately) checked out a book that happens to contain the recipe for making Polyjuice Potion, shortly before several of the ingredients were stolen from Snape's store room and Malfoy suspected someone of impersonating Crabbe and Goyle.
Although that's certainly an abundance of coincidence, there's no proof of involvement, especially when all parties have a well-documented mistrust of one another
The issue of Madame Pomfrey is hand-waved in the book:
"It's okay, Hermione," said Harry quickly. "We'll take you up to the hospital wing. Madam Pomfrey never asks too many questions..."
Chamber of Secrets Chapter 12: "The Polyjuice Potion"
It doesn't seem like it would be difficult to come up with an excuse that would satisfy her; in a castle filled to the brim with still-maturing students, all of whom have immediate access to reality-warping powers, I shudder to think what she considers a "normal" injury
|
not an arbiter of taste
Sunday, February 26, 2006
bits and pieces
A little round up of what's been grabbing my attention around the internets lately, amongst the blogs and the non-blogs alike.
---------------
First stop at a new food blog, Tea and Cookies, where a certain Ms. Tea meticulously documented her farcical – if also a little alarming – descent into food blog madness. Make sure you swallow whatever you're chewing before you read it. I don't want to be responsible for anyone choking or anything!
---------------
The next stop shows us a very pretty new(ish) blog, Harriet's Tomato, whose recent post touched on two of my favorite things: British farmhouse cheese and who else but the lovable Wallace himself. Come to think of it, if my TV wasn't buried somewhere in the mountain of boxes, and Neal's Yard wasn't so far away, I wouldn't be typing up this post now. Instead you would find me parked in front of the tube watching the latest Wallace and Gromit adventure while munching on a good wedge of Lancashire Poacher or Stinking Bishop.
---------------
How did I find out about these new blogs, you asked? Why, I am not such an egoïste that I technorati myself on a regular basis! What a preposterous idea!
---------------
Not on the blog circuit, there's the piece by Rachel Cooke in the latest Observer Food Monthly that got my eyes tearing up on this drizzly morning. She reported on the miraculous recovery of Fergus Henderson, arguably the most beloved cook in Britain. Fergus was diagnosed with debilitating Parkinson's disease in 1998, and every St. John regular has been a witness to his deterioration, which had all but taken him over by the time I last saw him in London.
It was hard watching that frail figure sitting by the bar, bits of his body frequently performing acts of revolt against him, amidst a crowd of patrons doing their best to ignore what was hardly ignorable. I know pity is not what anyone with any kind of disability needs, but what else can you feel, especially after you've finished admiring what this man has done – which, despite popular belief, is not making people eat blood and gore. Instead, what he actually did was to bring back the proper respect for the truly exceptional quality of British artisanal produce and meats, and in so doing he showed all of us the once and future of British cuisine.
Now that they've drilled a hole into his skull and fitted him with an electrode, Rachel Cooke reported him delightfully tearing into and devouring tiny langoustines, a feat that required the kind of dexterity he'd lost long ago and has only regained after the operation. Bravo to science, and best of luck to Fergus. Tonight, we should all grab a big glass of wine and send a big round of cheers toward the general direction of London. To your continuing recovery. Cheers.
---------------
When the first delivery of the Sunday Times arrived at our new house, I was delighted to see that the magazine's food section featured Amanda Hesser's ode to the pretty little citrus, kumquats. I must say I am not surprised to learn that they are not all that well-known here in the US. Kumquats – and their cousins, mandarinquats – are becoming less used in Thai cooking too. Traditionally we used kumquats and mandarinquats in many types of relish and curries, cut into halves and squashed flat to add not only deliciously sweet and sour notes to the dish, but a pleasant bitterness as well. Another reason I love kumquats is their adorable name in Thai, Som Jeed, which – when pronounced with the proper rising tone – means not simply tiny citrus but, particularly, teeee-nie citrus. How cute is that?
---------------
And on to a slightly self-serving bit, there's this, this, and this. I'm a lucky girl.
|
In light of the UFC's announcement that they had shut down another streaming service and would be prosecuting infringers, I decided to look into what happened after the last such announcement they made back in 2012.
I initially felt they would have difficulty suing people under traditional copyright infringement statutes used by people like the MPAA (Title 17 U.S.C. § 101 et seq.) because of the difficulty proving the viewer actually possessed the object or engaged in one of the other acts rendering them liable. To the best of my knowledge, no one has ever been sued for copyright infringement because they viewed a stream. If Zuffa was to be the first to sue someone for this, they run the risk of setting unfavourable precedents.
It turns out that instead of risking setting unfavourable case law, the UFC lawyers appear to have decided to take a slightly different route, instead suing under Title 47 of the United States Code, §§ 553 and 605.
Section 553 prohibits persons from intercepting or receiving "any communications service offered over a cable system, unless specifically authorized to do so..." Section 605 proscribes the unauthorized interception and publication of any "radio communication."
What this essentially means is instead of suing for copyright infringement, they sued the streamer for intercepting or receiving their Pay-Per-View signal without having the authorization to do so.
They successfully sued at least one person under this act, and I have no reason to doubt the claims from their press release that the actual number was hundreds. In this case, the defendant chose not to defend the allegation, and as a result a default judgement was awarded against him.
He was ordered to pay $2,000 in statutory damages ($1,000 per event streamed, the minimum damages allowed by law), $4,000 in enhanced damages and $5,948.70 in attorney's fees and costs. All in all streaming two Pay-Per-View events cost him $11,948.70. |
JSON: Writing Output
July 15, 2011
In the previous exercise we wrote a function to read JSON input and parse it into an object in the native language. In today’s exercise we write the inverse function.
Your task is to write a function that takes a JSON object and writes it in text format. When you are finished, you are welcome to read or run a suggested solution, or to post your own solution or discuss the exercise in the comments below.
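For concreteness, here is a minimal sketch of one way to do it, written in Python; the exercise is language-agnostic, this is not the suggested solution, and the function names (write_json, write_string) and the sample data are my own. It assumes the parsed object from the previous exercise maps JSON onto native dictionaries, lists, strings, numbers, booleans and None, and it escapes only the most common characters.

def write_json(value):
    # Serialize a native value (dict, list, str, number, bool, None) to JSON text.
    if value is None:
        return "null"
    if value is True:
        return "true"
    if value is False:
        return "false"
    if isinstance(value, str):
        return write_string(value)
    if isinstance(value, (int, float)):
        return repr(value)
    if isinstance(value, dict):
        pairs = ", ".join(write_string(str(k)) + ": " + write_json(v) for k, v in value.items())
        return "{" + pairs + "}"
    if isinstance(value, (list, tuple)):
        return "[" + ", ".join(write_json(v) for v in value) + "]"
    raise TypeError("cannot serialize %r" % (value,))

def write_string(s):
    # Quote a string, escaping only the usual suspects.
    escapes = {'"': '\\"', '\\': '\\\\', '\n': '\\n', '\t': '\\t', '\r': '\\r'}
    return '"' + "".join(escapes.get(c, c) for c in s) + '"'

print(write_json({"name": "example", "values": [1, 2.5, True, None]}))
# prints {"name": "example", "values": [1, 2.5, true, null]}

A fuller solution would escape the remaining control characters, deal with non-ASCII code points, and perhaps pretty-print with indentation. |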
Theater der Schatten
Theater der Schatten is a theatre in Bavaria, Germany.
Category:Theatres in Bavaria |
Jewish power in Long Beach
A few weeks ago, I was driving down the 710 and talking with an old colleague about the person I was en route to interview. My subject was Josh Lowenthal, the self-styled black sheep of his Long Beach family.
See, Josh has done well: he attended Cornell, lived in Israel and started and sold a few telecom businesses. But he remains the only member of the Lowenthal tribe to not hold elected office. His father, Alan, is a state senator; mother, Bonnie, is a Long Beach Councilwoman, as is his sister-in-law, Suja. And her husband, Josh’s brother Dan, is a Superior Court Judge.
“What’s the angle?” my friend asked.
“Well,” I quipped, “I’m pitching the profile as a microcosm of Jewish world dominance.”
Lowenthal, 38, grew up in a progressive Jewish family, the kind of home that sang Bob Dylan songs on Shabbat. His parents, now divorced, both taught psychology at Cal State Long Beach and were active in the community. On returning home in the afternoon from public school, he’d encounter community meetings in his living room, often organized by his mother to address homeless issues.
“There is a deeply felt sense of tikkun olam [heal the world] that is based in that family in ways that I wish all families would emulate,” said Assemblyman Mike Feuer, whose then-L.A. City Council staff Josh Lowenthal joined after returning from Israel in the mid-‘90s. “It may not be always exclusively stated, though it is evident in the way they live, but one mission in life is to reach out and help other people. It is more than a political imperative for that family. For the Lowenthals it is a moral imperative.”
The clearest example of this in Josh Lowenthal’s life can be found in a social service building with an industrial façade in the Port of Long Beach. The Long Beach Multi-Service Center is provided by the city to 14 agencies, including Goodwill, the Long Beach Rescue Shelter and Children Today. Here the homeless come to shower, do their laundry, check their voicemail, meet with social workers or, particularly in the case of children, simply get off the street.
Last month, Children Today served 762 children. Six weeks to 6 years old, they met from 7:30 a.m. to 5:30 p.m. with caregivers who help them cope with losses as seemingly trivial, though not insignificant, as their toys and as traumatic as a family member.
“It’s day care with a therapeutic component,” said Dora Jacildo, the charity’s executive director.
Children Today started in 1997, and Lowenthal joined the board four years later. It provided a channel for Lowenthal, who by the end of the dot-com boom was doing quite well, to give back to the people he thought needed the most help.
“Bye Josh!”
“Bye Josh!”
“Bye Josh!”
The toddlers parrot their teacher as he walks in and out of their classroom on a recent visit. Lowenthal wears a gray pinstripe suit and light-blue shirt, his beard trim and his prematurely gray hair gelled and spiked. He speaks as proudly of Children Today—the only homeless program accredited by the National Association for the Education of Young Children—as he does of the telecommunication companies he started or his nightclub, Sachi.
“For him, it’s a world of promise. And he looks for vehicles to bring that promise to fruition,” said his mother. “He experienced so much support as a youngster growing up in Long Beach, and I think he is trying his hardest to give back.”
And if Ellis isn’t recalled, this certainly won’t be the last time Josh Lowenthal is mentioned as a political candidate.
“I don’t have to be an elected official,” he hastened to add. “I really believe there are two types of elected officials: there are those who want to do something and those who want to be something. I really want to do something—and will, whether elected or not.”
|
994 A.2d 514 (2010)
192 Md. App. 354
FIRE AND POLICE EMPLOYEES' RETIREMENT SYSTEM OF The CITY OF BALTIMORE
v.
Amy MIDDLETON.
No. 02503, September Term, 2008.
Court of Special Appeals of Maryland.
May 6, 2010.
*515 Herbert Burgunder, Jr. and William R. Phelan, Jr. (George A. Nilson, City Solicitor on the brief), Baltimore, MD, for Appellant.
Duane A. Verderaime (O'Connor & Verderaime, P.C., on the brief), Baltimore, MD, for Appellee.
Panel: DEBORAH S. EYLER, MEREDITH and MATRICCIANI, JJ.
MATRICCIANI, J.
Appellant Fire and Police Employees' Retirement System of the City of Baltimore appeals the reversal, by the Circuit Court for Baltimore City, of its decision to grant non-line-of-duty disability retirement to appellee, Amy Middleton. The appellant presents one question for our review:
I. Did the court err in reversing the administrative decision to award non-line-of-duty disability because the administrative decision was supported by substantial evidence in the record and the hearing examiner correctly applied the law?
Finding substantial evidence to support the hearing examiner's decision, we shall reverse the judgment of the circuit court.
Facts
On July 4, 2006, the appellee, a Baltimore City police officer, was working crowd control at the Inner Harbor Park in Baltimore when she received a "Signal 13" call, indicating that a fellow officer needed immediate assistance. She described her response as follows:
[S]everal of us, pretty much everybody that was available by foot, took off. It. . . required me to go down a set of steps, jump over some walls at Harborplace. . . down the curb across the street, up a curb. And then there was just a wall of people, [who] . . . were pushing and shoving. People were reaching for me . . . somebody came across through me and I pushed their, *516 grab[bed] their arm and pushed them aside.
The appellee did not reach her destination before the call was cancelled, and she returned to her post. After returning to her post, the appellee started to feel pain in her lower back.
The appellee had severe pain the following morning and informed her sergeant that she needed to visit the clinic at Mercy Hospital. The doctor examined her and recommended that she be placed on light duty with "no suspect apprehension, no prisoner contact . . . [she should] be able to change positions at will if needed." The appellee remained on light duty and under the care of the doctors at Mercy until September 11, 2006, when she was released to full duty.
The appellee remained on full duty until March 15, 2007, when she reported to Mercy complaining of lower back pain that she had noticed two days earlier after she had been baking cookies at home. The pain spread to her right leg at some point thereafter. As a result of these symptoms the medical staff scheduled an MRI for March 19, 2007, and advised the appellee to use her medications and ice as needed.
On June 13, 2007, Dr. Mohammed H. Zamani conducted an independent medical evaluation and concluded that the appellee was capable of working without restrictions. The examination was performed on behalf of the City of Baltimore in connection with the appellee's worker's compensation claim arising from the incident on July 4, 2006. In the aftermath of the March 2007 hospital visit, the appellee was seen by three other doctors between August 2007 and March 2008, all of whom opined that her medical condition was chronic in nature and prevented her from performing the essential functions of a police officer.
On November 13, 2007, the appellee applied for line-of-duty disability. On April 28, 2008, a hearing examiner from the Fire and Police Employees' Retirement System held a hearing to determine whether the appellee was eligible for line-of-duty disability. On May 8, 2008, the hearing examiner issued a written decision in which the examiner denied line-of-duty disability retirement but awarded non-line-of-duty disability retirement to the appellee. The examiner found:
[T]he Claimant did prove by the preponderance of the evidence that she has suffered an illness or injury of such a nature that she is totally and permanently incapacitated for the further performance of the duties of her job classification as a police officer[.] However, the Claimant did not prove by the preponderance of the evidence that her disability was a result of an injury arising out of or in the course of her duties as a. . . [p]olice [o]fficer. The Claimant was completely discharged in September 2006 with full range of motion and no complaints as a result of the accident of July 4, 2006. Dr. Zamani does not indicate the Claimant's complaints are a result of the July 4, 2006 incident, nor does Dr. Halikman indicate the injury occurred as a line of duty incident. There was no treatment from September 2006 until March 2007. The Claimant specifically noted that she first noted pain to her lower back and numbness to her feet while baking cookies in the kitchen. Diagnostic tests were contradictory and therefore, inconclusive as to the cause of the Claimant's injury . . . [I]t is the opinion of the Hearing Examiner that the Claimant recovered from her injury of July 4, 2006 and therefore her complaints of March 2007 were not a result of a line of duty incident.
On November 20, 2008, the Baltimore City Circuit Court held a judicial review *517 hearing and reversed the decision of the hearing examiner. The court remanded the case with instructions to grant the appellee's application for line-of-duty retirement. The appellant timely noted this appeal.
I.
The appellant contends that the circuit court erred in reversing the administrative decision, arguing that the standard of review is extremely narrow for an administrative decision. Furthermore, the appellant argues that the decision of the hearing examiner was supported by substantial evidence and was not based on prejudicial legal error.
The appellee contends that the hearing examiner's decision was not supported by the record and was therefore erroneous. The appellee contends that, although she reached maximum medical improvement by September 11, 2006, she never recovered fully from the incident on July 4. The appellee also argues that the hearing examiner erroneously relied on the cookie-baking incident as an explanation for the recurrence of pain that ultimately forced her to retire.
Our role in reviewing an administrative decision is precisely the same as that of the circuit court. Bd. of Trs. for the Fire & Police Emples. Ret. Sys. v. Mitchell, 145 Md.App. 1, 8, 800 A.2d 803 (2002). We must presume that a decision made by an administrative body is prima facie correct. Marsheck v. Board of Trustees of the Fire & Police Employees Retirement Sys., 358 Md. 393, 402, 749 A.2d 774 (2000). We must limit our review of a final decision by an administrative agency to determine whether the agency had substantial evidence to support its decision and whether that decision is free from prejudicial legal error. Id.
In applying the substantial evidence test, we must decide whether a reasoning mind reasonably could have reached the factual conclusion the agency reached. Md. Aviation Admin. v. Noland, 386 Md. 556, 571, 873 A.2d 1145 (2005) (citations omitted). We will refrain from making our own findings of fact or substituting our judgment for that of the agency if the record contains substantial evidence supporting the agency's decision. Id. We have no power to substitute our assessment of credibility for that of the agency if there was evidence to support the findings of fact in the record before the agency. Terranova v. Board of Trustees, 81 Md. App. 1, 13, 566 A.2d 497 (1989). However, we will not uphold the agency's order unless it is sustainable on the agency's findings of fact and for the reasons stated by the agency. United Parcel Serv. v. People's Counsel, 336 Md. 569, 586, 650 A.2d 226 (1994).
In addition to the principles which normally guide our review of an administrative decision, the Baltimore City Code provides a statutory standard of review for decisions made by a hearing examiner from the retirement system. The Retirement Act, Baltimore, Maryland City Code (Baltimore City Code) art. 22, §§ 33(l)(12) (1983 Repl.Vol. & 1995 Supp.), states that a final determination of a hearing examiner is presumptively correct and may not be disturbed on appeal unless it is arbitrary, illegal, capricious or discriminatory.
Under § 34(e-1)(1) of the Baltimore City Code, line-of-duty disability benefits are available for any member whom the hearing examiner has determined to be totally and permanently incapacitated and thus unable to further perform the duties of his or her job classification. An applicant for line-of-duty disability benefits must also prove that the total and permanent incapacitation was the result of an *518 injury arising out of and in the course of the actual performance of duty. We have explained the difference between line-of-duty disability and non-line-of-duty disability as such:
If the injury arose out of or in the course of the actual performance of duty, then the claimant who is totally incapacitated is entitled to special disability benefits; if the injury was caused by any other means, then the claimant who is totally incapacitated is entitled to ordinary disability benefits.
Marsheck, 358 Md. at 410, 749 A.2d 774. The applicant has the burden of proving by a preponderance of the evidence that the disability was the result of an injury arising out of and in the course of the actual performance of duty. Baltimore City Code, Art. 22, § 33(l)(10).
The hearing examiner determined that the appellee was disabled, but not due to an on-the-job injury. The hearing examiner relied on the fact that neither Dr. Halikman nor Dr. Zamani indicated that the injury occurred as a result of a line-of-duty incident. With respect to Dr. Halikman's report, the hearing examiner's factual conclusions are simply not accurate. Dr. Halikman noted in his report that "Ms. Middleton described being injured in a line-of duty accident of July 4, 2006." He also noted that she described a gradual recurrence of her pain and that she "[gave] no history of any other line of duty injuries of any significance." Moreover, the appellant has conceded the fact that, according to his brief, "[t]here was medical opinion evidence from Dr. Halikman . . . to the effect that Off. Middleton's pain in 2007 was caused by the incident at the Inner Harbor on July 4, 2006."
The hearing examiner also found that Dr. Zamani, who examined the appellee on behalf of the city in June of 2007, did not indicate that her complaints were a result of the accident on July 4. Dr. Zamani described two separate incidents in his report: July 4, 2006 and May 22, 2007. The May 22 incident was described as such: "[The appellee] reports sitting in the office and when trying to get up felt pain and pop in the lower back with pain radiating to the upper back." The summary/discussion section of Dr. Zamani's report is as follows:
The examinee, according to history and review of the file was involved in the above dated accident. There was just some struggling and pushing and no fall and no radicular pain[.] She received rather extended care and treatment and was on light-duty for awhile[.] She was discharged from the care of PSI with maximum medical improvement on September 11, 2006 . . .
On May 22, 2007 when standing up she had some pain in the neck and back[.] The examinee currently has no complaints regarding the neck and the examination of the neck is quite normal[.]
As far as the back is concerned, by x-ray she does have some scoliosis and this is curvature of the spine and degenerative changes and loss of disc space, as well as pseudoarthroses[.]
The combination of all, and being somewhat overweight and multi pregnancies and cesarean section make the abdominal wall muscle somewhat weak, and is responsible for the current problem[.]
I feel the examinee has reached maximum medical improvement from July 4, 2006 and May 22, 2007 accidents[.] The current problem, scoliosis and congenital abnormality is preexisting and the main source of the back discomfort . . .
The examinee is capable of working and doing activity as usual without any restriction[.]
*519 A reasonable mind could conclude from this report, as the hearing examiner did, that congenital abnormalities caused the appellee's disability. Although a claimant is not required to show that the line-of-duty injury is hermetically sealed from any pre-existing condition or prior injury, Hersl v. Fire & Police Employees Retirement System, 188 Md.App. 249, 268, 981 A.2d 747 (2009), the hearing examiner has discretion to accept any explanation for a disability which is supported by substantial evidence. Indeed, Dr. Zamani does not indicate that the appellee is disabled, merely that she has continuing back pain and he mentions the appellee's pre-existing medical conditions as the main source of the back discomfort.
We dealt with a similar question of causation in Eberle v. Baltimore County, Maryland, 103 Md.App. 160, 652 A.2d 1175 (1995). In that case, Mr. Eberle was working as a meat-cutter and sustained a work-related injury to his right knee. Id. at 161, 652 A.2d 1175. He later obtained employment with the Baltimore County Government, and began his career with a clean bill of health and no work restrictions. Id. at 162, 652 A.2d 1175. While working for the County, Mr. Eberle sustained a serious knee injury which resulted in the filing of a workers' compensation claim and the payment of temporary total disability benefits. Id. Mr. Eberle returned to work for the county after the knee injury, although he was unable to work in his old job as a truck driver. Id. at 163, 652 A.2d 1175. Eventually, he found that he could not stay on his feet for any period of time and as a result he applied for accidental disability retirement benefits.[1]Id. The Board of Appeals found that Eberle suffered from degenerative arthritis in his knees and thus he did not meet the burden of proving the causal connection between his present disability and the two accidents he sustained at work. Id. at 165, 652 A.2d 1175.
In Eberle, we stated that in order for an injury to be accidental under the Baltimore County Code, it must result from some unusual strain or exertion or some unusual condition because the statutory definition of accidental did not include unexpected results not produced by accidental causes. Id. at 170, 652 A.2d 1175. Therefore, an unexpected result attributable to a predisposition to a pre-existing physical condition was not an accidental injury. Id. We held:
No medical report indicated that Eberle's disability was caused by his injuries at work. Neither did any report specifically conclude that Eberle would have suffered this disability in the absence of these injuries. Based on the medical reports that were riddled with references to a preexisting degenerative arthritis problem in addition to hypertension and a chronic overweight problem, it was not error for the Board of Appeals to conclude that Eberle's disability was not the natural and proximate result of the accidental injuries he suffered.
Id. at 174-75, 652 A.2d 1175.
The facts in this case are similar to those in Eberle. Both the appellee and *520 Eberle were diagnosed with pre-existing degenerative conditions. Both suffered accidents at work that exacerbated these problems and eventually rendered them unable to work in the same capacity as they had before the accident. As in Eberle, we are convinced that there was relevant and substantial evidence from which a reasonable mind could conclude that the appellee's disability was not the result of the injuries sustained in the course of duty.
To support her position, the appellee relies on Hersl v. Fire & Police Emples. Ret. Sys., 188 Md.App. 249, 981 A.2d 747 (2009), where we reversed the decision of the Circuit Court for Baltimore City and remanded for entry of an order awarding the appellant a line-of-duty pension. Hersl is distinguishable from the instant case. In Hersl, two doctors knew of a fireman's pre-existing heart condition and each independently concluded that he was permanently disabled from performing the duties of a firefighter based on line-of-duty injuries rather than the heart condition. Id. at 264, 981 A.2d 747. Furthermore, the hearing examiner's conclusion was predicated on an error of law whereby the examiner substituted his opinion for that of the doctor as to the permanency of the line-of-duty injuries suffered by the firefighter. Here, unlike in Hersl, the hearing examiner's conclusion is supported by substantial evidence in the form of Dr. Zamani's expert opinion, which constitutes evidence that the injuries sustained on July 4, 2006, were not permanent, despite other medical evidence to the contrary.
We dealt with a similar split in expert opinions in Terranova v. Board of Trustees, 81 Md.App. 1, 566 A.2d 497 (1989). The appellant cites to this case to support its argument that the opinion of one doctor, even if it differs from several other doctors, is enough to sustain an administrative decision. In Terranova, we held that "the fact that the opinions of three doctors go one way and the opinion of a fourth doctor another does not make the report of that fourth insubstantial." Id. at 11-12, 566 A.2d 497. The contrarian opinion of the fourth doctor was especially substantial in Terranova because the credibility of the respective physicians played an important role in the panel's decision. In preferring Dr. Zamani's report here, the hearing examiner shows that she found it more credible and that she viewed it as substantial.
We are not permitted to disturb the hearing examiner's assessment of credibility unless that assessment is arbitrary, illegal, capricious or discriminatory. The hearing examiner's decision is none of the above. The hearing examiner found that the appellee was completely discharged from the hospital's care in September 2006 with full range of motion and no complaints as a result of the accident. The doctor suggested that she continue with her home exercises and stretches after she was discharged in September 2006, but no further follow up was required. The appellee did not undergo formal medical treatment from September 2006 until March 2007. These factual findings were used to buttress the ultimate conclusion by the hearing examiner that the recurrence of pain in 2007 was not attributable to the incident on July 4. The inferences drawn by the hearing examiner are supported by a fair reading of the record. Therefore, we reverse the decision of the circuit court and remand the case to that court for the entry of judgment in favor of the appellant.
JUDGMENT OF THE CIRCUIT COURT FOR BALTIMORE CITY REVERSED. *521 CASE REMANDED TO THAT COURT FOR THE ENTRY OF JUDGMENT IN FAVOR OF APPELLANT.
COSTS TO BE PAID BY APPELLEE.
NOTES
[1] Baltimore County's statutory framework for accidental disability retirement benefits in Eberle was nearly identical to Baltimore City's. Id. at 169, 652 A.2d 1175. These benefits were available to a member who is "totally and permanently incapacitated for duty as the natural and proximate result of an accident occurring while in the actual performance of duty at some definite time and place, without willful negligence on his part[.]" Id. This language, in turn, is practically the same as that of § 34(e-1)(1) of Article 22 of the Baltimore City Code, which addresses line-of-duty disability benefits for fire and police employees.
|
We explore a new magical philosophy that focuses heavily on consciousness alteration, shamanistic Gnosis, and building a community based on mutual compassion along with ancestor veneration. It focuses less on results and more on personal development paired with harmonious group dynamics. The goal of this occult system is to reintroduce the world to tribalism in a modern, accepting context, always group-inclusive but supreme in championing the growth of the individual.
Astral Guest – Threskiornis, co-author of Emergent Magick: Rebuilding Our Tribes Through Ritual and Meaning.
This is a partial show for nonmembers. For the second half of the interview, please become a member: http://thegodabovegod.com/members/subscription-levels/ or patron at Patreon: https://www.patreon.com/aeonbyte
More information on Threskiornis: http://emergentmagick.com/
Get Threskiornis’ book: https://amzn.to/2XbDNFI
Listen to this and all shows on YouTube or iTunes (available on all other podcast providers like Stitcher or Spotify).
Download these and all other shows: http://thegodabovegod.com/
Become a patron and keep this Red Pill Cafeteria open: https://www.patreon.com/aeonbyte |
Art, wine and wholesome, house-made food: It’s a can’t-miss recipe. But no one in Central Oregon had truly nailed it until the Clearwater Gallery in Sisters expanded its wine bar to a full-service restaurant last year.
The Open Door has tapped into a formula that perfectly fits the laid-back, artistic ambiance of the town of Sisters. One block south of U.S. Highway 20 (at West Hood Avenue and South Oak Street), the cafe’s tables are placed among gallery exhibits, beside stunning oil paintings, delicate watercolors and handmade craft items.
Additional seating is in the slightly more rustic wine bar, where relaxed Monday-night concerts draw a passionate crowd of local music lovers. Among them is Clearwater Gallery owner Julia Rickards, who displays landscapes and wildlife art by her husband, Dan, among the paintings in the gallery.
Like the food and atmosphere, service at The Open Door is heartfelt and genuine, if inconsistent. On our second visit, a lunchtime arrival, impeccable service greeted me and my dining companion. But previously, when we had dropped in for Monday dinner, there were delays and confusion. While I’m sure that our simultaneous arrival with numerous concert-goers was the main factor in the chaos, we also sensed a certain inexperience in the service staff.
Service snafus
Had I known in advance about the music, I would have made a reservation. Because I did not, we were relegated to a high stool at the wine bar. In short order, however, a server informed us that a reservation had been canceled, and she was able to reseat us at an isolated table near the gallery’s front door.
Shielded from other tables by room dividers hung with paintings, it would have been a romantic spot, had not two individuals stood beside the door discussing business for 15 minutes.
We were especially aware of that conversation because we were waiting for menus and water to be delivered. I finally rose to search for a server, and found one who assured me that we were “next on her list.”
We ordered a salad to be shared, followed by individual entrees. Much to our surprise, all courses arrived together. Our server expressed wonder that we would have wanted our salad to begin; apparently, we should have specified that desire when we ordered.
Delicious salad
Regardless, the “Wholesome Grain Salad” was wonderful. Kernels of barley were mixed with black beans and served over mixed greens, then tossed with sweet golden raisins, cherry tomatoes, sliced avocado and tender leaves of kale. The blend of textures and flavors was delicious, and it was enhanced by a vinaigrette dressing made with herbs and lemon juice.
The menu of entrees is limited, but it is supplemented with specials, including pasta and lasagna preparations that change nightly. My lasagna was good but not great. Served upon a bed of greens, it was layered with more ricotta cheese than ground beef. I would have liked extra tomato sauce to balance the ricotta. My companion had a Mediterranean flatbread, not unlike an unleavened Greek pizza. Baked with feta and mozzarella cheeses, topped with hummus, kale, artichoke hearts, kalamata olives and tomatoes, she found it tasty and not overly heavy.
That left room for her favorite food — chocolate. A flourless chocolate cake was, she said, “to die for.” I thought it was a nice brownie with whipped cream on top, but my sweet tooth is subdued. She said it was one of the best cakes she’s ever had.
Beets and ‘sammies’
Service was streamlined at our subsequent lunch. There were no interruptions in seating, order-taking or delivery of food. It made us think that our first experience might have been an aberration. We began with a salad of warm, coarsely chopped red beets, tossed with crumbled goat cheese and roasted almonds, served atop fresh arugula — its peppery flavor balanced with a dressing of brown-sugar vinaigrette. It was excellent.
My companion had a “Ham Sammie,” a baked croissant sandwich that paired smoked local ham with Swiss cheese. It got its unusual flavor — too sweet for me, but my friend thoroughly enjoyed it — from layers of honey Dijon mustard and chunky Granny Smith applesauce, made in-house.
I chose a blackboard special that coupled turkey with roasted bell peppers and Brie cheese on lightly grilled wheat bread. Similar to a regular menu item called the “Miss Crenshaw,” with turkey and avocado, tomato and red onion, it made a nice midday bite.
We also brought a sandwich home, an Italian panini. With salami and pepperoni pressed into a bruschetta, along with pepperoncini peppers and melted Havarti cheese, it was not unlike a mini pizza. But that was perfect for the teenager who awaited it.
Pisano’s Pizza, which closed its NorthWest Crossing store in June, has a new location: Pisano’s Woodfired Artisan Pizza opened Saturday in the former Subway space at Tumalo Junction. Owner-chef Ed Barbeau said his menu of thin-crust, New York-style pizzas is complemented with a half-dozen salads and an upscale wine and beer bar. 64670 Strickler Ave., Bend; 541-312-9349, www.facebook.com.
Having upgraded from a “burger deli,” the Big Belly Grill House in the Sunriver Business Park has added a selection of three-egg Benedicts, accompanied by pancakes or waffles, priced at no more than $12.50. A variety of meats — pulled pork, chicken and tri-tip steak, smoked in-house — are served throughout the day. Open 6 a.m. to 6 p.m. every day. 56815 Venture Lane, Sunriver; 541-382-3354, www.bigbellygrillhouse.com.
|
Q:
Animating a view on top of another view
I want to animate a view on top of another view in my iphone app. I basically want my view to look like the apple keyboard except with my custom controls. When i click a button I want the new view to animate up, from the bottom of the screen, on top of part of the view.
How would I do this?
A:
Just add the view somewhere outside the visible frame of the super view (the view you are animating on top of), then change its position using an animation block. It will slide up.
- (void)addView
{
    // Supply the actual views from your own controller here; the property
    // names below are only placeholders and are assumed to exist.
    UIView *myControlView = self.controlView;  // the panel that slides up
    UIView *myMainView = self.view;            // the view being covered

    // Park the panel just below the visible screen, then add it.
    myControlView.frame = CGRectMake(0, 480, 320, 100); // use real numbers
    [myMainView addSubview:myControlView];

    // Animating the frame change makes the panel slide up from the bottom.
    [UIView beginAnimations:nil context:NULL];
    [UIView setAnimationCurve:UIViewAnimationCurveEaseIn];
    myControlView.frame = CGRectMake(0, 380, 320, 100);
    [UIView commitAnimations];
}
There are a number of other ways to do this as well. For instance, you can have a view set exactly where you want it and then translate it off screen to "hide" it. When you want it to come back, transform it again with an affine transform inside an animation block and it will slide back up. That way works a little better with views that are pre-laid out, like using Interface Builder and such where you don't necessarily know what the positions of the frame are, or they're subject to change in other code.
|
Monocyte-macrophage colony-stimulating factor is produced by a variety of cells, including macrophages, endothelial cells and fibroblasts (see, Ralph et al., "The Molecular and Biological Properties of the Human and Murine Members of the CSF-1 Family" in Molecular Basis of Lymphokine Action, Humana Press, Inc., (1987), which is incorporated herein by reference). M-CSF is composed of two "monomer" polypeptides, which form a biologically active dimeric M-CSF protein (hereinafter referred to as "M-CSF dimer"). M-CSF belongs to a group of biological agonists that promote the production of blood cells. Specifically, it acts as a growth and differentiation factor for bone marrow progenitor cells of the mononuclear phagocyte lineage. Further, M-CSF stimulates the proliferation and function of mature macrophages via specific receptors on responding cells. In clinical trials M-CSF has shown promise as a pharmaceutical agent in the correction of blood cell deficiencies arising as a side-effect of chemotherapy or radiation therapy for cancer and may be beneficial in treating fungal infections associated with bone marrow transplants. M-CSF may also play significant biological roles in pregnancy, uveitis, and atherosclerosis. Development of M-CSF agonists or antagonists may prove to be of value in modifying the biological events involved in these conditions.
M-CSF exists in at least three mature forms: short (M-CSF.alpha.), intermediate (M-CSF-.gamma.), and long (M-CSF.beta.). Mature M-CSF is defined as including polypeptide sequences contained within secreted M-CSF following amino terminus processing to remove leader sequences and carboxyl terminus processing to remove domains including a putative transmembrane region. The variations in the three mature forms are due to alternative mRNA splicing (see, Cerretti et al. Molecular Immunology, 25:761 (1988)). The three forms of M-CSF are translated from different mRNA precursors, which encode polypeptide monomers of 256 to 554 amino acids, having a 32 amino acid signal sequence at the amino terminal and a putative transmembrane region of approximately 23 amino acids near the carboxyl terminal. The precursor peptides are subsequently processed by amino terminal and carboxyl terminal proteolytic cleavages to release mature M-CSF. Residues 1-149 of all three mature forms of M-CSF are identical and are believed to contain sequences essential for biological activity of M-CSF. In vivo M-CSF monomers are dimerized via disulfide-linkage and are glycosylated. Some, if not all, forms of M-CSF can be recovered in membrane-associated form. Such membrane-bound M-CSF may be cleaved to release secreted M-CSF. Membrane associated M-CSF is believed to have biological activity similar to M-CSF, but may have other activities including cell-cell association or activation.
Polypeptides, including the M-CSFs, have a three-dimensional structure determined by the primary amino acid sequence and the environment surrounding the polypeptide. This three-dimensional structure establishes the polypeptide's activity, stability, binding affinity, binding specificity, and other biochemical attributes. Thus, a knowledge of a protein's three-dimensional structure can provide much guidance in designing agents that mimic, inhibit, or improve its biological activity in soluble or membrane bound forms.
The three-dimensional structure of a polypeptide may be determined in a number of ways. Many of the most precise methods employ X-ray crystallography (for a general review, see, Van Holde, Physical Biochemistry, Prentice-Hall, N.J. pp. 221-239, (1971), which is incorporated herein by reference). This technique relies on the ability of crystalline lattices to diffract X-rays or other forms of radiation. Diffraction experiments suitable for determining the three-dimensional structure of macromolecules typically require high-quality crystals. Unfortunately, such crystals have been unavailable for M-CSF as well as many other proteins of interest. Thus, high-quality, diffracting crystals of M-CSF would assist the determination of its three-dimensional structure.
Various methods for preparing crystalline proteins and polypeptides are known in the art (see, for example, McPherson, et al. "Preparation and Analysis of Protein Crystals", A. McPherson, Robert E. Krieger Publishing Company, Malabar, Fla. (1989); Weber, Advances in Protein Chemistry 41:1-36 (1991); U.S. Pat. No. 4,672,108; and U.S. Pat. No. 4,833,233; all of which are incorporated herein by reference for all purposes). Although there are multiple approaches to crystallizing polypeptides, no single set of conditions provides a reasonable expectation of success, especially when the crystals must be suitable for X-ray diffraction studies. Thus, in spite of significant research, many proteins remain uncrystallized.
In addition to providing structural information, crystalline polypeptides provide other advantages. For example, the crystallization process itself further purifies the polypeptide, and satisfies one of the classical criteria for homogeneity. In fact, crystallization frequently provides unparalleled purification quality, removing impurities that are not removed by other purification methods such as HPLC, dialysis, conventional column chromatography, etc. Moreover, crystalline polypeptides are often stable at ambient temperatures and free of protease contamination and other degradation associated with solution storage. Crystalline polypeptides may also be useful as pharmaceutical preparations. Finally, crystallization techniques in general are largely free of problems such as denaturation associated with other stabilization methods (e.g. lyophilization). Thus, there exists a significant need for preparing M-CSF compositions in crystalline form and determining their three-dimensional structure. The present invention fulfills this and other needs. Once crystallization has been accomplished, crystallographic data provides useful structural information which may assist the design of peptides that may serve as agonists or antagonists. In addition, the crystal structure provides information useful to map, the receptor binding domain which could then be mimicked by a small non-peptide molecule which may serve as an antagonist or agonist. |
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en">
<head>
<title>JQVMap - USA Map</title>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
<link href="../dist/jqvmap.css" media="screen" rel="stylesheet" type="text/css"/>
<script type="text/javascript" src="http://code.jquery.com/jquery-1.11.3.min.js"></script>
<script type="text/javascript" src="../dist/jquery.vmap.js"></script>
<script type="text/javascript" src="../dist/maps/jquery.vmap.usa.js" charset="utf-8"></script>
<script>
var map;
jQuery(document).ready(function () {
// Store currentRegion
var currentRegion = 'fl';
// List of Regions we'll let clicks through for
var enabledRegions = ['mo', 'fl', 'or'];
map = jQuery('#vmap').vectorMap({
map: 'usa_en',
enableZoom: true,
showTooltip: true,
selectedColor: '#333333',
selectedRegions: ['fl'],
hoverColor: null,
colors: {
mo: '#C9DFAF',
fl: '#C9DFAF',
or: '#C9DFAF'
},
onRegionClick: function(event, code, region){
// Check if this is an Enabled Region, and not the current selected on
if(enabledRegions.indexOf(code) === -1 || currentRegion === code){
// Not an Enabled Region
event.preventDefault();
} else {
// Enabled Region. Update Newly Selected Region.
currentRegion = code;
}
},
onRegionSelect: function(event, code, region){
console.log(map.selectedRegions);
},
onLabelShow: function(event, label, code){
if(enabledRegions.indexOf(code) === -1){
event.preventDefault();
}
}
});
});
</script>
</head>
<body>
<div id="vmap" style="width: 600px; height: 400px;"></div>
</body>
</html>
|
/*
* Author: Anssi Piirainen, <[email protected]>
*
* Copyright (c) 2009-2011 Flowplayer Oy
*
* This file is part of Flowplayer.
*
* Flowplayer is licensed under the GPL v3 license with an
* Additional Term, see http://flowplayer.org/license_gpl.html
*/
package org.flowplayer.controller {
import flash.net.NetStream;
public interface TimeProvider {
function getTime(netStream:NetStream):Number;
}
} |
Challenges in defining predictive markers for response to endocrine therapy in breast cancer.
Endocrine therapy is a major treatment modality for hormone-dependent breast cancer. It has a relatively low morbidity, and there is evidence that antihormonal treatments have had a significant effect in reducing mortality for breast cancer. Despite this, resistance to endocrine therapy, either primary or acquired during treatment, occurs in the majority of patients, and is a major obstacle to optimal clinical management. There is therefore an urgent need to identify, on an individual basis, those tumors that are most likely to respond to endocrine therapy (so sparing patients with resistant tumors the needless side effects of ineffective therapy), and the mechanisms of resistance in tumors that are nonresponsive to treatment (so these can be bypassed). These needs are the focus of this review, which discusses the particular issues encountered when investigating the potential of multigene expression signatures as predictive factors for response to aromatase inhibitors, which have recently become front-line endocrine therapies for postmenopausal patients with breast cancer. |
/*
Information about performed build.
*/
module ModTime: {
type t;
let v: float => t;
let equal: (t, t) => bool;
let pp: Fmt.t(t);
};
type t = {
idInfo: BuildId.Repr.t,
timeSpent: float,
sourceModTime: option(ModTime.t),
};
let of_yojson: EsyLib.Json.decoder(t);
let to_yojson: EsyLib.Json.encoder(t);
let toFile: (EsyLib.Path.t, t) => RunAsync.t(unit);
let ofFile: EsyLib.Path.t => RunAsync.t(option(t));
|
Q:
MySQL Parent Sub-level Table in Single DB Table as Result to Chain / Parent Basis for Dropdown Display
I have a database where I use a single table to manage all root-level categories and their related subcategories.
Below is the structure and data of my MySQL table:
id label pid
1 Parent 1 0
2 Parent 2 0
3 Child 1.1 1
4 Child 1.2 2
5 Child 2.1 2
6 Child 1.1.1 3
7 Child 1.1.2 3
Now I want a single query that returns my root-level category labels (where pid is 0) first, and beneath each of them its subcategories, showing the parent's label rather than its id, so I can display the hierarchy in a dropdown on my add page.
Hope you guys understand what I am trying to say here!
Looking forward for your quick response.
Edit from comments
Tried so far, but without success, as it produces a single level of results.
SELECT p.Name, s.Name FROM Categories s LEFT JOIN Categories p ON s.mainCat = p.ID ORDER BY p.Name, s.Name;
A:
The short version was that you seem to be trying to do a hierarchical tree, and whilst I have tried various methods out on this the one desribed on the following link seems the most efficient in the long run even though it takes a bit more work to set up in the first place.
http://mikehillyer.com/articles/managing-hierarchical-data-in-mysql/
For more details I'd advise reading the above article, and possibly this question which covers a similar problem.
Create a multiselect drop down with parent-child menu items
(Direct link to the answer I gave there, says pretty much the same but with a lot more explanation/rationale)
https://stackoverflow.com/a/9377018/1213554
At present your key problem is that there is no clear linking field that tells you which child links to which parent. Ideally this should be done either using the tree method detailed above, or at the most basic by having a parentID field which can be either 0 (root element) or equal to the PRIMARY KEY id column of the parent row. This in turn allows you to more easily determine what each child's parent is.
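Purely as an illustration (my addition, not part of the original answer), here is a small Python sketch of how adjacency-list rows shaped like the question's table, i.e. (id, label, pid) tuples with pid 0 marking a root, can be walked into the parent-then-children order a dropdown needs. The row values, the dropdown_options name and the suggested query are all made up for the example; in practice the rows would come from something like SELECT id, label, pid FROM categories.

# Illustrative data in the shape of the question's table: (id, label, pid).
rows = [
    (1, "Parent 1", 0),
    (2, "Parent 2", 0),
    (3, "Child 1.1", 1),
    (4, "Child 1.2", 1),
    (5, "Child 2.1", 2),
    (6, "Child 1.1.1", 3),
    (7, "Child 1.1.2", 3),
]

# Group child rows under their parent id.
children = {}
for id_, label, pid in rows:
    children.setdefault(pid, []).append((id_, label))

def dropdown_options(pid=0, depth=0):
    # Yield (id, indented label) pairs: each parent first, then its children.
    for id_, label in children.get(pid, []):
        yield id_, ("-- " * depth) + label
        yield from dropdown_options(id_, depth + 1)

for id_, label in dropdown_options():
    print(id_, label)

Each option keeps the real id as its value while the indented label shows where it sits in the hierarchy, which is usually what a category dropdown on an add/edit page needs.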
|
HOLLYWOOD director James Cameron is preparing to dive to the deepest point of the oceans as part of his research for a sequel to Avatar, his 3D epic.
He has commissioned Australian engineers to build a deep sea submersible which can reach the bottom of the Mariana Trench - 10.9km (36,000ft) down in the western Pacific - after deciding to set the film in the turbulent waters of Pandora, an alien moon.
The vessel will be fitted with 3D cameras designed by Cameron so that he can take unprecedented footage of such depths and, if he wants to, fill it with digitally created monsters for Avatar 2.
The muddy, rocky Mariana Trench, which could swallow Mount Everest, has been visited by man only once.
In May 1960, a submersible called the Trieste took nearly five hours to descend to its floor. Its passengers, Jacques Piccard, a Swiss scientist, and Don Walsh, a US navy lieutenant, were able to spend 20 minutes at the bottom of the world.
In the cold and darkness, eating chocolate bars, they were joined by flounder, sole and shrimp, proving that some vertebrate life can exist at such extraordinary depths.
Although remote-controlled vessels have gone back to the Challenger Deep, a valley at the bottom of the trench, no humans have been so deep again. However, Cameron, who reportedly earned $350m from Avatar, has the money and passion to return.
His obsession with the waters that cover two-thirds of the world's surface has been manifested not only in his blockbuster Titanic and a spin-off documentary, but also in his 1989 film The Abyss.
Last month, Cameron spent his 56th birthday in a Russian deep sea submersible called the Mir-1, descending more than 5,000ft (1.5km) into Lake Baikal in Siberia, the deepest freshwater lake in the world.
Cameron told Russian journalists that he had come to the Siberian lake to draw attention to its pollution problems. He says his descent into the Mariana Trench would be a similar environmental mission.
"We are building a vehicle to do the dive," he said. "It's about half-completed in Australia." He hopes to start preparing for the dive later this year.
Australian scientists believed to be working for Cameron have visited the San Francisco headquarters of Hawkes Ocean Technologies, which has been building a submersible capable of settling at the bottom of the trench.
Cameron's new vessel is expected to be a two-seater, finned cylinder fitted with the latest 3D cameras and a heating system largely missing from the Trieste.
Some of his footage from the depths may end up in Avatar 2 - which is not expected to reach cinemas before 2014 - or possibly in two other deep-sea adventures that the director is considering turning into movies. |
Gold, silver, pgms, mining and geopolitical comment and news
Mark Bristow
As noted earlier this week, Randgold Resources CEO Mark Bristow has been on a tour of the company’s operations – all in West and Central Africa – ahead of the release of the company’s Q1 2018 results announcement in just under a week’s time. The latest visit was to its Loulo-Gounkoto complex in Mali, which in combination is currently the largest gold producer in Africa, although this position may soon be usurped by the Randgold-operated, and 45%-owned, Kibali gold mine in the DRC.
In an announcement Randgold confirmed that it continued to see Mali as having potential for further growth and is continuing to invest there – Loulo-Gounkoto is already the single biggest foreign investment in the country. The company says Q1 output will fall back from Q4 2017 levels due to the scheduling of production from lower grade areas – although we will have to wait for the quarterly announcement to find out by how much.
Randgold (LSE: RRS and NASDAQ: GOLD) has arguably been the No.1 global gold growth stock over the past several years, despite all its operations being in what the investment community sees as difficult investment environments. It has been particularly adept in continuing to grow its gold output while maintaining mostly good relationships with its host governments, which is presumably why the much larger Anglogold Ashanti, which also owns 45% of Kibali, ceded construction and operational management of the DRC’s largest gold mine to Randgold.
A lightly edited version of Randgold’s statement on its Malian operations is set out below:
Randgold’s Loulo-Gounkoto gold mining complex in Mali, already one of the largest of its kind in the world, is still expanding, with the Gounkoto super pit and the new Baboto satellite pit joining its Yalea and Gara underground mines.
Speaking at a site visit for local media, chief executive Mark Bristow said the complex’s all-Malian management team, which steered it to a record performance in 2017, had made a good start to this year, although production was expected to be lower than the previous quarter on the back of forecast lower grades, reflecting the sequencing of mining lower grade blocks at both Loulo and Gounkoto. Although slightly delayed, mining of the Baboto satellite pit was now well on track to support the complex with softer oxide ore feed.
“We expect grades to pick up and production to increase through the rest of the year to deliver our production guidance of 690,000 ounces for 2018,” said Tahirou Ballo, the GM of the complex. Mr Ballo noted that production from the underground mines continued to show a steady improvement since Loulo took over the mining from contractors in 2016.
Chiaka Berthe, the West African GM of operations, said the Loulo-Gounkoto complex represented the largest foreign investment to date in the Malian economy. After all these years it was still investing in new mining projects like the Gounkoto pushback and the new Baboto satellite pit, he said. The country is rich in other gold opportunities, and Randgold continues to search for extensions to the known orebodies as well as new discoveries in its extensive Malian landholdings.
On its sustainable development policy in the areas around its mining operations, Randgold also continues to invest substantially in its host communities. Some 5,000 students are enrolled at 17 schools built by the company, and last year 52 of them were awarded bursaries for further study. Randgold is also advancing the development of commercially viable agribusiness enterprises, to mitigate the socio-economic impact of the complex’s eventual closure. The project already includes five incubation farms and an agricultural college with 70 students.
“Randgold Resources’ (LSE: RRS, NASDAQ: GOLD) operations are strongly placed to generate robust cash flows even at gold prices below current levels and to continue delivering value to all stakeholders”, so says chief executive Mark Bristow in a release on the company’s 2015 annual report published today.
Randgold has arguably been the biggest gold mining success story of the past two decades (it was only established back in 1995 and was first listed in 1997). It has increased gold production from tiny beginnings to become the world’s 15th largest gold producer (according to consultancy Metals Focus) with an attributable output now of comfortably over 1 million ounces a year. It now numbers Africa’s two biggest gold mines – Kibali in the DRC and the Loulo-Gounkoto complex in Mali, both of which it built from scratch – among its operations. All this has been accomplished in a part of the world which some of its major gold mining peers feel is too risky in which to manage significant operations.
At Kibali in particular it succeeded in building a huge gold mine in one of the most remote parts of Africa, close to the DRC’s border with South Sudan, hundreds of miles from both Africa’s east and west coasts and with virtually no local infrastructure – a major logistical exercise in its own right. And yet it succeeded in bringing the mine on stream ahead of schedule. It is notable here that although it is in equal partnership with the world’s third largest gold miner, AngloGold Ashanti (both have 45% stakes), the latter ceded construction and operational control to its much smaller partner, presumably because of Randgold’s unparalleled record of building and operating mines in West Africa and its skills in navigating the often troubled political waters of the region.
What the gold mining industry needs, says Bristow, is to make new discoveries, as even a significant rise in the gold price and an injection of fresh capital will at best enable it to clear its debt, but will provide little scope for adding any value or reversing the production decline. Through its consistent investment in exploration and development Randgold, in contrast, was projecting sustained growth from a solid foundation.
“Our mines have been modelled to generate cash flows at gold prices well below the $1,000/oz level. Our positive production and cost profiles extend to a 10-year horizon, we have had no impairments or write-downs, and have substantial cash resources. Our exploration teams are not only replacing the ounces we deplete but are making significant progress in the hunt for our next big discovery. In fact, we are in a unique position to continue delivering value to all our stakeholders,” he says.
Randgold set a new annual production record of more than 1.2 million ounces in 2015, up 6% on the previous year, while reducing group total cash cost per ounce by 3% to $679. Strong cash flows from the operations boosted cash on hand by 158% to $213.4 million. However profit for the year was $212.8 million against the previous year’s $271.1 million, reflecting the decline in the gold price. The board has nevertheless still recommended a 10% increase in the annual dividend.
Also in the annual report, chairman Christopher Coleman reports that even in the current challenging market, Randgold is not reducing its investment in corporate and social programmes, in line with its philosophy that sustainability is central to all its activities.
“Randgold’s social initiatives extend far beyond the life of its mines. At all its operations, it is developing ambitious legacy projects designed to provide a permanent source of employment and economic opportunity to these communities. Based on agriculture, the primary building block of any developing economy, these range from training and funding would-be commercial farmers to a wide spectrum of agribusiness initiatives, many of which are already supplying local markets. The company is equally mindful of the health and safety of its employees, and it strives constantly to improve an already exemplary record in this regard,” he says.
Contrary to the position of many of its peers, Randgold, as noted above, also reaffirmed its intention to continue to pay a progressive ordinary dividend that will increase or at least be maintained annually. The board thus proposed the 10% increase in the 2015 dividend to $0.66 per share for approval at its annual general meeting on 3 May 2016. This is almost unique among major gold miners, most of which have been having to take big impairments in their balance sheets, have been having to cut debt and have been sharply reducing their dividend payments. Randgold has taken no impairments, has no debt and is raising dividends year on year.
Commenting on this statement, financial director Graham Shuttleworth said that at a time when the gold mining industry was focused on survival, Randgold was able to maintain its dividend policy on the back of last year’s strong performance. He confirmed that the company still intended to build its net cash position to approximately $500 million to provide financing flexibility for future new mine developments and other growth opportunities.
Randgold Resources’ world class Tongon gold mine in Cote d’Ivoire has not been without its problems, but even so it has now paid off its shareholders’ loans of $448 million, used to partially fund its capital investment of $580 million, thereby moving it into a dividend-paying position.
Speaking at the mine’s quarterly briefing for local media, Randgold CEO, Mark Bristow described this as a significant achievement, particularly in the context of a global gold mining industry currently characterised by capital write-downs and impairments.
Although Tongon is only Randgold’s third largest mine – after Kibali in the DRC, and Loulo-Gounkoto in Mali – and is still operating below full capacity, it is a very significant gold mine by any standards, and is targeting gold output of 260,000 ounces, at a total cash cost of $820 per ounce, in the current year.
“Tongon has already paid close to $90 million to the Ivorian state in the form of royalties and taxes and the country will now benefit even more from the dividends the government will receive through its 10% carried interest in the mine as well as the increased revenue when Tongon starts paying full corporate tax at the end of this year,” Bristow said. He noted that since its commissioning five years ago, Tongon had also contributed more than $600 million to the Ivorian economy in the form of payments to local suppliers and had invested almost $6 million in community upliftment projects.
Bristow has also frequently described Cote d’Ivoire as being a highly prospective country in which to explore for new gold mining operations and has praised the government for its approach to foreign investment in the mining sector which it considers very favourable for attracting new business.
“Ongoing exploration around Tongon has increased its reserves after depletion by 18% since 2009, extending its remaining life by another year. We also continue to look for more multi-million ounce deposits elsewhere in this highly prospective country, and we are about to launch our biggest-ever exploration drive in Côte d’Ivoire. This will include a fresh look at the Nielle permit, which hosts Tongon, and a geophysical survey, followed by a diamond drilling programme, across our holdings in the north of the country,” he said.
He also cited Tongon as a particularly good example of the success of Randgold’s policy of recruiting, training and empowering nationals of its host countries to run world-class mines in Africa. The mine’s workforce is 97% Ivorian and only two members of its management team are not Ivorians.
Bristow also noted that Tongon has won the President’s Award as the best mine in Côte d’Ivoire for two successive years.
The ongoing search for additional reserve ounces at Kibali will secure its future as a long-life mine and one of Africa’s largest gold producers, Randgold Resources chief executive Mark Bristow said in a speech in Kinshasa, DRC. Randgold develops and operates the mine and has a 45% stake, which it owns in partnership with AngloGold Ashanti (also 45% owners) and the Congolese parastatal SOKIMO which holds the 10% balance.
In 2014, its first full year of operation, Kibali produced 526,627 ounces of gold at a total cash cost of $573/oz and Bristow told a media briefing here that production and cost for the first quarter of 2015 were likely to be within guidance.
“When you’re producing gold at the rate of around 600,000 ounces per year, the need to replace the reserves that are consumed is of critical importance,” he said. “We believe Kibali’s KZ structure hosts significant additional resources, and our continuing exploration is confirming this potential. A number of targets have been identified and the Kalimva-Ikamva and Kanga sud targets have been prioritised for in-depth investigation.” One suspects that the promising geology around the mine should host sufficient gold resources to keep it in operation well beyond its initial 18 year mine life.
Kibali is still a work in progress, with its third open pit now operational and the development of its underground mine ahead of schedule. Ore from its stopes is already being delivered to the plant but the underground mine is only expected to be in full production by 2018. The first of the mine’s three hydropower plants was commissioned last year and work on the second is well underway. The metallurgical plant is operating at its design capacity and construction of the paste plant is nearing completion. Despite the high level of production and development activity – some 5,000 people are currently employed on site – Kibali is maintaining a good safety record, with the lost-time injury rate reduced by 16% last year.
Kibali represents an initial investment of more than US$2 billion and, at a gold price of $1,200/oz, its current mine plan is only expected to repay its funding after 2024. Thanks to its strong cash flow, however, it has already been able to repay the first tranche of its debt in March. The whole project has been a remarkable success to date, particularly given its location – almost right in the geographical centre of the African continent, close to the borders with South Sudan and Uganda – which necessitated the bulk of the supplies and equipment being delivered from the African east coast rather than through the DRC itself.
Bristow said Kibali was continuing to invest in the development of the regional economy by using local contractors and suppliers wherever possible. A prefeasibility study on a palm oil project, designed to provide a sustainable source of post-mining economic activity for the region, has been completed and work on a bankable feasibility study has started.
On the issue of the DRC’s proposed new mining code, Bristow said he welcomed Prime Minister Augustin Matata Ponyo’s recent statement that the government was ready to re-engage with the mining industry with the intention to review the draft submitted to parliament and was open to further discussions with the sector.
“We were surprised and disappointed when the ministry of mines presented a draft code to parliament without taking the industry’s comments on board and which departed radically from the common ground we thought had been established. As the DRC Chamber of Mines warned at the time, enactment of the code in this investment-hostile form will have a catastrophic effect not only on the mining sector but on the Congolese economy generally. It was therefore very heartening to learn from the prime minister that the government has recommitted itself to negotiation,” he said. |
1. Field of the Invention
The present invention relates to a steel sheet for use in applications requiring electric conduction, such as grounding, supply of electricity or electric welding, which is endowed with both an electrical conductivity as one feature of a steel sheet and an excellent corrosion resistance, and specifically to a precoated steel sheet which is used as the casings of electric or electronic appliances and office automation appliances, and which is free of blocking in piling (shearing of plates to the required dimensions) or coiling and has a corrosion resistance, an electrical conductivity and an electromagnetic wave shielding effect.
2. Description of the Related Art
Steel sheets have many features and are used in a wide range of applications. Among such many features, electrical conductivity is one of the important features. Thus, steel sheets have many fields of utilization in grounding, supply of electricity, electric welding, etc. However, they always involve a problem of rusting.
The use of a steel sheet without any treatment for the purpose of securing electrical conductivity does not meet the requirement of corrosion resistance. The method of using a conductive coating (see, for example, Japanese Patent Laid-Open No. 189,843/1982) involves insufficiency of electric conductivity and high cost due to an expensive conductive coating. In a method of using other metal sheets such as an aluminum sheet instead of a steel sheet, the electrical conductivity of such a metal sheet is considerably poor as compared with that of a steel sheet, and other properties such as strength are also inferior.
There has recently arisen a problem that electromagnetic waves generated in an electric or electronic appliance or an office automation appliance bring about malfunction or noise generation of other electric or electronic appliance or office automation appliance (this phenomenon is called electromagnetic interference, hereinafter referred to briefly as EMI). This problem can be solved if the appliance is wholly covered with a conductive substance to ground the same. However, plastics as insulating substances and precoated steel sheets having insulating coatings formed on both sides thereof have recently been increasingly used particularly in casings of appliances, so that there has been an increasing demand for a countermeasure against the problem of EMI.
As for plastics, there have been proposed various methods as EMI countermeasures, including spray coating of a metal, vacuum evaporation and deposition of a metal, coating of the surface of a plastic with a paint containing a conductive pigment (see, for example, Japanese Patent Laid-Open No. 207,938/1984), and incorporation of a conductive substance into a plastic (see, for example, Japanese Patent Laid-Open No. 102,953/1984). However, any of these methods has disadvantages in that the electrical conductivity is insufficient, a technical difficulty is involved, and the cost is increased.
As for precoated steel sheets, there have been proposed no particular EMI countermeasures as yet. Thus, a countermeasure is taken by leaving one side of a steel sheet untreated or subjecting the same to only chemical treatment or conversion coating, or by shaving off part of a coating from a precoated steel sheet. However, these methods involve a problem of a decrease in corrosion resistance in the exposed portion of the steel sheet. Particularly in the method of leaving one side of a steel sheet untreated or subjecting the same to only chemical treatment, there occurs blocking, that is, injury of a decorative side (coated side) of a steel sheet by an untreated or chemically treated side thereof in piling or coiling. The method of shaving off part of a coating has a defect of an increase in the number of steps of manufacturing.
As the method of imparting an electrical conductivity to a precoated steel sheet, there has been proposed one in which a steel sheet is coated with a coating containing a metallic powder incorporated thereinto for imparting an electrical conductivity as described above. Also in this case, blocking is caused by the protruded portions of metal particles incorporated into the coating in piling or coiling just like the method of leaving one side of a steel sheet untreated or subjecting the same to only chemical treatment. Further, the electrical conductivity is insufficient for the EMI countermeasure. |
Safety of Edhazardia aedis (Microspora: Amblyosporidae) for nontarget aquatic organisms.
The susceptibility of common nontarget aquatic organisms to the microsporidium Edhazardia aedis was investigated in the laboratory. Eight predacious species along with 9 scavengers and filter feeders were tested. The nontarget organisms were not susceptible to infection by E. aedis and there was no appreciable mortality. To measure the relative safety of E. aedis to nontarget organisms, a simple mathematical expression was employed where risk is defined as the product of the probability of exposure and the result of exposure (infection) expressed as P(e)P(i). In these laboratory tests, the probability of exposure was fixed at 1 (maximum challenge) and the probability of infection was determined to be 0. Therefore, the risk associated with release of E. aedis into the environment is considered to be negligible under these conditions. The true risk for nontarget organisms to E. aedis can only be determined by careful evaluation of controlled field studies in the natural habitat of the target host. |
Short-term fasting and lipolytic activity in rat adipocytes.
The aim of this experiment was to study the influence of 18-hour food deprivation on basal and stimulated lipolysis in adipocytes obtained from young male Wistar rats. Fat cells from fed and fasted rats were isolated from the epididymal adipose tissue by collagenase digestion. Adipocytes were incubated in Krebs-Ringer buffer (pH 7.4, 37 degrees C) without agents affecting lipolysis and with different lipolytic stimulators (epinephrine, forskolin, dibutyryl-cAMP, theophylline, DPCPX, amrinone) or inhibitors (PIA, H-89, insulin). After 60 min of incubation, glycerol and, in some cases, also fatty acids released from adipocytes to the incubation medium were determined. Basal lipolysis was substantially potentiated in cells of fasted rats in comparison to adipocytes isolated from fed animals. The inhibition of protein kinase A activity by H-89 partially suppressed lipolysis in both groups of adipocytes, but did not eliminate this difference. The agonist of adenosine A (1) receptor also did not suppress fasting-enhanced basal lipolysis. The epinephrine-induced triglyceride breakdown was also enhanced by fasting. Similarly, the direct activation of adenylyl cyclase by forskolin or protein kinase A by dibutyryl-cAMP resulted in a higher lipolytic response in cells derived from fasted animals. These results indicate that the fasting-induced rise in lipolysis results predominantly from changes in the lipolytic cascade downstream from protein kinase A. The antagonism of the adenosine A (1) receptor and the inhibition of cAMP phosphodiesterase also induced lipolysis, which was potentiated by food deprivation. Moreover, the rise in basal and epinephrine-stimulated lipolysis in adipocytes of fasted rats was shown to be associated with a diminished non-esterified fatty acids/glycerol molar ratio. This effect was presumably due to increased re-esterification of triglyceride-derived fatty acids in cells of fasted rats. Comparing fed and fasted rats for the antilipolytic effect of insulin in adipocytes revealed that short-term food deprivation resulted in a substantial deterioration of the ability of insulin to suppress epinephrine-induced lipolysis. |
# coding=utf-8
# --------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for
# license information.
#
# Code generated by Microsoft (R) AutoRest Code Generator.
# Changes may cause incorrect behavior and will be lost if the code is
# regenerated.
# --------------------------------------------------------------------------
from msrest.serialization import Model
class AgreementContent(Model):
"""The integration account agreement content.
:param a_s2: The AS2 agreement content.
:type a_s2: ~azure.mgmt.logic.models.AS2AgreementContent
:param x12: The X12 agreement content.
:type x12: ~azure.mgmt.logic.models.X12AgreementContent
:param edifact: The EDIFACT agreement content.
:type edifact: ~azure.mgmt.logic.models.EdifactAgreementContent
"""
_attribute_map = {
'a_s2': {'key': 'aS2', 'type': 'AS2AgreementContent'},
'x12': {'key': 'x12', 'type': 'X12AgreementContent'},
'edifact': {'key': 'edifact', 'type': 'EdifactAgreementContent'},
}
def __init__(self, **kwargs):
super(AgreementContent, self).__init__(**kwargs)
self.a_s2 = kwargs.get('a_s2', None)
self.x12 = kwargs.get('x12', None)
self.edifact = kwargs.get('edifact', None)
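# ---------------------------------------------------------------------------
# Minimal usage sketch (not part of the generated SDK file): the model takes
# its pieces as the keyword arguments defined above, so constructing an
# agreement that only carries, say, an X12 payload might look like this
# (x12_content is assumed to be an X12AgreementContent instance built
# elsewhere):
#
#     content = AgreementContent(x12=x12_content)
#     assert content.a_s2 is None and content.edifact is None
# ---------------------------------------------------------------------------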
|
Q:
How to show data for Launches, Environment, Trend and Executors in Allure Report
I am using Allure 2.1.1. The Allure report only shows the executed tests in the Overview tab. How can I show data for Launches, Environment, Trend and Executors?
A:
I was looking for these answers too.
So far I have only found how to show data for Environment – see the allure wiki.
I also worked out how to set up the Trend graph. After report generation, copy the ...\allure-report\history folder into the ...\allure-results folder. The next time you generate a report from allure-results, it will include the history trend graph.
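A hedged sketch of that copy step in Python (the paths are placeholders for the ones above, not an official Allure utility):

import os
import shutil

report_history = os.path.join('allure-report', 'history')
results_history = os.path.join('allure-results', 'history')

if os.path.isdir(report_history):
    # Drop any stale history in the results folder, then copy the fresh one
    # so the next generated report can render the Trend widget.
    shutil.rmtree(results_history, ignore_errors=True)
    shutil.copytree(report_history, results_history)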
|
Milan transfer market: Paredes is the midfield target
Milan want to strengthen their midfield: one of Leonardo's first targets is Paredes, even if the cost could prove excessive.
Leonardo and Maldini arrived at Milan (for the second time) too late this summer. There was just enough time to put together the Higuain deal with Juventus and two other operations, but the job they want to do at the rossoneri club is much bigger.
Which is why they are already at work on the upcoming January and June transfer windows. The agreement already reached for the Flamengo playmaker Paquetá shows as much. But they will not stop there.
According to the 'Corriere dello Sport', the name on Leonardo's notepad is Leandro Paredes. The former Roma player, currently at Zenit Saint Petersburg, is said to have already given his blessing to a return to Italy.
Paredes comes at a high price, and Zenit would not want to lose him after acquiring him in July 2017 for around 25 million euros. The talks therefore revolve around higher figures, which Milan will not be able to spend so easily, especially if they do not want to 'irritate' UEFA.
In more or less the same area of the pitch, there is another profile that appeals to the rossoneri: Aaron Ramsey, who announced in recent days that he no longer intends to renew his contract with Arsenal, which expires at the end of this season. A top-level profile, at zero cost. It would be a chance not to let slip. |
“I am not a fan of governing by tweet. I’m just not, and I know that we’re in a different world now and we communicate differently, but in my view there is a seriousness and a professionalism that comes with the executive," she told the Ketchikan Daily News in an interview published Saturday.
Murkowski added that while she and Trump disagree on social media, and the president "is able to capture people's attention," some of the "inflammatory rhetoric and the name calling I don’t think is constructive.”
Trump has stepped up his war of words with North Korea, including mocking its leader Kim Jong Un as "Little Rocket Man," and warning that the regime “won’t be around much longer” if it continues threats against the U.S.
She added to the Daily News that she is "worried" that the Republican Party is becoming too exclusive.
“Well, I am (worried) in the sense that, as a party, I believe we have always been kind of broader and more inclusive of differing views across the spectrum,” Murkowski said. "We seem to be more fractured within our party now than in the big-tent Ronald Reagan days." |
Although Coinbase has recently become a controversial company, especially as it began to add crypto assets left and right, the company has long had an unrelenting drive for innovation. Since setting up shop in 2012, the San Francisco-headquartered startup, headed by a former Airbnb employee with visions of grandeur, has quickly set the industry standard in a number of subsectors.
The firm may have started as a consumer-centric exchange, which sported a simple (near-)one-click interface, but Coinbase has evolved far beyond its original premise now. And interestingly, even as digital assets like Bitcoin (BTC), Ethereum (ETH), and Litecoin (LTC) — Coinbase’s lifeblood — continue to lose value, the firm has only doubled-down on its expansion and development efforts.
Coinbase Outperformed The Bitcoin Sell-Off
Recent Giving Pledge signee Brian Armstrong, the fervent, sometimes controversial chief of Coinbase, recently issued a note to his underlings — a swelling group of talent — accentuating the fact that the company has not only survived but thrived in the recent bearish downturn.
The American firm, which now has offices around the globe, started Q4 of 2018 with a bang, securing $300 million in funding from Tiger Global, Y Combinator, A16Z, Polychain Cap, and a number of other crypto-friendly venture groups. This round valued Coinbase at a jaw-dropping $8 billion, making the firm arguably the most valuable company in the entirety of Bitcoin ecosystem.
And since that $300 million cash boost, which was explained to be allocated towards global expansion efforts, institutional services, and applications for crypto, Coinbase has arguably been on the up-and-up. As explained in Armstrong’s letter, released to the public in an evident attempt at transparency, Coinbase launched a number of pertinent products, including support for Circle-backed USD Coin, a revamped version of Earn, PayPal withdrawals, and crypto-to-crypto trading, to only name a few products.
The firm also added a dozen crypto assets to its platform, an evident sign of changing times, with notable additions including ZCash (ZEC), Basic Attention Token (BAT), Maker (MKR), and 0x (ZRX). In a podcast, vice-president Dan Romero explained that firm’s clientele has begun to clamor for crypto asset support, presumably catalyzing the recent listings.
Along with adding the aforementioned tokens and products, Coinbase forayed into six new regions, opening the ground-breaking potential of crypto to millions more. The Coinbase chief also explained that his firm made a number of investments, into organizations such as Alchemy, Securitize, Starkware, Nomics, and Abacus.
Closing the retrospective post, Armstrong made his excitement and gratitude more than apparent when he wrote:
“I continue to be so impressed by the ability of this team to execute on aggressive timelines, all while solving problems that have never been solved before. This was a year of scaling Coinbase up to meet the demand of the market and efficiently executing to serve our customers.”
Great Year Ahead For The Crypto Juggernaut
Interestingly, the firm already seems to have prospects for a great 2019. As reported by NewsBTC earlier today, an apparent survey from Coinbase has polled users on the appeal of a subscription model, which would reduce “maker” and “taker” fees for Pro traders, while offering perks for premium members. If implemented, this program would be the first of its kind in the cryptosphere, and would likely propel the company’s trading platforms to new heights.
Asiff Hirji, president of the fledgling company, recently hinted that 2019 will be a great year for institutional participation in cryptocurrencies. In an interview with CNBC, Hirji explained that Coinbase’s custodial service “has blown by internal goals,” as “hundreds of institutions” have boarded onto the platform in recent memory. Seeing that Coinbase has been playing a role in that facet of this industry, it can be assumed that this influx of Wall Street hotshots will trickle down to the company’s growing roster of institutional products.
Zeeshan Feroz, the chief at Coinbase’s U.K. branch, also expressed a similar positive outlook, but from a broader perspective. He said:
“I think you can expect a more aggressive approach to us adding more countries in the coming months. Much of what we’re doing here is driven by customer needs and what we’re seeing in the market… I think if you look at last year, a lot of the focus was on people who bought crypto from an investment point of view and a lot of projects raised a ludicrous amount of money as a result of that.”
Featured Image from Shutterstock |
/*
* bsp.h: Header for bsp.cpp
*/
#ifndef _BSP_H
#define _BSP_H
#define FINENESS 1024
#define NUMDEGREES 4096
#define BLAK_FACTOR 16 // Multiply by this to convert roomeditor coordinates to client coordinates
// Convert client coordinates to roomeditor coordinates
#define FinenessClientToKod(x) ((x) / BLAK_FACTOR)
/* Bit flags for linedef characteristics in editor */
/* Important: if you add +/- flags here, you must add the appropriate line to
* bspmake.cpp/BSPFlipWall.
*/
#define BF_POS_BACKWARDS 0x00000001 // Draw + side bitmap right/left reversed
#define BF_NEG_BACKWARDS 0x00000002 // Draw - side bitmap right/left reversed
#define BF_POS_TRANSPARENT 0x00000004 // + side bitmap has some transparency
#define BF_NEG_TRANSPARENT 0x00000008 // - side bitmap has some transparency
#define BF_POS_PASSABLE 0x00000010 // + side bitmap can be walked through
#define BF_NEG_PASSABLE 0x00000020 // - side bitmap can be walked through
#define BF_MAP_NEVER 0x00000040 // Don't show wall on map
#define BF_MAP_ALWAYS 0x00000080 // Always show wall on map
#define BF_POS_NOLOOKTHROUGH 0x00000400 // + side bitmap can't be seen through even though it's transparent
#define BF_NEG_NOLOOKTHROUGH 0x00000800 // - side bitmap can't be seen through even though it's transparent
#define BF_POS_ABOVE_BUP 0x00001000 // + side above texture bottom up
#define BF_NEG_ABOVE_BUP 0x00002000 // - side above texture bottom up
#define BF_POS_BELOW_TDOWN 0x00004000 // + side below texture top down
#define BF_NEG_BELOW_TDOWN 0x00008000 // - side below texture top down
#define BF_POS_NORMAL_TDOWN 0x00010000 // + side normal texture top down
#define BF_NEG_NORMAL_TDOWN 0x00020000 // - side normal texture top down
#define BF_POS_NO_VTILE 0x00040000 // + side no vertical tile
#define BF_NEG_NO_VTILE 0x00080000 // - side no vertical tile
// scrolling texture flags come next
#define WallScrollPosSpeed(flags) ((BYTE)(((flags) & 0x00300000) >> 20))
#define WallScrollPosDirection(flags) ((BYTE)(((flags) & 0x01C00000) >> 22))
#define WallScrollNegSpeed(flags) ((BYTE)(((flags) & 0x06000000) >> 25))
#define WallScrollNegDirection(flags) ((BYTE)(((flags) & 0x38000000) >> 27))
/* Bit flags for sidedef characteristics in roo file */
#define WF_BACKWARDS 0x00000001 // Draw bitmap right/left reversed
#define WF_TRANSPARENT 0x00000002 // normal wall has some transparency
#define WF_PASSABLE 0x00000004 // wall can be walked through
#define WF_MAP_NEVER 0x00000008 // Don't show wall on map
#define WF_MAP_ALWAYS 0x00000010 // Always show wall on map
#define WF_NOLOOKTHROUGH 0x00000020 // bitmap can't be seen through even though it's transparent
#define WF_ABOVE_BOTTOMUP 0x00000040 // Draw upper texture bottom-up
#define WF_BELOW_TOPDOWN 0x00000080 // Draw lower texture top-down
#define WF_NORMAL_TOPDOWN 0x00000100 // Draw normal texture top-down
#define WF_NO_VTILE 0x00000200 // Don't tile texture vertically (must be transparent)
// Texture scrolling constants
#define SCROLL_NONE 0x00000000 // No texture scrolling
#define SCROLL_SLOW 0x00000001 // Slow speed texture scrolling
#define SCROLL_MEDIUM 0x00000002 // Medium speed texture scrolling
#define SCROLL_FAST 0x00000003 // Fast speed texture scrolling
#define SCROLL_N 0x00000000 // Texture scroll to N
#define SCROLL_NE 0x00000001 // Texture scroll to NE
#define SCROLL_E 0x00000002 // Texture scroll to E
#define SCROLL_SE 0x00000003 // Texture scroll to SE
#define SCROLL_S 0x00000004 // Texture scroll to S
#define SCROLL_SW 0x00000005 // Texture scroll to SW
#define SCROLL_W 0x00000006 // Texture scroll to W
#define SCROLL_NW 0x00000007 // Texture scroll to NW
#define WallScrollSpeed(flags) ((BYTE)(((flags) & 0x00000C00) >> 10))
#define WallScrollDirection(flags) ((BYTE)(((flags) & 0x00007000) >> 12))
/* Bit flags for sector characteristics */
#define SF_DEPTH0 0x00000000 // Sector has default (0) depth
#define SF_DEPTH1 0x00000001 // Sector has shallow depth
#define SF_DEPTH2 0x00000002 // Sector has deep depth
#define SF_DEPTH3 0x00000003 // Sector has very deep depth
#define SF_SCROLL_FLOOR 0x00000080 // Scroll floor texture
#define SF_SCROLL_CEILING 0x00000100 // Scroll ceiling texture
#define SectorDepth(flags) ((flags) & 0x00000003) // Retrieve depth bits from sector flags
#define SectorScrollSpeed(flags) ((BYTE)(((flags) & 0x0000000C) >> 2))
#define SectorScrollDirection(flags) ((BYTE)(((flags) & 0x00000070) >> 4))
#define SF_FLICKER 0x00000200 // Flicker light in sector
#define SF_SLOPED_FLOOR 0x00000400 // Sector has sloped floor
#define SF_SLOPED_CEILING 0x00000800 // Sector has sloped ceiling
#define ABS(x) ((x) > 0 ? (x) : (-(x)))
#define SGN(x) ((x) == 0 ? 0 : ((x) > 0 ? 1 : -1))
/* plane defined by ax + by + c = 0. (x and y are in fineness units.) */
typedef struct
{
long a, b, c;
} Plane;
/* box defined by its top left and bottom right coordinates (in fineness) */
typedef struct
{
long x0,y0,x1,y1;
} Box;
typedef struct
{
long x,y;
} Pnt;
typedef struct WallData
{
union {
struct WallData *next; /* next in list of polys coincident to separator plane */
int next_num; /* number of next wall (used during loading) */
};
int num; /* Ordinal # of this wall (1 = first wall) */
int pos_type; /* bitmap to tile wall with on positive side */
int neg_type; /* bitmap to tile wall with on negative side */
int flags; /* characteristics of wall (transparency, left/right swap) */
int pos_xoffset; /* X offset of + side bitmap */
int neg_xoffset; /* X offset of - side bitmap */
int pos_yoffset; /* Y offset of + side bitmap */
int neg_yoffset; /* Y offset of - side bitmap */
int pos_sector; /* Sector # on + side */
int neg_sector; /* Sector # on - side */
int x0, y0, x1, y1; /* coordinates of wall start and end */
int length; /* length of wall; 1 grid square = 64 */
int z0; /* height of bottom of lower wall */
int z1; /* height of top of lower wall / bottom of normal wall */
int z2; /* height of top of normal wall / bottom of upper wall */
int z3; /* height of top of upper wall */
WORD server_id; /* User-id of wall */
WORD pos_sidedef; /* Sidedef # for + side of wall */
WORD neg_sidedef; /* Sidedef # for - side of wall */
WORD linedef_num; /* linedef # this wall came from; used for debugging */
/* (x0,y0) and (x1,y1) must satisfy separator plane equation */
/* positive side of wall must be on right when going from 0 to 1 */
} WallData, *WallList, *WallDataList;
typedef struct
{
Plane separator; /* plane that separates space */
union {
WallList walls_in_plane; /* any walls that are coincident to separator plane */
int wall_num; /* number of first wall in list (used during loading) */
};
union {
struct BSPnode *pos_side; /* stuff on ax + by + c > 0 side */
int pos_num; /* number of node on + side (used during loading) */
};
union {
struct BSPnode *neg_side; /* stuff on ax + by + c < 0 side */
int neg_num; /* number of node on - side (used during loading) */
};
} BSPinternal;
#define MAX_NPTS 100
typedef struct {
int npts; /* # of points in polygon */
Pnt p[MAX_NPTS+1]; /* points of polygon (clockwise ordered, looking down on floor) */
/* invariant: p[npts] == p[0] */
} Poly;
typedef struct BSPleaf
{
long tx,ty; /* coordinates of texture origin */
Poly poly; /* Polygon of floor area */
int floor_type; /* Resource # of floor texture */
int ceil_type; /* Resource # of ceiling texture */
int floor_height; /* Height of floor relative to 0 = "normal" height */
int ceiling_height; /* Height of ceiling relative to 0 = "normal" height */
BYTE light; /* Light level in leaf */
WORD server_id; /* User-id of wall */
int sector; /* Sector # of which leaf is a part */
} BSPleaf;
typedef enum
{
BSPinternaltype = 1,
BSPleaftype
} BSPnodetype;
typedef struct BSPnode
{
BSPnodetype type; /* type of BSP node */
Box bbox; /* bounding box for this node */
int num; /* Ordinal # of this node (1 = first node) */
union
{
BSPinternal internal;
BSPleaf leaf;
} u;
} BSPnode, *BSPTree;
int BSPGetNumNodes(void);
int BSPGetNumWalls(void);
BSPnode *BSPBuildTree(WallData *wall_list, int min_x, int min_y, int max_x, int max_y);
WallData *BSPGetNewWall(void);
BSPTree BSPFreeTree(BSPTree tree);
void BSPDumpTree(BSPnode *tree, int level);
BSPTree BSPRooFileLoad(char *fname);
void BSPTreeFree(void);
BYTE ComputeMoveFlags(BSPTree tree, int row, int col, int rows, int cols,
int min_distance);
BYTE ComputeSquareFlags(BSPTree tree, int row, int col, int rows, int cols);
#endif /* #ifndef _BSP_H */
|
HostOnNet Blog
Chinese smartphone maker Xiaomi has roped in former Google executive Jai Mani to head its products team in India, where the Chinese company is competing with global and domestic firms for a slice of the multi-billion smartphone market.
Xiaomi Global Vice President Hugo Barra said Mani has just joined the Mi India team as lead product manager.
“Jai has relocated all the way from SF to Bangalore and hit the ground running on his first day co-hosting a Mi fan meet-up with his partner in crime +Rohit Ghalsasi,” he added.
Barra, who himself is a former Google employee, said Android fans would remember Jai from his “memorable on-stage demo performances at Google I/O and Nexus launches”.
According to Mani’s LinkedIn profile, he had been a Google Play Strategy and Analytics associate before he co-founded a startup.
Xiaomi, which is a relatively new player in the multi-billion smartphone market in the country, sells its products in India exclusively through e-commerce website Flipkart.com.
Apart from global giants like Samsung and Nokia, Xiaomi also competes with domestic players like Micromax and Karbonn as well as other Chinese companies like Oppo and Gionee.
So far, it has launched two devices in the Indian market — Mi 3 and Redmi 1S.
About sherly
I am Sherly, living in the city of Alappuzha, which is known as the 'Venice of the East'. |
Q:
How can I apply a custom scalar function elementwise to a matrix in math.js?
Consider the matrix m:
let m = [ [ 1 , 2 ] , [ 3 , 4 ] ]
Apply the exponentiation function to m:
let mexp = math.exp(m)
Now JSON.stringify(mexp) outputs:
"[[2.718281828459045,7.38905609893065],[20.085536923187668,54.598150033144236]]"
So the built-in exponentiation function was applied elementwise to the matrix and the result is a matrix.
Let's say I have a custom scalar function sigmoid:
let sigmoid = x => 1 / ( 1 + Math.exp(-x) )
Now I would like to apply sigmoid elementwise to the matrix as if it were a built-in math.js function:
math.sigmoid(m)
How can I implement this?
A:
You can simply use math.map and pass your sigmoid function as the callback; map applies it to every element of the matrix:
math.map(m, sigmoid)
more here http://mathjs.org/docs/reference/functions/map.html
|
Q:
DDD => behaviour in a root aggregate: instantiating another root aggregate
I have 2 root aggregates:
- invoice
- complaint
And I have a rule that says: "I can't delete an invoice if a complaint is opened on it".
In my delete behaviour on the invoice aggregate I want to check whether a complaint exists, like:
Complaint complaint = ComplaintRepository.findByInvoiceId(invoiceId);
if(complaint.isOpened) {
throw new Exception("Open Complain...");
}
else{
...
}
My colleagues and I disagree on this.
They told me that I can't instantiate a Complaint in my behaviour since Complaint is not in my aggregate.
My opinion is that I can't have a Complaint attribute in the Invoice class, but:
- I can reference one with a Value Object (they are ok with this)
- I can read/load an instance since I do not call behaviour on it...
Do you have an opinion on this?
A:
Technically you can do what you're proposing: from a certain point of view, if you're injecting a ComplaintRepository interface into the invoice, either through constructor injection or method injection, you're making the Invoice dependent on the contracts of both the Repository and the Complaint, and that's pretty much allowed.
You are right when you say you can't hold a reference to the complaint, but you can inject DDD artifacts (such as factories/repositories/entities) into operations when they're needed to run.
However, the main point you must ask yourself is: do you really want this level of coupling between two distinct aggregates? At this point they're so coupled together that they can hardly operate without one another.
Considering all of this, you might be in a scenario where the complaint should just be part of the invoice aggregate (although your invoice aggregate probably has other responsibilities and you will start to struggle with the "Design Small Aggregates" goal). If you think about it, that's what the invariant "I can't delete an invoice if a complaint is opened on it" is proposing.
If for all means it's not practical for you to model the complaint as part of the invoice aggregate, you have some other options:
Make these aggregates eventually consistent: instead of trying to delete the invoice in "one shot", mark it as flagged for deletion in one operation. This operation triggers some sort of Domain Event in your messaging mechanism. This event, "InvoiceFlaggedForDeletion", will then check for complaints on the Invoice. If there are no complaints, you delete it. If there are complaints, you roll back the deletion flag.
Put the deletion process in a Domain Service. That way, the Domain Service will coordinate the efforts of checking for complaints and deleting the invoice when appropriate. The downside of this approach is that your Invoice entity will be less explicit about it's rules, but DDD-wise this sometimes is an acceptable approach.
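For illustration only, a minimal sketch of that second option in Python (the class and method names are hypothetical, not taken from the question's codebase): the service coordinates the complaint check and the deletion, so the Invoice aggregate never loads a Complaint itself.

class OpenComplaintError(Exception):
    pass

class InvoiceDeletionService:
    """Domain service that owns the cross-aggregate deletion rule."""

    def __init__(self, invoice_repository, complaint_repository):
        self._invoices = invoice_repository
        self._complaints = complaint_repository

    def delete_invoice(self, invoice_id):
        # Enforce the invariant: no deletion while an open complaint exists.
        complaint = self._complaints.find_by_invoice_id(invoice_id)
        if complaint is not None and complaint.is_opened:
            raise OpenComplaintError(
                "Cannot delete invoice %s: an open complaint exists" % invoice_id)
        self._invoices.delete(invoice_id)

The trade-off, as noted above, is that the rule now lives in the service rather than being explicit on the Invoice entity itself.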
|
Quick shots – Dead or Alive 5 screens show Rig, Bass
Koei Tecmo has released a set of screenshots to go along with the trailer of Rig released earlier today. You’ll also notice a render of Bass in the mix below as well. Dead or Alive 5 hits PS3 and Xbox 360 in late September. |
TEXAS COURT OF APPEALS, THIRD DISTRICT, AT AUSTIN
NO. 03-03-00063-CR
NO. 03-03-00064-CR
NO. 03-03-00065-CR
Julius Drew, Sr., Appellant
v.
The State of Texas, Appellee
FROM THE DISTRICT COURT OF TRAVIS COUNTY, 167TH JUDICIAL DISTRICT
NOS. 3011431, 9020800 & 9020932, HONORABLE MICHAEL LYNCH, JUDGE PRESIDING
M E M O R A N D U M O P I N I O N
In district court cause number 3011431, appellant Julius Drew, Sr., pleaded guilty to assault with bodily injury, a lesser included offense of the sexual assault alleged in the indictment. See Tex. Pen. Code Ann. § 22.01(a)(1) (West 2003). As called for in a plea bargain agreement, the court assessed punishment at incarceration in county jail for one year, probated for two years. In cause number 9020800, Drew pleaded guilty to recklessly causing bodily injury to an elderly person. See id. § 22.04(a)(3). As called for in the agreement, the court assessed punishment at incarceration in a state jail for two years, probated for five years. In cause number 9020932, Drew pleaded guilty to the unauthorized practice of law, a lesser included offense of holding oneself out as a lawyer. See id. § 38.123(a)(1). The court again followed the agreement and assessed punishment at incarceration in county jail for one year, probated for two years.
At the time the sentences in these causes were suspended and the notices of appeal were filed, it was necessary for a notice of appeal from a bargained guilty plea to state that the appeal was for a jurisdictional defect, or that the substance of the appeal was raised by written motion and ruled on before trial, or that the trial court granted permission to appeal. See former Tex. R. App. P. 25.2(b)(3) (since amended). The notices of appeal in these causes stated, in effect, that the court promised Drew that he could appeal. The records do not support this statement, however, and at a subsequent hearing on Drew's request for permission to appeal the court denied its permission.
Drew's briefs do not raise a jurisdictional issue, nor do they complain of a ruling on a pretrial motion. Without the trial court's permission to appeal, the issues raised in the briefs are not properly before us. Woods v. State, 108 S.W.3d 314, 316 (Tex. Crim. App. 2003); Whitt v. State, 45 S.W.3d 274, 275 (Tex. App.--Austin 2001, no pet.).
The appeals are dismissed. (1)
__________________________________________
Mack Kidd, Justice
Before Chief Justice Law, Justices Kidd and Puryear
Dismissed
Filed: October 2, 2003
Do Not Publish
1. Drew's motion to dismiss appellee's brief is overruled.
|
A controversial post criticizing Harjit Sajjan's appointment as an example of affirmative action has been pulled from a Facebook page run by the Conservative Party of Canada's Kelowna-Lake Country Electoral District Association.
The post was connected to a news article about a Canadian Forces Flight in 2017 that reportedly cost $337,000 and featured a photo of Sajjan.
"This is what happens when you have a cabinet based on affirmative action," the comment above the article read.
The post was deleted on Thursday following criticism on social media platforms that the comment was racist in tone.
Kelowna-Lake Country Liberal MP Stephen Fuhr called out the post on Twitter.
"I am not ok with what this implies. You are an 'Okanagan Conservative' @DanAlbas are you?," he wrote.
I am not ok with what this implies. You are an "Okanagan Conservative" @DanAlbas are you? https://t.co/hhA0FwE8g7 @CastanetNews @KelownaNow @AM1150 @GlobalOkanagan @infonewskelowna @KelownaCapNews @KelownaCourier pic.twitter.com/fm4wEAa3NS —@FuhrMP
MP apologizes
Albas, Conservative MP for Central Okanagan-Similkameen-Nicola, responded by tweeting he also found the post inappropriate.
"Stephen, although there is much that we may disagree about, on this point, I am in full agreement with you. This FB post came from another EDA account, and I have asked that it be removed. Although I did not authorize it, I would like to apologize to @HarjitSajjan," he wrote.
Albas told CBC News on Friday he contacted the Kelowna-Lake Country Electoral District Association about the comment.
The post was taken down and replaced with an apology to Minister Sajjan on Thursday afternoon. The apology has since been removed from the Facebook page.
Stephen, although there is much that we may disagree about, on this point I am in full agreement with you. This FB post came from another EDA account and I have asked that it be removed. Although I did not authorize it, I would like to apologize to @HarjitSajjan https://t.co/5yFdjLPvsC —@DanAlbas
The offending comment was made by a volunteer with the electoral district association who has since been removed from the role of posting to social media for the association, said Albas.
The volunteer was a young Indo-Canadian man, who was disappointed with how Sajjan was fulfilling his role as defence minister, according to Albas, who stressed the post was a mistake in judgment and not an indication that racist views are held by Conservative party volunteers.
"I don't like this kind of rhetoric. I don't stand for it whether it be in my own riding association or an adjacent one or right across the country." Albas said.
"That being said, I think it's important that we recognize that people will make mistakes on social media but there should be consequences and in this case there was."
Conservative Party of Canada's Director of Communications Cory Hann responded in an email to CBC News denouncing the post.
"It obviously doesn't reflect the view of the party. It's inappropriate, shouldn't have been posted, and those responsible have been removed from the Facebook group," Hann wrote.
On Friday, Sajjan said he was surprised and disappointed by the post but appreciated the apology.
"This type of language has no place in Canada," Sajjan said. "I was heartened also at the same time to see that Conservative MPs have stepped forward, apologized and denounced this type of wording and discussion." |
Wallbrook
Wallbrook may refer to:
Places
Wallbrook, Dudley in the West Midlands, England
Wallbrook Primary School in Coseley, West Midlands, England
Fiction
Wallbrook, a fictitious mental institution in the 1988 American drama film Rain Man
See also
Walbrook (disambiguation) |
Amy Schumer‘s new movie Trainwreck draws heavily from her own life, but there’s one embarrassing incident that didn’t make it to the big screen: her arrest for grand larceny shoplifting!
Schumer has previously admitted that she had sticky fingers well into her adult years. “It was just all from, like, department stores,” she told Rolling Stone last year. “I guess the impressive part was that I would return it for cash, and the most that I’d ever made was probably around $1,000. But I did it a lot.”
When she was 21, however, her bad habit caught up to her, and she and sister Kim Schumer — who is now a writer on her show — were arrested during one foiled heist.
“We were arrested for grand larceny,” the Inside Amy Schumer star admitted, but her relationship with uncle Charles Schumer, a New York Senator, came in handy.
She said, “[The cops] were like, ‘You’re lucky you have this last name.’ And they pleaded it down, to like disturbing the peace or something.”
The case is so old that no publicly available records remain in New York, but Schumer, now 34, has said the incident almost torpedoed her chance at fame.
“I really thought that sh*t was off the record, until I was on Last Comic [Standing, in 2007] and they asked me if I had any arrests,” she told Laughspin in 2011, “and I was like, ‘Eh, not really.’ They have real lawyers … and they knew everything I’d stolen.” |
VALSAD/JUNAGADH: In a veiled attack on Rajiv Gandhi , who had famously admitted in the 1980s that only 15 paise of a rupee earmarked for the poor by the government reach them, PM Narendra Modi on Thursday said "poora rupaiya (every paisa of a rupee)" released by the Centre reaches the beneficiaries now.
Stressing that corruption-free governance has been the hallmark of his government's four-year rule, Modi told a public gathering in Jujwa, a tribal village near Valsad, "Middlemen have been eliminated. If one rupee is released by Delhi, the entire 100 paise (now) reach the poor."
After witnessing a house warming ceremony by beneficiaries of the PM Awaas Yojna-Gramin in 26 districts of Gujarat via videoconferencing, Modi said, "Seeing the quality of houses, people are surprised that they have been built under a government scheme. This has been possible just because katki (corruption) company has been forced to close down."
Interacting with women house owners in various districts, Modi even asked a few of them if they had to pay any bribe to the middlemen to get the amount of around Rs 1.5 lakh under the scheme. The women said they didn't have to pay anyone to avail of the benefits.
|
Thursday, March 18, 2010
Yoshihiro Togashi continues to be both a source of good news and bad for Hunter x Hunter fans. His anecdote accompanying issue 14 of Hunter x Hunter shows promise:
My hay fever has lightened each year. I think I'm recovering
Some fans hope that this windfall will free up more time for HxH, but other signs point to bad news.
Issue 15 ended with this quote:
I saw my Senreki. But I'm annoyed by the fairy that accompanies me.
We're almost positive he's referring to Dragon Quest IX again. Senreki is the combat log screen, showing a list of your defeated monsters as well as other "collectibles" for those who like to obtain a 100% completion. The fairy is the much maligned, ganguro-esque fairy guide Sandy, whom everyone loves to hate.
Fans do get a little treat from Shounen Jump very soon. For three weeks, starting March 20th, the magazine is having a special fair at all Animate locations. All of their products will be on sale and with each purchase, customers get one of 40 clear bookmarks with art from their most popular series, including Hunter x Hunter! You can check out the HxH bookmark as well as the other 39 here! |
Echocardiographic prediction of volume responsiveness in critically ill patients with spontaneously breathing activity.
In hemodynamically unstable patients with spontaneous breathing activity, predicting volume responsiveness is a difficult challenge since the respiratory variation in arterial pressure cannot be used. Our objective was to test whether volume responsiveness can be predicted by the response of stroke volume measured with transthoracic echocardiography to passive leg raising in patients with spontaneous breathing activity. We also examined whether common echocardiographic indices of cardiac filling status are valuable to predict volume responsiveness in this category of patients. Prospective study in the medical intensive care unit of a university hospital. 24 patients with spontaneously breathing activity considered for volume expansion. We measured the response of the echocardiographic stroke volume to passive leg raising and to saline infusion (500 ml over 15 min). The left ventricular end-diastolic area and the ratio of mitral inflow E wave velocity to early diastolic mitral annulus velocity (E/Ea) were also measured before and after saline infusion. A passive leg raising induced increase in stroke volume of 12.5% or more predicted an increase in stroke volume of 15% or more after volume expansion with a sensitivity of 77% and a specificity of 100%. Neither left ventricular end-diastolic area nor E/Ea predicted volume responsiveness. In our critically ill patients with spontaneous breathing activity the response of echocardiographic stroke volume to passive leg raising was a good predictor of volume responsiveness. On the other hand, the common echocardiographic markers of cardiac filling status were not valuable for this purpose. |
Monday, 3 October 2016
Trump questions Hillary Clinton’s loyalty to her husband
Weakened by damning revelations about his taxes, Republican Donald Trump has intensified his personal attacks on Hillary Clinton as he scrambled Monday to counter his rival’s substantial gains in the White House battle.
Trump broke new ground in the violence of his personal attacks on Clinton at the weekend, mocking her for coming down with pneumonia last month and even openly questioning her loyalty to her husband.
“Here’s a women who’s supposed to fight trade deals in China… she’s supposed to fight all of these different things, and she can’t make it 15 feet to her car. Give me a break,” Trump said Saturday night in Manheim, Pennsylvania as he imitated Clinton stumbling into her vehicle during a 9/11 ceremony in New York.
“Hillary Clinton’s only loyalty is to her financial contributors and to herself,” he said.
“I don’t even think she’s loyal to Bill, if you want to know the truth. And really, folks, really, why should she be, right?” said the Manhattan billionaire, who has revived talk of Bill Clinton’s past infidelities in the wake of his lacklustre performance in last week’s debate.
The Democrat Clinton has surged in recent polling following their first debate, pushing the brash billionaire Trump onto his heels with just 36 days to go before the November 8 election.
The pair were visiting America’s battleground states Monday: Trump addressed military veterans in Virginia before a rally later in Colorado, while his rival Clinton was travelling to key swing state Ohio.
Following what Clinton’s campaign described as “his worst week yet” — culminating with the leak of documents suggesting he may have paid no income tax for two decades — Trump revived his attacks on the former secretary of state’s handling of classified information via a “basement” email server.
“Hillary Clinton’s only experience in cyber-security involves her criminal scheme to violate federal law, engineering a massive cover-up and putting the entire nation in harm’s way,” he said.
– ‘Good at business?’ –
Even as he launched the contentious new attacks, a defiant Trump campaign dodged swirling questions about his tax record.
Without admitting fault, Trump’s top allies praised their candidate’s business acumen following the bombshell revelations by The New York Times focusing on the real estate mogul’s massive 1995 losses and his clever use of the US tax code.
If true, the report is proof of the tycoon’s “absolute genius,” said former New York mayor Rudy Giuliani, a key Trump surrogate.
“You have an obligation when you run a business to maximize the profits and if there is a tax law that says I can deduct this, you deduct it,” Giuliani told ABC News Sunday.
According to documents obtained by the Times, Trump declared a loss of $916 million on his 1995 tax return, enabling him to legally avoid paying taxes for up to 18 years.
Trump has refused to release his tax returns, something US presidential candidates have done for four decades.
In their September 26 presidential debate, Clinton suggested that Trump is hiding “something terrible” by failing to produce his tax returns, and suggested that he had not paid any federal income tax.
Trump’s answer: “That makes me smart.”
He reportedly took massive, though legal, tax breaks on failing businesses, earning millions while shareholders and investors swallowed the losses and contractors went unpaid.
Clinton seized on the Times report as undercutting Trump’s core argument: that he is an iconic business success whose acumen can translate into positive action in the White House.
“Can a man who lost $1 billion in one year, stiffed small businesses, and may have paid no taxes really claim he’s ‘good at business’?” she tweeted Monday.
The tax scandal marked a low point in a bruising week for Trump in which he lost the momentum gained over the previous month and was seen as having stumbled in the debate.
An ABC News/Washington Post poll Sunday said 53 percent of Americans saw Clinton as the debate winner, compared to 18 percent for Trump.
A nationwide poll released Monday by Politico and Morning Consult showed Clinton with 42 percent support from likely voters compared to 36 percent for Trump, a four-point Clinton gain from the previous week.
Tuesday will see the vice presidential nominees clash in their only debate of the election cycle, with Republican Mike Pence, the governor of Indiana, and Democratic Senator Tim Kaine of Virginia tangling on issues likely to include abortion, climate change and trade — before the nominees themselves face off in their second debate on Sunday. |
This subproject is one of many research subprojects utilizing the resources provided by a Center grant funded by NIH/NCRR. The subproject and investigator (PI) may have received primary funding from another NIH source, and thus could be represented in other CRISP entries. The institution listed is for the Center, which is not necessarily the institution for the investigator. The RCMI-supported core shared research facilities at the Center for Study of Gene Structure and Function at Hunter College provide essential services to support the first-class research of our scientists. It is imperative that we improve the strengths of the facilities by constantly updating the equipment and expertise of the managers and users. The facilities also allow us to compete for topnotch scientists, highly qualified graduate students and post-doctoral researchers who contribute to our research output. An environment where researchers compete successfully for outside funding is critical to our overall aims, and with the addition of new and planned facilities we will be able to better address the problem of health disparities in the United States. In the next grant period, we plan to hire essential personnel and to upgrade the equipment in the following facilities: Bioimaging, Animal Care, Genomics, Internet2 and the Gene Center Local Area Network (LAN). Our plans include improving these core shared research facilities as well as the Fluorescence Activated Cell Sorting (FACS), the Nuclear Magnetic Resonance (NMR) and the X-Ray Diffraction Facilities. These improvements require the following: (a) supporting service contracts with grant funds and users' fees; (b) upgrading the skills of facilities' managers through advanced training, continuing education, external collaborations and professional certifications; and (c) upgrading standard operating procedures, record-keeping, manuals, and websites. Improving our core facilities will help us to maximize our scientific output and attain the goals of the RCMI Program.
Identification and differentiation between colored pencils.
With pencils of any color, the first step certainly involves visual examination to distinguish between the color tints and to study the quality of the stroke itself. In many instances by this step alone, 2 pencils can be distinguished if they are the product of 2 different manufacturers. In other words, the most useful of all tests is the visual examination. In the case of red pencils, infrared luminescence reveals significant information and should be resorted to in those instances where two questioned strokes are extremely similar. Study under ultraviolet radiation may also help to establish similarities or differences which are not readily discernible by visual examination. However, reflected infrared examination and study with dichroic filters have no particular value when dealing with red pencil although chemical spot testing may be of some assistance. In the case of blue pencils, a number of brands absorb infrared radiation to a different extent and some give off distinctive infrared luminescence, so that these 2 tests can assist in distinguishing between certain similar colors or tints. Also, study through dichroic filters has value. Thus, the combination of these 3 tests can be of some advantage, but in most instances, particularly with dark blues, examination under ultraviolet radiation is not particularly helpful. Chemical spot testing has some limited advantages but generally only in combination with all other tests. In the case of green pencils, the same pattern of testing may be used as with blues. Many of the greens absorb infrared rays and some have bright infrared luminescence. Ultraviolet radiation can cause certain greens to fluoresce in a distinctive way. Thus, with each color studied, differences can be revealed and similarities established. Colored pencils are a distinct group of writing instruments. Within any color classification, there is a variety of shades or tints. Although it is not possible to determine the make of pencil used in any particular writing, it is possible under many circumstances to distinguish between the work of different makes and grades of red, blue and green pencils, as well as other colors which have not been covered by this paper. While visual examination will separate the many different makes, other tests are described which will further assist in grouping or separating colored writing strokes. For a number of reasons, and particularly because of manufacturing procedures of different companies, not every make of pencil is distinctive, but there has been found to be a definite variety within each color group, and many makes are distinguishable.
The invention relates to a contact-free plate conveyor particularly for glass plates.
In the float glass production, the glass web which has been formed and solidified on a liquid metal bath is carried away in a well-known manner by plate conveyors which are generally roller conveyors on which the glass web is moved with its bath side disposed on the rollers. During movement, the finished glass web is cut longitudinally and transversely into glass plates of the desired format and these glass plates are also moved by plate conveyors to a stacking station where the plates are removed from the conveyor.
Through its contact with the rollers during this movement, small traces remain on the support side of the glass. These traces are normally not noticeable but are objectionable for some applications and therefore detrimentally affect the quality of the glass surface.
It is the object of the present invention to provide a conveyor by which such traces inflicted by the mechanical contact between the glass surface and conveyor elements are avoided.
Natt's Journal: fandom, fandom, fandom...
Fics! Recs! Yeah!
On whippersnappers and sex
July 22nd, 2004
I was looking at lasultrix's post about minors reading NC-17 material (and the comments within it) and was highly interested. But as I got to thinking about a response, I went off subject in my head. This doesn't pertain to the same things in Lasair's post, exactly; it's just an off-shooting opinion of what she and the commenters had to say.
---
Smart Devils or Ignorant Angels? Kids and sex and why people think they shouldn't mix
Have people been trained to believe that 18 is the ideal age to engage in "adult activities" because it's the age of majority in most places? Or do they sincerely believe it?
These days, when sex is mainstream entertainment, kids younger and younger are being exposed. As a result, more kids know what's smart and what's not about sex. So what's with people clinging to the idea of protecting their kids' pureness from the dirty-dirty? Kids are not "pure". They are horny little fuckers and they have sinful, nasty thoughts, just like everyone else.
I'm thinking part of it is that parents are afraid of their kids growing up. We don't want our puppy to turn into a doggy because then he's not cute anymore, just slobbery. Just the same, parents don't want their kids to think adult thoughts, or what they perceive as adult thoughts, because then they will not be little angels anymore. More bluntly---it's selfishness.
Not selfishness alone, though.
There is the general belief that sexuality is simply bad for children, in all its forms. The notion is unavoidable. It's in television programs, commercials, magazines, churches, books, posters, public schools. It's planted in the brain from an early age. It's a near unshakable ideology because the subject of children is a delicate one. And some people don't bother considering that an underage person might be capable of reading NC-17 material and coming out just fine, because it's just so scandalous. Even if they disagree, they can't tell their kids they don't mind them watching porn because they don't want to be a bad parent, do they? They don't want people to think that they're perverts, do they? So. Mouths shut, minds closed.
There is also the belief that sex is something sacred, something that should only be between two married people, and, of course, you can't do that until you're 18; if it's not within those circumstances, don't see sex, don't hear it, don't speak it.
When it comes down to it, sexuality is made out as something dangerous altogether. Perhaps the most dangerous thing in the world. Whether it's the act of sex or even the thought of sex. And to mix it with children, who are impressionable until the moment they turn 18, would be outrageous.
But, in my opinion, the most dangerous thing in the world is ignorance. I don't mean that in a PC, peace and love, you-have-to-celebrate-Kwanza-or-you're-a-racist sort of way. Just that keeping a person in a protective bubble does more damage than good.
Hiding sex from kids is censorship, and I oppose censorship in almost every form. I don't care how old the person in reference is. Honestly, when I hear BLEEP over the word "fuck" on television, when I see the blurred out middle-finger, when I hear about Janet Jackson's sinful breast, when I read list after list of books removed from student libraries for "indecency" or "racial insensitivity," I think of Communist Russia. An extreme reaction on my part, yes, but my reaction all the same.
The following is an example (from my life) about the results of censorship.
---
Mom: Do you know what sex is?
Natt has always been shielded from this subject and, as a result, concludes that Mother will be angry if she knows that Natt knows.
Mom: But you can always come to me if you have questions. Just don't have sex. Or watch anything containing it. Or talk about it.
Shortly thereafter:
Natt: *horny*
Instead of confiding in parents or reading an educational book or exploring personal genitalia, Natt sneaks into the living room late at night and watches porn channels.
Porn: I'm going to show you how to stick a large penis into this woman's anus.
Natt: *learns*
---
So, because I thought everyone disapproved of sex and didn't want me to know about it, mainly my parents, I never once confided in my mother about anything sexual. Kids don't want to get into trouble. It's the worst thing possible. If they sense disapproval, they may end up getting their education from less savory sources.
I don't mean just younger kids. Teenagers too. The ones who are reading your sexually explicit fanfic and, a lot of times, writing it. Both teens and children, and adults, really, would be better off not having to sneak around to enjoy things that come naturally. Society pushes the belief that sex is inappropriate, even while most everyone loves it and most everyone is interested in it. But it wouldn't be such a big deal if people stopped making it a big deal.
So. What I am trying to say is this: I think it's a combination of pressures that makes a person believe underage people shouldn't be reading, watching, or having sex: mainly fear (of shocking family and friends, of losing jobs, of being confronted with legal matters), and also indoctrination and selfishness. These people should give it some more thought, in my opinion. After all, sex is as natural as breathing (and much safer on the screen than in the bed).
I read what you said in your journal, about it being an issue unique to every person; we agree there too. While I think sex is a bearable subject at any age, it should be up to the individual when they are ready for what.
There's a difference, though, between what I decide is OK for my kid and what other parents decide is OK for their kids. My son (who is 9) and I have been discussing sex for close to a year already. I don't have any problems with him reading sexy stuff when he gets to the age where I think he can handle it.
However, I have no right to tell other parents what's best for their children. If they don't want their kids reading porn, that's their decision and their affair. The only time it becomes my business is when other parents -- or the government -- freak out because their precious kiddies got exposed to something they don't want the kids to see. That's the *only* reason I care about the underage issue. I don't really give a damn what kids read. I'm not responsible for anyone's child but my own. But when we live in a country where ridiculous shit like this happens, it's best to at least give the appearance of prudence on an issue like this.
I was a family planning counselor for over 10 years and I've seen/heard soooo much from kids who don't a clue about their own sexuality. Many of these kids and young adults dive in dick first because they don't have anyone to talk to about the mechanics or, more importantly, the emotions involved in sex. As a parent it is my job to teach my children about their bodies - every function from A to Z. It is embarrassing to tell your nine year old what humping is? Yes, been there done that. He asked and I wasn't going to lie. As he gets older and his body continues to change it will surely get more embarrassing. But I don't want him coming to me some day and telling me he has gotten a girl pregnant or has HIV. Am I afraid that he'll use some new found piece of information to try something sexual when he gets older? Well a little, but I'm more afraid of his doing something without any information.
I saw what happens to kids who don't have a clue when their hormones finally kick into overdrive. Often times they learn by doing and end up making HUGE life altering mistakes in the process (pregnancy, STD, HIV, etc). It is a life-altering experience to have to tell a fifteen year old girl that she is pregnant. When she replies that she had no clue how it happened and her parents would kill her (some kids are over-dramatic about this and some are dead serious) your heart breaks. Try telling a 17 year old girl that she has precancerous lesions on her cervix from genital warts. I can't imagine having to be the one getting that news as it is bad enough giving it. Information is good, ignorance can kill or destroy someone's life.
What shocked me the most were the people who had sex in search of something that was not related to sex at all. These people would do ANYTHING for a partner thinking they would find love/acceptance/an escape/whatever. They may know all the facts but have so much emotional baggage that they chose to ignore it. A pamphlet and a chat will do little to help these people.
I think this post resonates with me at the moment because last night, I was watching a show about plastic surgery, and a poor unfortunate woman who was getting a badly needed breast reduction to go from a 38 II (yes, that's right - DOUBLE I BREASTS) to a slightly more manageable 38 E. And during the part of the show where the surgeon is marking her up before she's wheeled into the operating room, the camera blurs out her nipples.
And I stared at the blurry rings with complete incomprehension on my face. This poor woman's breasts were immense, well over the border of 'grotesque' and edging into 'downright freakish'. (The doctors ended up removing 5000 grams of tissue -- over 10 pounds -- from her breasts.) There was absolutely no way that you couldn't know what the doctor was marking up...and the television show STILL thought that her nipples were something that had to be censored.
Here from daily_snitch and I agree completely. Must be something in the air, because I independantly posted about this very topic earlier today. There is no ON/OFF switch that comes with voter registration that controls your libido.
There's a difference between putting out fires in your living room, and pretending the fire isn't there.
I just wanted to add to this that the reasons for 18 being set have very little to do with ideas about sexual development. 18 is used as the point of suffrage in many Western countries, but it was not set there because of ideas about sex or sexual identity. 18 is about suffrage and property/contract, and the relationship between suffrage and property laws and various educational discourses (including being sexually informed as distinct from sexually developed) is a fraught one. 18 became a kind of compromise, really.
snortsnortsnort! Exactly what happened to me. Sex was a non-existent item in our family - although it must have happened, sometime, somewhen, as we were five sisters. But my mother always pretended people didn't exist from the waist downwards. So I did the same as you, I learned from other sources. Sure it didn't damage me, but it could have been so much more comfortable and yes, easier, if it could have been openly discussed at home.
Yes, sex IS a dangerous thing, because it is such a powerful drive, but it would be far more harmless if it was handled in a more mature way. What makes it dangerous is the suppression of it, not the dealing openly with the subject, imo.
Why it is considered harmful for children to know about/read/watch/have sex WHEN THEY WANT it is beyond me and I have not yet found anyone able to answer this question to my satisfaction.
Why it is considered harmful for children to know about/read/watch/have sex WHEN THEY WANT it is beyond me and I have not yet found anyone able to answer this question to my satisfaction.
The simple answer to that is that kids don't always want things that are good for them. I mean, because a kid wants to have sweets before supper, should they get them? If they want to rollerskate down a cliffside, should you let them? Hell no.
Why kids shouldn't know about sex whenever they want? I dunno. Hell, I can remember back to when I was three years old, and I still don't remember being told about sex. It just always *was*, and I think that's the way it should be. However, as for reading (non-technical), watching, and doing? Sometimes you're just not ready for what you think you are. A ten year old might think they're ready, but that doesn't necessarily mean they are. Kids always think they're more grown up than they are, but don't realise it until they *are* more grown up. If that makes sense.
It really depends on the kid. The parent/guardian should be on the ball to educate the child, and I don't think reading/watching is something any kid would be ready for before they're 13 or 14. I have my own views on the 'doing' bit, but I think slapping '18' on it is stupid. How are you mature enough to have sex on your 18th birthday, if you weren't the day before? Meh.
I completely agree with every thing you said, and you said it very well, too. I was personally raised in a very religious home, and ran right up against that wall of ignorance.
I was given a book - not talked to, given a book - when I had my first period. I was also terrified to bring up to my mother the fact that I was even *having* my period. By the time she found out, I had already had to fake being sick at school once in order to come home and try to deal with something I knew *nothing* about, and I'd been living in horror for half a year, sneaking tampons from my mother and sister each month and praying (oh, the irony) that no one would ever know.
The general attitude at home was that sex was BAD and WRONG in any form. I had it drilled into my head to the point that I had a literal breakdown when I realized one night that I had interrupted my parents going at it. If it was evil, why were they doing it?? Not that I could ask, of course. My mother wouldn't even answer me when I asked, in complete seriousness, at age 12, how two people who weren't married could have a child. What?
From that point on, about 5th grade, I decided 'Fuck this', and got my education on my own. I trolled the public library for adult books, snuck the occasional rated-R movie or primetime TV show, and eventually, learned all I needed to know about the dynamics and logistics and normal sexuality. Oh, and fanfic factors in there, too. Good Lord, where would I *be* without the lessons learned from reading porn underage?
To this day, I think my mother still assumes that I at 20 and my sister at 24 are innocent to anything more than the biological fundamentals of sex and sexual behavior. And of course we're both untouched virgins, waiting for that ring and blessed matrimony. God knows what would have happened if we'd been rebellious types.
*sigh* I'm very lucky, really, to have come out even remotely well-adjusted. Obviously I'm an extreme case, but my children will be raised very very differently than I was.
Here from the daily_snitch
Porn: I'm going to show you how to stick a large penis into this woman's anus.
Natt: *learns*
Anyway, I do think that the issue is a great deal more complicated than just "YUO MUST NOT HAEV TEH SEXES UNTIL 16/18/28/35/64/102" as it depends on the individual and on the situation. I wouldn't leave an eleven year old alone with a copy of some bestial/carjack porn, but I wouldn't want to be alone with that kind of thing myself, and I'm twenty-two. I think an eleven year old can stand to read about masturbation and tits. Even masturbation with tits ("sausage bap, madam?") would probably turn out okay. In short, any sort of ordinary sex probably won't hurt a young'un if they come across it in a book. Society's horror of tits on the TV is really irritating, since kids surely already know what they look like.
speeling...
My colleague and I were talking the other day about parenting and cultural choices. So much of what we think of as the "right" way to parent is cultural, including remnants of religious belief even where day to day religious practice no longer exists. Example: On the whole, American parents tend to encourage independence; whereas parents in some other cultures encourage the welfare of the group. So I, as an American mom, probably spend some energy helping my child get his needs met, where a mother in another culture might spend that same energy convincing the child he doesn't have those needs because it's bad for the group. I'm not entirely sure why we don't include sex drive in those needs we're busy getting met, but I expect that is largely a religious remnant.
I don't censor what my kids read. I just don't. I don't go showing them porn, and if I found them reading/watching/writing/drawing erotica or porn or whatever, I'd be inclined to ask them what they thought, ask them if they found this interesting, generally work to make sure they understood about...hm. I know what I mean, but I'm saying it badly. I'd want to make sure they were clear on procreative, recreational, and inappropriate uses of their bodies. I'd want to make sure they understood that there's this whole range of things to do, and as long as everyone knows what they're doing and feels good about it, it's all fair, but if someone doesn't it's not all fair. Like, there are tons of vegetables, and some people like all of them, but many people don't like brussels sprouts, and a few really only will eat peas. Heh.
On the other hand, I do censor shoot-em-up videogames. I've told them I don't see the point of practicing feeling happy about seeing someone's head explode. Actually, this sort of does follow; I imagine the explodee rarely is a consenting party. They hate that rule, but tough shit.
As to Janet and her boob? First of all, my kids see more of me than that all the time. I can't manage to learn to wear a bathrobe. It kills me when extra kids spend the night and I have to remember to apply clothing before going across the hall to the bathroom because regardless of my standards for my kid, I don't get to impose them on someone else's. In any case, when Janet's boob appeared, happily even the always-embarrassed 11 year old was clear on this: that? is just skin. Sheesh. Moving on.
Sigh. Anyway, I mostly agree with what you say. And I don't know that I am the person who will know when my kid is "ready" for sex, no matter how hard I try. I don't know a single mother who can objectively look at her 14 or 16 or 20 year old baby and think, hey, I know, you're probably ready for fucking! I mean, be aware that they may be? Sure. Be willing to discuss it? Absolutely. But actually look at the kid and think they're ready? Not so much. If my kid comes to me at 14 and asks for condoms, I imagine I'll look at him like he's grown a third eye or something and think, christ! butbut, my BABY. But if he's asking, clearly he'll think he's ready to be, and my opinion, or forbidding, will be irrelevant to that. So, I'll make sure he's clear on keeping himself and any partners healthy. And I'll give him the advice my mother gave me: for heaven's sake, don't do it in the back seat of a car. It's uncomfortable. Go somewhere you have some room, and take your time.
I have to back your opinion 100% here. I think current societal idea that there's an 'age' where sex becomes an appropriate topic for children is asinine (there is more-or-less an appropriate age for engaging in sexual activity, but that's another issue altogether). Sex is one of the most basic human behaviors and it's downright stupid to paint it with the brush of indecency or amorality.
I don't believe that sex should be ever made out to seem dirty, wrong, or bad in any way. I also don't think that it's possible to shield your children from learning about sex; you'd have to lock them in their room 24-7 and homeschool them without television or outside intervention of any sort...and that would be plain creepy and damaging.
And the cut off age of 18? Downright insulting if you ask me. Why are we trying to hold young people back? Why does society seem to feel this push to extend childhood? It goes beyond sexual awareness; we've got college graduates who seem unable to cope. Less than 150 years ago girls were considered ready to MARRY a little beyond puberty, for goodness sake. While certainly NOT a solution for modern times, at least it was acknowledged that individuals were already sexual beings at that age.
And puberty ages are getting younger. Why are we culturally ignoring this?
~~~~Mom: It's when the boy's pee pee goes into the girl's pee pee.
You had more of a sex talk than I ever did. *snerk*
If--and this really is a big if--I ever adopt kids (in that situation, I plan to adopt a kid rather than have one myself), I plan to raise them in an atmosphere that is, basically, open. I want them to be able to talk to me about whatever: grades, teenage angst, puberty, porn, anything.
I love my mom, but really, I don't feel comfortable talking with her about some things. I remember a few years ago (I'm 14 now), when puberty first began to set in, I agonized for over a week before tentatively mumbling, "Mom I want to wear a bra okay thanks."
Seriously, I don't understand why sex regarded by today's society as "wrong" and "immoral." I mean, it's what got us here in the first place. Sure, you don't want your kid screwing around with their boyfriend/girlfriend after you give them a sex talk, but think of the alternative: some nasty STDs, teen pregnancies, even HIV.
And so, assuming I'm ever going to be raising children, I will definitely keep from censoring what they read. |
/*
* Copyright 2017 Google LLC
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions are
* met:
*
* * Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
* * Redistributions in binary form must reproduce the above
* copyright notice, this list of conditions and the following disclaimer
* in the documentation and/or other materials provided with the
* distribution.
* * Neither the name of Google LLC nor the names of its
* contributors may be used to endorse or promote products derived from
* this software without specific prior written permission.
*
* THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
* "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
* LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
* A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
* OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
* SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
* LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
* DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
* THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
* (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
* OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
package com.google.api.gax.rpc;
import com.google.api.gax.rpc.StatusCode.Code;
import com.google.api.gax.rpc.testing.FakeStatusCode;
import com.google.common.truth.Truth;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.JUnit4;
@RunWith(JUnit4.class)
public class ApiExceptionFactoryTest {
@Test
public void cancelled() {
Truth.assertThat(createException(Code.CANCELLED)).isInstanceOf(CancelledException.class);
Truth.assertThat(createExceptionWithMessage(Code.CANCELLED))
.isInstanceOf(CancelledException.class);
}
@Test
public void notFound() {
Truth.assertThat(createException(Code.NOT_FOUND)).isInstanceOf(NotFoundException.class);
Truth.assertThat(createExceptionWithMessage(Code.NOT_FOUND))
.isInstanceOf(NotFoundException.class);
}
@Test
public void unknown() {
Truth.assertThat(createException(Code.UNKNOWN)).isInstanceOf(UnknownException.class);
Truth.assertThat(createExceptionWithMessage(Code.UNKNOWN)).isInstanceOf(UnknownException.class);
}
@Test
public void invalidArgument() {
Truth.assertThat(createException(Code.INVALID_ARGUMENT))
.isInstanceOf(InvalidArgumentException.class);
Truth.assertThat(createExceptionWithMessage(Code.INVALID_ARGUMENT))
.isInstanceOf(InvalidArgumentException.class);
}
@Test
public void deadlineExceeded() {
Truth.assertThat(createException(Code.DEADLINE_EXCEEDED))
.isInstanceOf(DeadlineExceededException.class);
Truth.assertThat(createExceptionWithMessage(Code.DEADLINE_EXCEEDED))
.isInstanceOf(DeadlineExceededException.class);
}
@Test
public void alreadyExists() {
Truth.assertThat(createException(Code.ALREADY_EXISTS))
.isInstanceOf(AlreadyExistsException.class);
Truth.assertThat(createExceptionWithMessage(Code.ALREADY_EXISTS))
.isInstanceOf(AlreadyExistsException.class);
}
@Test
public void permissionDenied() {
Truth.assertThat(createException(Code.PERMISSION_DENIED))
.isInstanceOf(PermissionDeniedException.class);
Truth.assertThat(createExceptionWithMessage(Code.PERMISSION_DENIED))
.isInstanceOf(PermissionDeniedException.class);
}
@Test
public void resourceExhausted() {
Truth.assertThat(createException(Code.RESOURCE_EXHAUSTED))
.isInstanceOf(ResourceExhaustedException.class);
Truth.assertThat(createExceptionWithMessage(Code.RESOURCE_EXHAUSTED))
.isInstanceOf(ResourceExhaustedException.class);
}
@Test
public void failedPrecondition() {
Truth.assertThat(createException(Code.FAILED_PRECONDITION))
.isInstanceOf(FailedPreconditionException.class);
Truth.assertThat(createExceptionWithMessage(Code.FAILED_PRECONDITION))
.isInstanceOf(FailedPreconditionException.class);
}
@Test
public void aborted() {
Truth.assertThat(createException(Code.ABORTED)).isInstanceOf(AbortedException.class);
Truth.assertThat(createExceptionWithMessage(Code.ABORTED)).isInstanceOf(AbortedException.class);
}
@Test
public void outOfRange() {
Truth.assertThat(createException(Code.OUT_OF_RANGE)).isInstanceOf(OutOfRangeException.class);
Truth.assertThat(createExceptionWithMessage(Code.OUT_OF_RANGE))
.isInstanceOf(OutOfRangeException.class);
}
@Test
public void internal() {
Truth.assertThat(createException(Code.INTERNAL)).isInstanceOf(InternalException.class);
Truth.assertThat(createExceptionWithMessage(Code.INTERNAL))
.isInstanceOf(InternalException.class);
}
@Test
public void unavailable() {
Truth.assertThat(createException(Code.UNAVAILABLE)).isInstanceOf(UnavailableException.class);
Truth.assertThat(createExceptionWithMessage(Code.UNAVAILABLE))
.isInstanceOf(UnavailableException.class);
}
@Test
public void dataLoss() {
Truth.assertThat(createException(Code.DATA_LOSS)).isInstanceOf(DataLossException.class);
Truth.assertThat(createExceptionWithMessage(Code.DATA_LOSS))
.isInstanceOf(DataLossException.class);
}
@Test
public void unauthenticated() {
Truth.assertThat(createException(Code.UNAUTHENTICATED))
.isInstanceOf(UnauthenticatedException.class);
Truth.assertThat(createExceptionWithMessage(Code.UNAUTHENTICATED))
.isInstanceOf(UnauthenticatedException.class);
}
@Test
public void unimplemented() {
Truth.assertThat(createException(Code.UNIMPLEMENTED))
.isInstanceOf(UnimplementedException.class);
Truth.assertThat(createExceptionWithMessage(Code.UNIMPLEMENTED))
.isInstanceOf(UnimplementedException.class);
}
@Test
public void unknown_default() {
Truth.assertThat(createException(Code.OK)).isInstanceOf(UnknownException.class);
Truth.assertThat(createExceptionWithMessage(Code.OK)).isInstanceOf(UnknownException.class);
}
private ApiException createException(StatusCode.Code statusCode) {
return ApiExceptionFactory.createException(
new RuntimeException(), FakeStatusCode.of(statusCode), false);
}
private ApiException createExceptionWithMessage(StatusCode.Code statusCode) {
return ApiExceptionFactory.createException(
"message", new RuntimeException(), FakeStatusCode.of(statusCode), false);
}
}
|
Q:
Which math books would help in learning SLAM systems?
Recently I started studying papers on SLAM systems by Durrant-Whyte for my research and I'm finding some difficulties in the math (matrices and probability) that is tackled in these papers.
Which math books/topics would you recommend me to go over before continuing with other papers?
A:
Probability and statistics. Stochastic signal processing. Estimation and Detection theory (I highly recommend that you find a class that uses Harry Van Trees's book and that offers office hours, that you enroll, and study, and that you reserve lots of time in your schedule to take it -- if you can learn that stuff by reading the book you're somewhere in the 99.9th percentile).
"Optimal State Estimation" by Dan Simon is a really good Kalman filter book, but if you find yourself just reading the words and not getting the math, then you need to put it down and go study multivariate probability for a while.
Matrix math has been mentioned -- but you can know the matrix math up the wazoo, and all it does is make it quicker to formulate the problems. Without the knowledge of probability & statistics part, the matrix math will just help you screw up quicker, and with more elan.
|
[Mapping and analysis QTL controlling some morphological traits in Chinese cabbage (Brassica campestris L. ssp. pekinensis)].
An AFLP and RAPD genetic map with 352 markers and a RIL (recombinant inbred lines) population from the cross of two cultivated Chinese cabbage lines were employed in mapping and analyzing quantitative trait loci (QTL). The number, location, variation explained and additive effect of QTL underlying nine morphological traits were determined by using the composite interval mapping method. Fifty putative QTL, including five for plant growth habit, six for plant height, five for plant diameter, seven for leaf length, four for leaf width, six for leaf length/leaf width ratio, seven for petiole length, four for petiole width and six for bolting character, were mapped on 14 linkage groups. There were unequal gene effects and unequal variation explained in the expression of many morphological traits. These results are fundamental for molecular-assisted selection of morphological traits in Chinese cabbage breeding.
Hello, I just installed Webshop Plus! v.3.2.
It works perfectly, but when I go into the admin menu, open the tab "script settings" and change the labels/messages text, the problem is that when I look in the tab "orders" and open the latest order, some of the text in the order is gone.
I first made the changes in the tab "custom field settings" under customer page fields. I changed "firstname, First name" to "firstname, Voornaam", and after that I tested it with a test order from my website, filling in the field "Voornaam" with my name "Rob".
The receiver of the test mail and the administrator mail received the test mail correctly; First name was changed to Voornaam and I can see my name Rob.
But when I go to the administrator panel, log in, go to the tab "Orders" & View order, and take a look at the latest order, the field "First name" is gone and is not replaced by "Voornaam", and I cannot see my name "Rob". When I go to edit Order I can see that First name was changed to "Voornaam", but my name Rob is not showing up.
I'm sorry, I can't tell what the exact problem is.
It can be an issue with your server file system not allowing file writing, or something else. Or, simply, the orders list file was already created, so you can't change the titles anymore. This would also explain why values are not saved.
Please delete the file orders_list.php from your server cart/admin folder, make all necessary changes in field names, then make a test order and check.
Thanks, this problem is solved. I deleted "orders_list.php" and made a new order, and now everything is working.
I have another question: on the "checkout" page, when I choose "paypal" and hit the button "order now", I go to the "thankyou" page. Is there a possibility that when I choose "paypal" and hit the button "order now" it goes directly to the PayPal website?
Thanks, I understand that it isn't possible in this configuration.
But is it possible to change the "paypal" button to an "iDEAL" button? I tried this button and it works, but it does not update the cart total amount. What must I change to make it work? I am not familiar with HTML programming.
If it is not too much to ask, can you help me a bit to get this "iDEAL" button code working with Webshop Plus 3.
Toronto: In a bid to determine whether distant stars with planets orbiting them can harbour life, a global team of astronomers has discovered a new way to measure the pull of gravity at the surface of distant stars.
Knowing the surface gravity of a star is essentially knowing how much you would weigh on that star.
If stars had solid surfaces on which you could stand, then your weight would change from star to star.
The new method allows scientists to measure surface gravity with an accuracy of about four percent, for stars too distant and too faint to apply current techniques.
Since surface gravity depends on the star's mass and radius (just as your weight on Earth depends on its mass and radius), this technique will enable astronomers to better gauge the masses and sizes of distant stars.
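As a rough illustration (these numbers are ours, not from the study): surface gravity follows directly from Newtonian gravity, $g = GM/R^2$. For the Sun, with $M \approx 1.99 \times 10^{30}$ kg and $R \approx 6.96 \times 10^{8}$ m, this gives $g \approx 274$ m/s$^2$, about 28 times the pull at Earth's surface, which is why pinning down a star's surface gravity also pins down the combination of its mass and size.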
“If you don't know the star, you don't know the planet. The size of an exoplanet is measured relative to the size of its parent star,” said study co-author and professor Jaymie Matthews from University of British Columbia.
If you find a planet around a star that you think is Sun-like but is actually a giant, you may have fooled yourself into thinking you've found a habitable Earth-sized world.
“Our technique can tell you how big and bright is the star, and if a planet around it is the right size and temperature to have water oceans, and maybe life,” Matthews added.
The new technique called the “autocorrelation function timescale technique” or timescale technique for short, uses subtle variations in the brightness of distant stars recorded by satellites like Canada's MOST and NASA's Kepler missions.
Future space satellites will hunt for planets in the 'Goldilocks Zones' of their stars. Not too hot, not too cold, but just right for liquid water oceans and maybe life. Future exoplanet surveys will need the best possible information about the stars they search, if they're to correctly characterize any planets they find.
“The timescale technique is a simple but powerful tool that can be applied to the data from these searches to help understand the nature of stars like our Sun and to help find other planets like our Earth,” explained lead author Thomas Kallinger from University of Vienna.
It will play an exciting role in the study of planets beyond the Solar System, many so distant that even the basic properties of the stars they orbit can't be measured accurately.
The new method is described in a study published in the journal Science Advances. |
Red Army Strait
Red Army Strait (Proliv Krasnoy Army) is a strait in Severnaya Zemlya, Russia. It is named after the Red Army (Krasnaya Armiya).
Geography
The Red Army Strait is wide. It separates Komsomolets Island in the north from October Revolution Island in the south and connects the Kara Sea in the west with the Laptev Sea in the east. The Yuny Strait, separating Pioneer Island from Komsomolets Island, branches to the northwest in the eastern part of the strait.
The huge Academy of Sciences Glacier reaches the shore all along the northern side of the strait, while the smaller Rusanov Glacier flanks the eastern part of its southern shore. Cape October is located in the northern shore of October Revolution Island, facing the Red Army Strait. Visoky Island lies about east and Bolshoy Izvestnikovky Island lies about to the southwest of the cape.
References
Category:Straits of the Laptev Sea
Category:Straits of the Kara Sea
Category:Straits of Severnaya Zemlya |
Q:
Change text in textview inside ViewPager2 on button click from Fragment
I have a fragment containing viewPager2 and a few buttons... ViewPager2 contains a textView whose text I want to change on button click(present inside fragment not viewPager2).
Interface
public interface IUpdateCursor {
void cursorOn_nextWord();
void cursorOn_nextLine();
void cursorOn_nextTextBlock();
void cursorOn_previousWord();
void cursorOn_previousLine();
void cursorOn_previousTextBlock();
}
Fragment
public class Read_fragment extends Fragment implements View.OnClickListener {
private ImageButton imgbtn_back,imgbtn_left_top, imgbtn_right_top, imgbtn_left_down, imgbtn_right_down;
private Button btn_done, btn_center_top, btn_center_down;
private ViewPager2 viewPager;
com.example.vision.HighlightedTextView textView;
public Read_fragment() {
setRetainInstance(true);
}
@Override
public View onCreateView(LayoutInflater inflater, ViewGroup container,
Bundle savedInstanceState) {
// Inflate the layout for this fragment
View view = inflater.inflate(R.layout.fragment_read, container, false);
initViews(view);
return view;
}
@Override
public void onViewCreated(@NonNull View view, @Nullable Bundle savedInstanceState) {
super.onViewCreated(view, savedInstanceState);
viewPager.setAdapter(new ReadViewPagerAdapter(getContext()));
// displayOCRText(getAllImages(Application_TEMP_DIR,
// getFilenameFilter(new String[]{"jpeg"})));
}
private void initViews(@NonNull View view) {
imgbtn_back = view.findViewById(R.id.imgbtn_back);
imgbtn_left_top = view.findViewById(R.id.imgbtn_left_top);
imgbtn_right_top = view.findViewById(R.id.imgbtn_right_top);
imgbtn_left_down = view.findViewById(R.id.imgbtn_left_down);
imgbtn_right_down = view.findViewById(R.id.imgbtn_right_down);
btn_done = view.findViewById(R.id.btn_done);
btn_center_top = view.findViewById(R.id.btn_center_top);
btn_center_down = view.findViewById(R.id.btn_center_down);
viewPager = view.findViewById(R.id.viewPager);
textView = view.findViewById(R.id.textView5);
imgbtn_back.setOnClickListener(this);
imgbtn_left_top.setOnClickListener(this);
imgbtn_right_top.setOnClickListener(this);
imgbtn_left_down.setOnClickListener(this);
imgbtn_right_down.setOnClickListener(this);
btn_done.setOnClickListener(this);
btn_center_top.setOnClickListener(this);
btn_center_down.setOnClickListener(this);
}
@Override
public void onClick(View view) {
switch (view.getId()){
case R.id.imgbtn_back :{/*go back to preview without saving changes.*/navigateTo(R.id.action_read_fragment_to_preview_fragment);}break;
case R.id.imgbtn_left_top :{ /*previous word or alphabet.*/ imgbtn_left_top_CLICK(); }break;
case R.id.imgbtn_right_top :{ /*next word or alphabet.*/ imgbtn_right_top_CLICK(); }break;
case R.id.imgbtn_left_down :{ /*next sentence or paragraph*/ imgbtn_left_down_CLICK(view); }break;
case R.id.imgbtn_right_down :{ /*previous sentence or paragraph*/ imgbtn_right_down_CLICK(view); }break;
case R.id.btn_done :{ /*apply changes and go back to preview.*/ }break;
case R.id.btn_center_top :{ /*mark in on first click, mark out on second click, then show cut, copy paste options.*/ }break;
case R.id.btn_center_down :{ /*Read from cursor location to end of page.*/ }break;
}
}
private void navigateTo(@IdRes int actionResId){
final NavController navController = Navigation.findNavController(Objects.requireNonNull(getActivity()), R.id.container);
navController.navigate(actionResId);
}
private void imgbtn_left_top_CLICK(){
// UpdateCursor.setUpdateCursor();
// iUpdateCursor.updateCursor(false,true);
IUpdateCursor iUpdateCursor = (IUpdateCursor) this.getContext();
iUpdateCursor.cursorOn_previousWord();
}
private void imgbtn_right_top_CLICK(){
// this.iUpdateCursor = (IUpdateCursor) viewPager.getRootView();
// iUpdateCursor.updateCursor(true,true);
IUpdateCursor iUpdateCursor = (IUpdateCursor) this.getContext();
iUpdateCursor.cursorOn_nextWord();
}
private void imgbtn_left_down_CLICK(View view){
// this.iUpdateCursor = (IUpdateCursor) view;
// iUpdateCursor.updateCursor(false,false);
IUpdateCursor iUpdateCursor = (IUpdateCursor) this;
iUpdateCursor.cursorOn_previousLine();
}
private void imgbtn_right_down_CLICK(View view){
// this.iUpdateCursor = (IUpdateCursor) view;
// iUpdateCursor.updateCursor(true,false);
IUpdateCursor iUpdateCursor = (IUpdateCursor) this;
iUpdateCursor.cursorOn_nextLine();
}
}
ViewPager Adapter
public class ReadViewPagerAdapter extends RecyclerView.Adapter<ReadViewPagerAdapter.ViewHolder> {
private LayoutInflater mInflater;
private Vector<Bitmap> images;
public ReadViewPagerAdapter(Context context) {
this.mInflater = LayoutInflater.from(context);
this.images = new Vector<>(5);
this.images = getAllImages(Application_TEMP_DIR,
getFilenameFilter(new String[]{"jpeg"}));
}
@NonNull
@Override
public ReadViewPagerAdapter.ViewHolder onCreateViewHolder(@NonNull ViewGroup parent, int viewType) {
View view = mInflater.inflate(R.layout.list_item_view_pager_read, parent, false);
return new ReadViewPagerAdapter.ViewHolder(view);
}
@Override
public void onBindViewHolder(@NonNull ReadViewPagerAdapter.ViewHolder myHolder, int position) {
//TODO: set image in imageView.
Bitmap bitmap = images.get(position);
new OCR(myHolder).displayOCRText(bitmap);
}
@Override
public int getItemCount() {
return images.size();
}
class ViewHolder extends RecyclerView.ViewHolder implements OCRcallbacks, IUpdateCursor{
com.example.vision.HighlightedTextView textView;
FirebaseVisionText OCRText;
ViewHolder(View itemView) {
super(itemView);
textView = itemView.findViewById(R.id.textView5);
}
@Override
public void setText(FirebaseVisionText texts) {
this.OCRText = texts;
this.textView.setText(this.OCRText.getText());
this.textView.setVisionText(texts);
}
@Override
public void cursorOn_nextWord() {
this.textView.cursorOn_nextWord();
}
@Override
public void cursorOn_nextLine() {
this.textView.cursorOn_nextLine();
}
@Override
public void cursorOn_nextTextBlock() {
this.textView.cursorOn_nextTextBlock();
}
@Override
public void cursorOn_previousWord() {
this.textView.cursorOn_previousWord();
}
@Override
public void cursorOn_previousLine() {
this.textView.cursorOn_previousLine();
}
@Override
public void cursorOn_previousTextBlock() {
this.textView.cursorOn_previousTextBlock();
}
}
}
Kindly help me. And if possible, please give me some insight into implementing interfaces as listeners and callbacks, and how an interface can act as a reference.
A:
I had a similar issue.
I solved it by keeping a parentView reference in my adapter: in the adapter constructor I added a parameter named "parentView" (type: ViewGroup) which sets it.
So each time I need access to a parent button / another view I use the following (you can also add a click listener; I added an example below):
parentView.findViewById(R.id.Parent_View_Name);
in my adapter.
This is how it should work in your case:
ADAPTER
Step 1
private ViewGroup parentView; // add a new field
// change/add a new constructor
public ReadViewPagerAdapter(Context context, ViewGroup parent) {
    this.mInflater = LayoutInflater.from(context);
    //******
    this.parentView = parent;
    //******
    this.images = new Vector<>(5);
    this.images = getAllImages(Application_TEMP_DIR,
            getFilenameFilter(new String[]{"jpeg"}));
}
Step 2
Whenever you would like to find your button, just use "parentView.findViewById":
((ImageButton) parentView.findViewById(R.id.fragment_button)).setOnClickListener(v1 -> {
    // put here whatever you want to happen when the click happens
});
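For reference, here is a minimal sketch of how your fragment could pass the parent view into the new constructor; the ids and the ViewPager2 field are only placeholders for whatever you actually have in your layout:

// e.g. in your fragment's onViewCreated(View view, Bundle savedInstanceState)
ViewGroup root = (ViewGroup) view.findViewById(R.id.your_fragment_root); // placeholder id
ReadViewPagerAdapter adapter = new ReadViewPagerAdapter(getContext(), root);
viewPager2.setAdapter(adapter); // your ViewPager2 field (the adapter extends RecyclerView.Adapter)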
I hope this helps :) let me know
The patent documents describe a simple workflow for delivery via drone parachute. The images show the steps, which include "Pack item, attach parachute shipping label," "Attach to UAV" and then "Drop at delivery location." The patent's abstract says that the system can include a self-adhesive backing, a "plurality of parachute cords" and a breakaway cover to land a package at a delivery spot without damaging it. The label may also include graphics and text, such as an address, along with velocity and spin information for the package. The system may also include a harness to prevent cord tangling from any spinning during an item's descent.
Whether the parachute shipping label will ever grace a box full of gear from Amazon is anyone's guess, but this is certainly an interesting way to slow packages before they hit the ground. The company wants to keep its current delivery workflow process in place, writing in the patent that it wants any new aerial delivery system to be "preferably compact, self-contained and relatively inexpensive." That bodes well for those of us who can't wait to get our sundries from the sky.
1. Introduction {#sec1-sensors-16-00992}
===============
Over the last few years, studies have exposed the limitations of traditional wireless sensor networks (WSNs) \[[@B1-sensors-16-00992],[@B2-sensors-16-00992],[@B3-sensors-16-00992]\], including sensor system management and the sensing data usage model. Recently, the sensor-cloud architecture has been proposed and has been receiving great interest \[[@B1-sensors-16-00992],[@B4-sensors-16-00992],[@B5-sensors-16-00992],[@B6-sensors-16-00992]\] among researchers in the field of WSNs, as well as cloud computing.
The integration between WSNs and the cloud is motivated by taking advantage of the powerful processing and storage abilities of cloud computing for sensing data. By enabling such an integration, sensor-cloud sensing-as-a-service (SSaaS) can provide sensing data to multiple applications at the same time, instead of the current sensing data usage model for a dedicated application. The sensor-cloud can help improve the utilization of sensor resources, as well as sensor management and functions as an interface between physical sensor networks and the cyber world \[[@B1-sensors-16-00992],[@B7-sensors-16-00992]\]. For those reasons, the sensor-cloud is being viewed as a potential substitute of traditional WSNs \[[@B1-sensors-16-00992],[@B7-sensors-16-00992]\].
In the sensor-cloud architecture \[[@B1-sensors-16-00992],[@B4-sensors-16-00992],[@B5-sensors-16-00992]\], the application model is changed as follows: (1) physical sensors perform sensing and forward sensing data to the sensor-cloud; (2) the sensor-cloud virtualizes sensor nodes as virtual sensors and provides sensing-as-a-service to users and applications; and (3) applications/users buy sensing services on demand from the sensor-cloud.
One of the main targets of the sensor-cloud is to enable a single physical sensor network to provide sensing services to multiple applications at the same time and to enable users/applications to request sensing services on demand based on their needs (i.e., by allowing applications to specify their region of interest and sensing frequency \[[@B1-sensors-16-00992],[@B4-sensors-16-00992],[@B5-sensors-16-00992]\]). Although a number of initial studies have been conducted, there is still no specific and efficient scheme for processing the down-stream traffic of application requests from cloud-to-sensors (C2S), in other words, for how the sensor-cloud should process application requests and interact with physical sensors to efficiently support multiple applications on demand, toward the above targets.
Without optimizing application requests as in the current sensor-cloud, only applications having the same requirements (i.e., sensing interval, latency) can reuse the sensing data of each other. Therefore, any dedicated application request with a new requirement will be sent to physical nodes. To support multiple applications with different requirements, a physical sensor may have to run multiple schedules (i.e., multiple dedicated sensing intervals) for different applications. In several existing studies, dedicated sensing requests \[[@B8-sensors-16-00992],[@B9-sensors-16-00992]\] are used. This approach is obviously inefficient for constrained resource devices like sensors, as they have to process a great number of requests from different applications with possibly high data redundancy. Some other sensor-cloud designs provide only limited sensing services within a fixed sensing rate \[[@B7-sensors-16-00992],[@B10-sensors-16-00992]\], which may not satisfy all applications.
Allowing on-demand sensing services on the sensor-cloud poses enormous technical challenges. For example, on-demand sensing services may lead to dynamic operations at physical sensor nodes. In particular, when a new sensing request with a new sensing requirement is sent to physical WSNs, physical nodes have to add or update their sensing tasks to meet the requirements of all applications. On-demand sensing requests from applications may overuse physical sensor networks if request optimization is not considered properly, thus incurring high operational cost for sensors' owners. In addition, how physical sensors can perform sensing tasks efficiently to satisfy all applications at the same time is also an open question. For those reasons, an efficient interactive model requires the involvement of both the sensor side and the cloud side.
To address the above challenges, we identify the following requirements for the integration between WSNs and the sensor-cloud: (1) application requests' optimization at the sensor-cloud is required to minimize the number of requests sent to physical WSNs; (2) physical sensors should have the ability to cope with dynamic conditions and should interact with the sensor-cloud to automatically adapt (i.e., its scheduling, etc.) to network condition changes to optimize their operations.
In this paper, we first propose an efficient interactive model for applications, the sensor-cloud and physical sensor networks to support on-demand periodic sensing services for multiple applications. In the model, the sensor-cloud processes applications' requests and performs request aggregation to minimize the number of requests sent to physical sensors. Upon receiving requests, physical sensors update their sensing service to meet the requirements of all applications and automatically adapt their operations upon the changes to minimize energy consumption.
For illustration, we describe the interactive model using sensing requests with different sensing interval requirements as an example, but the model can be generalized for any application requirement, such as packet latency, reliability, etc. In particular, for sensing requests with different sensing interval requirements, the sensor-cloud aggregates all applications' requests and selects an optimal consolidated sensing interval for a set of sensors, which minimizes the number of sensing and the number of data packet transmissions of physical sensor nodes while satisfying all applications' sensing interval requirements.
Whenever a new consolidated sensing interval is found by the aggregator on the sensor-cloud, the aggregator sends a sensing update request to a physical sensor manager, which will forward the request to corresponding physical sensors. The physical sensors receive the request and update their sensing interval accordingly to meet the requirements of all applications. Because the sensing update request implicitly notifies about changes in the network traffic condition, the physical sensor nodes automatically adapt their scheduling parameters to optimize energy consumption. Requests from new applications, which the aggregator determines that the current consolidated sensing interval still satisfies, are hidden from physical sensors to save energy. As a result, data reusability is transparent to the sensor-cloud users, and sensing redundancy is reduced.
We conduct extensive analysis and experiments to evaluate the performance of the proposed model in terms of the energy consumption of physical sensors, the bandwidth consumption from the sink node to the sensor-cloud, the packet delivery latency, reliability and scalability. We compare the performance of the proposed system with (1) a dedicated application request model with a traditional in-network aggregation mechanism \[[@B11-sensors-16-00992]\] and (2) a dedicated application request model with a multi-task optimization scheme \[[@B12-sensors-16-00992]\] . Results show that the proposed on-cloud request aggregation is significantly more efficient than the traditional in-network aggregation and multi-task optimization, so it can be a promising approach to complement the in-network aggregation to save network resources. The proposed interactive model helps reduce the cost for both sensor owners (i.e., energy consumption of physical sensors) and sensor-cloud providers (i.e., bandwidth consumption). In addition, by achieving a high scalability, the model enables a single physical sensor network to efficiently provide sensing services to a great number of applications with low cost, which benefits cloud providers and sensor owners. As a result, the model potentially reduces the price of sensing services per application, which benefits application owners and users, thus enabling a win-win model in the sensor-cloud.
In summary, this paper makes the following contributions.
- We propose an efficient interactive model for the sensor-cloud, which enables the sensor-cloud to provide on-demand sensing services to multiple applications at the same time.
- We design an efficient request aggregation scheme on the sensor-cloud to minimize the number of requests sent to physical sensor nodes and an efficient request-based adaptive low power listening protocol for physical sensor nodes to optimize sensors' energy consumption.
- Through our comprehensive experimental studies, we show that the proposed system achieves a significant improvement in terms of the energy consumption of sensor nodes, the bandwidth consumption of sensing traffic, the packet delivery latency, reliability and scalability, compared to the state-of-the-art approaches.
The rest of this paper is organized as follows. [Section 2](#sec2-sensors-16-00992){ref-type="sec"} discusses related work. [Section 3](#sec3-sensors-16-00992){ref-type="sec"} presents the proposed interactive model. [Section 4](#sec4-sensors-16-00992){ref-type="sec"} describes the request-based adaptive protocol. [Section 5](#sec5-sensors-16-00992){ref-type="sec"} gives details about the system implementation and shows the evaluation results. Finally, [Section 6](#sec6-sensors-16-00992){ref-type="sec"} discusses the limitations of the current work and possible extensions for future work, and concludes the paper.
2. Related Work {#sec2-sensors-16-00992}
===============
Recently, the sensor-cloud has been proposed as a promising architecture, which has been receiving great interest \[[@B1-sensors-16-00992],[@B5-sensors-16-00992],[@B6-sensors-16-00992]\] among researchers. Although there are several main architecture designs of the sensor-cloud that have been proposed \[[@B1-sensors-16-00992],[@B5-sensors-16-00992],[@B6-sensors-16-00992]\], their basic design is quite similar. In particular, physical sensors perform sensing and forward sensing data to the sensor-cloud. The sensor-cloud provides sensing services to multiple users/applications \[[@B13-sensors-16-00992]\] through virtual sensors. The sensor-cloud virtualizes physical sensor nodes into virtual sensors. A virtual sensor is an emulation of a physical sensor on the sensor-cloud. The sensor-cloud uses virtual sensors to provide sensing services to users/applications with a customized view. Users/applications buy sensing services on demand from the sensor-cloud. Virtual sensors obtain data from underlying physical sensors and contain metadata about the corresponding physical sensors, as well as applications currently holding those virtual sensors.
A number of initial research works have been conducted toward a more detailed design for the sensor-cloud using different approaches. We categorize related works based on their approach.
The sensor-cloud obviously provides many opportunities to develop sensing services \[[@B14-sensors-16-00992],[@B15-sensors-16-00992],[@B16-sensors-16-00992],[@B17-sensors-16-00992]\]. In \[[@B14-sensors-16-00992]\], Dinh and Younghan Kim propose to exploit the sensor-cloud for smart cities. In particular, a location-based sensor-cloud model is designed to support government officers on managing parking violation efficiently. In \[[@B15-sensors-16-00992]\], Giovanni et al. also propose a framework, namely Stack4Things, for smart city applications. However, in Stack4Things, a device-oriented approach is used with fog computing, instead of the location-centric approach as used in \[[@B14-sensors-16-00992]\]. Giancarlo et al. \[[@B16-sensors-16-00992],[@B17-sensors-16-00992]\] propose to integrate the cloud computing platform with a body sensor network (BSN), where a multi-tier application-level architecture is used to allow a rapid development of BSN applications. In fact, there are still many challenges \[[@B1-sensors-16-00992],[@B5-sensors-16-00992],[@B18-sensors-16-00992]\] that need to be investigated for an efficient integration between WSNs and the cloud. For different application fields, such as healthcare \[[@B19-sensors-16-00992]\], some unique requirements and challenges may exist. Enabling efficient on-demand sensing-as-a-service is one of the main challenges that need to be taken into account for WSNs and cloud integration.
Data processing is an important aspect of the sensor-cloud. In \[[@B20-sensors-16-00992],[@B21-sensors-16-00992]\], the authors investigate on upstream data processing optimization \[[@B22-sensors-16-00992],[@B23-sensors-16-00992]\] in the sensor-cloud using different techniques, including compression, filtering, encryption, decryption, etc. Jun et al. \[[@B24-sensors-16-00992]\] propose a queuing model and an efficient congestion control, namely random early detection-based (RED-based), mechanism for upstream sensing data to improve data transmission from sensors to the cloud. In \[[@B25-sensors-16-00992]\], Samer et al. propose a data prediction model to improve data transmission and data processing in the sensor-cloud. In particular, the model is built within sensors and run by the sensor-cloud to generate data so that a large amount of data transmission at the sensor nodes is reduced to save energy. Barbaran et al. \[[@B26-sensors-16-00992]\] use virtual channels to simplify the integration of WSNs in the cloud. Virtual channels are used to exchange messages between every single device and the cloud to achieve highly reconfigurable and self-managed features.
In the sensor-cloud, traditional data collection schemes can also be extended. In particular, a sensor node may have multiple options for gateways to forward its data toward the cloud instead of a dedicated sink as in traditional WSNs. Chatterjee et al. \[[@B7-sensors-16-00992]\] and Misra \[[@B9-sensors-16-00992]\] improve the integration between the sensor-cloud and physical nodes by proposing schemes to select optimal gateways for sensors to forward sensing data to the cloud. The studies demonstrate that selecting good gateways can significantly reduce sensing data forwarding overhead toward the sensor-cloud.
As sensors are normally deployed with a high density, how the sensor-cloud selects a number of physical sensor nodes based on virtual sensors to execute a task is an interesting topic. The QoS-aware sensor allocation mechanism \[[@B27-sensors-16-00992]\] enables the sensor-cloud to allocate an optimal set of sensors for a particular task with the awareness of quality of service. Having a similar target, however, Sen et al. \[[@B28-sensors-16-00992]\] implement a collaborative platform for sensor allocation based on the sensors' coverage. Zhu et al. \[[@B29-sensors-16-00992]\] believe that some nodes may perform a task better than others and propose to use trust for sensors and data center selection. In particular, sensors and data centers with a high trust value are selected to guarantee quality of services.
In \[[@B30-sensors-16-00992],[@B31-sensors-16-00992],[@B32-sensors-16-00992]\], various caching techniques are proposed to conserve the network resources of sensor nodes when they are integrated with the sensor-cloud. The caching mechanisms are normally supported by the cloud and deployed at gateways. In the studies, the caching mechanisms are designed in a flexible way for various rates of changes of the physical environments. By interacting with sensor nodes, the cloud can be used to estimate parameters for sensor nodes \[[@B33-sensors-16-00992]\] to improve their performance.
Under an assumption that a number of distributed data centers can be built to provide sensing services, Chatterjee et al. \[[@B34-sensors-16-00992]\] optimize sensing service transmission and sensing management by de-compositing sensing data to the closest cloud data center and scheduling a particular data center to congregate data from virtual sensors. In \[[@B8-sensors-16-00992],[@B35-sensors-16-00992]\], the authors use a reversed approach where the cloud is used as a controller to control the sleep schedule of sensors based on the location of users.
A sensor-cloud is actually a cloud of virtual sensors, which are mapped to physical nodes, so an efficient management scheme for virtual sensors is required. Ojha et al. \[[@B35-sensors-16-00992],[@B36-sensors-16-00992]\] propose an efficient virtualization scheme for physical sensor nodes and seek an optimal location-based composition of virtual sensors. The scheme consists of two parts: (1) composition of sensors within the same geographic region (CoV-I) and (2) composition spanning across multiple regions (CoV-II).
In \[[@B37-sensors-16-00992],[@B38-sensors-16-00992],[@B39-sensors-16-00992],[@B40-sensors-16-00992],[@B41-sensors-16-00992],[@B42-sensors-16-00992],[@B43-sensors-16-00992]\], different pricing models and usage models for the sensor-cloud are proposed. All pricing models have a similar approach, which is that the price of a sensing service is normally proportional to the sensing quality (i.e., sensing frequency or interval between two consecutive sensings). The studies show that the sensor-cloud approach has the potential for various types of applications from healthcare to smart cities.
Although one of the main targets of the sensor-cloud is to enable SSaaS on demand, there is still a lack of a specific and efficient interactive model toward the above targets. This paper investigates an efficient interactive model between the sensor-cloud and sensor nodes to fill the gap.
3. The Proposed Interactive Model {#sec3-sensors-16-00992}
=================================
In this section, we present our proposed interactive model, which efficiently supports on-demand sensing services to multiple applications at the same time. The model enables the sensor-cloud to provide on-demand sensing services where applications can send sensing requests with their own choice of sensing parameters based on the applications' needs and prices. The sensor-cloud processes applications' requests and performs request aggregation to minimize the number of requests sent to physical sensors while ensuring that their sensing services meet the requirements of all applications. To make the model easy to understand, we present the model using sensing requests with different sensing interval requirements as an example. However, the model can be generalized for any application requirement, such as packet latency, reliability, etc. A list of symbols used in the model is given in [Table 1](#sensors-16-00992-t001){ref-type="table"}.
3.1. Sensor-Cloud Modeling {#sec3dot1-sensors-16-00992}
--------------------------
We first present the sensor-cloud model for physical WSNs and cloud integration. The basic architecture of the sensor-cloud \[[@B1-sensors-16-00992],[@B4-sensors-16-00992],[@B5-sensors-16-00992]\] is illustrated in [Figure 1](#sensors-16-00992-f001){ref-type="fig"} and is modeled as follows.
**Physical Wireless Sensor Networks.** A wireless sensor network consists of physical sensor nodes. Each sensor node is characterized by the following properties: ID, type $i_{\tau}$, state *ς*, sensing interval $I_{se}$, ownership *O* and a set of scheduling parameters *S*.
*Each physical sensor node i is associated with a sensor type* $i_{\tau}$*, with* $i_{\tau} \in \tau = {\{\tau_{1},\tau_{2},...,\tau_{N}\}}$*, where τ is a set of N registered sensor types of the sensor-cloud.*
*During the lifetime, a sensor node may be in the active state (denoted by one) or inactive (denoted by zero). The state of a node i is denoted by* $i_{\varsigma}$.
*Each sensor node belongs to an owner θ who contributes sensing services to the sensor-cloud. Note that there may be a set of multiple WSN owners* Θ *within a sensor-cloud.*
*Each sensor node i performs sensing in every interval of* $i_{I_{se}}$ *and transmits sensing data to the cloud. Note that in conventional approaches, a sensor may perform sensing at a fixed rate or at multiple rates for different applications. In our interactive model, the sensors' sensing rate is determined by the cloud and dynamically changed upon requests from the cloud.*
A physical sensor node operates with a set of scheduling parameters S, which determine how long a node should sleep and how long it should remain awake every cycle. In our interactive model, based on the interactions with the sensor-cloud, physical sensor nodes optimize S to minimize energy consumption while satisfying all applications.
By our definition, a physical sensor *i* is modeled as follows.
$i = {(i_{ID},i_{\tau},i_{\varsigma},i_{\theta},i_{I_{se}},i_{S})},i_{\tau} \in \tau,i_{\theta} \in \Theta$.
**Cloud C.** The sensor-cloud virtualizes physical sensors, maps them into virtual sensors and provides sensing-as-a-service to users/applications \[[@B1-sensors-16-00992]\]. In other words, the sensor-cloud is composed of virtual sensors built on top of physical sensors. A cloud *c* is characterized by the following properties: ID, resources, QoS and price options. The cloud *c* may provide sensing services for a set of *τ* sensor types from Θ WSN owners. Based on a pricing model for the sensor-cloud \[[@B4-sensors-16-00992]\], the price of a sensing service is normally proportional to the sensing quality (i.e., sensing frequency or the interval between two consecutive sensing). For example, a user has to pay a higher price if he or she requests a higher sensing frequency (i.e., shorter sensing interval request). The reason is that for a higher sensing frequency, more resources are required in the physical sensor networks and cloud infrastructure. Note that this paper does not consider a selective model for clouds, so we do not present the properties of a cloud in detail.
A virtual sensor is an emulation of a physical sensor and provides a customized view to users for sensing data distribution transparently \[[@B1-sensors-16-00992]\]. In fact, virtual sensors are implemented as software images of the corresponding physical sensors on the cloud. Virtual sensors contain metadata about the corresponding physical sensors for mapping purposes and applications holding the virtual sensors \[[@B1-sensors-16-00992]\].
**Application.** An application *α* is characterized by the following properties: ID, a set of sensor data types of interest, region of interest and QoS requirements (i.e., sensing interval).
*An application α may be interested in a set of sensor data types* $\alpha_{SI}$ *for its operations. As the target of the sensor-cloud is to enable applications to be transparent regarding the types of sensors used \[[@B1-sensors-16-00992],[@B4-sensors-16-00992],[@B5-sensors-16-00992]\], we define only a set of sensor data types of interest for applications. We later provide a function to map* $\alpha_{SI}$ *of an application α to a set of sensor types τ.*
*An application α is normally deployed to work in a limited region, called a region of interest* $\alpha_{RI} = {({L_{1},L_{2},L_{3},L_{4}})}$*. The region of interest consists of the location of four points that bound the region. The sensor-cloud should manage the locations of physical sensor nodes and map them to virtual sensors on the cloud and the regions of interest of applications \[[@B1-sensors-16-00992]\].*
*Each application α may request different QoS requirements* $\alpha_{QoS}$ *for sensing data, such as delay or sensing frequency, namely dedicated sensing requests. The sensor-cloud is designed to provide sensing services to multiple applications, instead of a dedicated application, as in the traditional WSNs. In this work, we use sensing frequency (i.e., sensing interval) requirements to illustrate the model. A sensing interval requested by a particular application is called a dedicated sensing interval.*
An application *α* is modeled as follows.
$\alpha = (\alpha_{ID},\alpha_{SI},\alpha_{RI},\alpha_{QoS})$
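Purely for illustration (this is our own reading of the two tuples above, not part of the formal model), the physical sensor and application records could be represented as plain data classes; all field names below are assumptions:

import java.util.Map;
import java.util.Set;

// Illustrative records mirroring the definitions of i and alpha above; field names are our own.
final class PhysicalSensor {
    String id;                             // i_ID
    String type;                           // i_tau, one of the N registered sensor types
    boolean active;                        // i_varsigma: 1 = active, 0 = inactive
    String ownerId;                        // i_theta, the WSN owner
    double sensingIntervalSec;             // i_Ise, updated via sensing update requests
    Map<String, Double> schedulingParams;  // S, e.g. sleep interval, wakeup period
}

final class Application {
    String id;                             // alpha_ID
    Set<String> sensorDataTypesOfInterest; // alpha_SI
    double[][] regionOfInterest;           // alpha_RI: locations L1..L4 bounding the region
    double requestedSensingIntervalSec;    // alpha_QoS: the dedicated sensing interval
}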
*A dedicated sensing interval* $I_{se}$ *is the sensing frequency requested by a specific application.*
How the sensor-cloud can efficiently handle dedicated sensing interval requests from different applications and how physical sensors can schedule their operations efficiently while satisfying all applications' requests on demand are critical questions that need to be solved. In the next section, we describe an efficient interactive model in detail to address those questions.
3.2. An Efficient Interactive Model for the Sensor-Cloud (C2S) {#sec3dot2-sensors-16-00992}
--------------------------------------------------------------
[Figure 2](#sensors-16-00992-f002){ref-type="fig"} shows our proposed interactive model for the sensor-cloud. In this version, the model focuses on the down-stream traffic of application requests from cloud-to-sensors (C2S). The up-stream traffic of sensing data packets from sensors-to-cloud (S2C) will be investigated in our future work.
In the C2S model, the sensor-cloud plays the role of middleware between applications and physical sensors. In the WSN deployment phase, deployed physical sensors are registered with the sensor-cloud. The physical sensors are then mapped into virtual sensors on the sensor-cloud \[[@B35-sensors-16-00992]\]. Virtual sensors are managed by the virtual sensor manager (VSM), as shown in [Figure 2](#sensors-16-00992-f002){ref-type="fig"}.
We model a function that maps a physical sensor or a set of physical sensors *ζ* to a virtual sensor or a set of virtual sensors *γ* as follows. $$f_{phy - > vir}{(\zeta)} = \gamma$$
Note that a mapping mechanism \[[@B36-sensors-16-00992]\] is out of the scope of this paper.
Sensing data collected from physical sensors are stored at virtual sensors and distributed by the sensor-cloud. The sensor-cloud then provides sensing-as-a-service (SSaaS) to different applications based on virtual sensors \[[@B1-sensors-16-00992],[@B4-sensors-16-00992],[@B5-sensors-16-00992]\].
This paper investigates the processes of the sensor-cloud when it receives application requests and how it interacts with physical nodes. A virtual sensor and its corresponding physical sensor may serve multiple applications at the same time. Therefore, we propose an efficient interactive model to minimize the number of application requests sent to physical nodes while the requirements of all applications are satisfied.
The interactive model is described as follows.
1\. A buyer (i.e., application owner) buys a sensing service of the sensor-cloud for a new application *α*. The application sends a request to the SSaaS of the sensor-cloud for a sensing service. Based on the application's demand and budget \[[@B4-sensors-16-00992]\], the application specifies the following parameters: (1) a set of sensor data types of interest $\alpha_{SI}$; (2) a region of interest $\alpha_{RI}$; and (3) QoS requirements, here the requested sensing interval $I_{se}^{\alpha}$. A request including those parameters is sent to the SSaaS.
2\. First, the SSaaS needs to map the sensor data types of interest (SI) of the application $\alpha_{SI}$ to a set of actual sensor types (ST) $\tau_{\alpha}^{*} \subset \tau$. The mapping function is modeled as follows. $$f_{SI - > ST}{(\alpha_{SI})} = \tau_{\alpha}^{*} = {(\tau_{j}:\tau_{j} \in \tau)}$$
Based on the application's actual sensor types of interest and region of interest, the sensor-cloud requests the VSM to allocate a set of virtual sensors $\gamma_{\alpha}^{*}$ to provide sensing services for the application. The allocation function is modeled as follows. $$f_{viralloc}{(\alpha_{RI},\tau_{\alpha}^{*})} = \gamma_{\alpha}^{*} = {(\gamma_{j}:\gamma_{j - > type} \in \tau_{\alpha}^{*})}~and~\gamma_{j - > location} \in \alpha_{RI}$$
3\. The request is then forwarded to the request aggregator for optimization.
4\. The aggregator processes the request and determines whether or not updates in corresponding physical sensors are required (i.e., changing the sensing rate at physical nodes) to satisfy the requirements of all applications, including the new one. The aggregator determines that an update is required or not based on the requests of applications and the current configuration information of the virtual sensors managed by the VSM. If an update is required, the aggregator determines a new consolidated sensing interval $I_{se}^{c - new}$ for the corresponding physical nodes.
5\. If an update is required, the aggregator sends a sensing update request containing the new consolidated sensing interval $I_{se}^{c - new}$ (i.e., *sensing\_update\_request*($I_{se}^{c - new}$)) to the physical sensor manager (PSM).
6\. The PSM reversely maps the set of virtual sensors $\gamma_{\alpha}^{*}$ to a set of corresponding physical sensors *ζ*. We model the reverse mapping function as follows. $$f_{vir - > phy}{(\gamma)} = \zeta = f_{phy - > vir}^{- 1}{(\gamma)}$$
7\. The PSM then forwards the sensing update request to the corresponding physical sensor nodes. The detailed design of the protocol for PSM sending requests to physical sensors will be studied in future work.
8\. Upon receiving the sensing update request, the physical nodes update their sensing interval to meet the requirements of the new application and all existing applications. The physical nodes then optimize their scheduling parameters to minimize energy consumption.
An aggregation mechanism for the aggregator to aggregate application requests is proposed in the next section.
3.3. Application Request Aggregation Scheme {#sec3dot3-sensors-16-00992}
-------------------------------------------
Without optimizing application requests as in the current sensor-cloud, only applications with the same requirements (i.e., the same sensing interval) can share sensing data. As a result, any dedicated application request with a different requirement will be forwarded to sensor nodes. To provide SSaaS to multiple applications with different requirements simultaneously, a sensor node may have to operate with multiple schedules. This results in high resource consumption at the physical sensor and high bandwidth consumption, as well as storage usage, at the sensor-cloud, while the sensing data may be highly redundant. As resource-constrained sensor nodes are required to run multiple tasks at the same time, the scalability of the system is limited. For that reason, the sensing service cost per application may not be competitive.
In practice, applications with different sensing interval options can still share the same sensing dataset in many cases as long as the sensing frequency (i.e., sensing interval) of the dataset satisfies all applications' requirements. For example: (1) applications with a packet delivery reliability requirement of $90\%$ normally accept sensing data with a higher reliability of $98\%$; (2) applications normally accept more frequent sensing (i.e., sensing interval of 5 s) than their requests (i.e., sensing interval of 10 s); more frequent sensing is only avoided for efficiency reasons \[[@B44-sensors-16-00992]\]. For that reason, there may exist a single consolidated sensing parameter for sensor nodes to satisfy all applications' requirements, thus enabling constrained sensor nodes to run a single sensing task while serving multiple applications at the same time.
The request aggregator proposed in the interactive model has a role to determine whether a sensing parameter satisfies a set of applications or not. If a single sensing parameter satisfies a set of applications, the sensor-cloud can use the same sensing dataset with that sensing parameter to distribute to all applications appropriately (i.e., all applications may receive the same set of sensing data or some applications may only need to receive a part of the data set, depending on the needs of the applications and the distribution policies of the sensor-cloud).
### 3.3.1. The Request Aggregator {#sec3dot3dot1-sensors-16-00992}
When the aggregator receives a new request from a new application for the SSaaS for a set of virtual sensors, it first queries the information of the virtual sensors from the VSM and classifies the virtual sensors used by a set of applications, including the new application.
The virtual sensor manager manages all information of the virtual sensors, including metadata, applications that are using the sensing services provided by the virtual sensors and their current consolidated sensing interval $I_{se}^{c}$.
We assume a set of virtual sensors *γ* that are used by a set of applications *A*, including the new application. Their current consolidated sensing interval is $I_{se}^{c}$. Each application $\alpha \in A$ requests a dedicated sensing interval of $I_{se}^{\alpha}$. The new application $\alpha_{new}$ requests a dedicated sensing interval $I_{se}^{\alpha - new}$. The aggregator now aggregates requests from all applications in the set *A* to determine an optimal sensing interval for the sensors, which satisfies the requirements of all applications. The application request aggregation procedure at the aggregator is presented in Algorithm 1.
Algorithm 1: Application request aggregation procedure.

INPUT: $\gamma$, $A$, $I_{se}^{c}$, $\alpha_{new}$
OUTPUT: updating-flag; the new $I_{se}^{c}$ if updating-flag = 1
Initialize: updating-flag = 0, $x \leftarrow \infty$
REPEAT
    $x$ = aggregation($A$, $\alpha_{new}$)
    if isNew($x$, $I_{se}^{c}$) then
        $I_{se}^{c} = x$; updating-flag = 1
        return updating-flag
    end if
    return updating-flag
UNTIL there is no new application request
The aggregator aggregates the sensing interval requests of all applications together and determines a consolidated sensing interval. If the returned updating-flag is one, this means that the aggregator finds a new consolidated sensing interval for all applications, including the new application. As a result, the aggregator creates a sensing update request with the new consolidated sensing interval and sends the request to the PSM, which then forwards it to the corresponding physical sensor nodes. If the returned updating-flag is zero, this means that the current consolidated sensing interval satisfies all applications, including the new one. As a result, the new application request is hidden from the physical sensors, and no sensing update is required for physical sensors to serve the new application.
### 3.3.2. The Aggregation Function {#sec3dot3dot2-sensors-16-00992}
The design of the aggregation function, as used in the procedure above, depends on the objectives of the sensor-cloud and the request parameters (i.e., sensing interval, latency, reliability) of applications. We here define an aggregation function $f_{agg}$ for a sensing interval parameter as follows. The objective is to find a consolidated sensing interval for a set of physical sensor nodes that minimizes the number of sensing samplings and the number of packet transmissions of the physical nodes while the sensing interval requirements of all applications are still satisfied. According to the observation in \[[@B44-sensors-16-00992]\] as mentioned above, we have the following definition.
*A sensor node satisfies a sensing request of an application α if the sensor performs sensing and sensing data transmission at least every* $I_{se}^{\alpha}$ *seconds, where* $I_{se}^{\alpha}$ *is the dedicated sensing interval requested by α.*
From Definition 11, we have the lemma as follows.
*A sensor node with its actual sensing interval* $I_{se}$ *satisfies an application α with a requested sensing interval of* $I_{se}^{\alpha}$ *if and only if* $I_{se} \leq I_{se}^{\alpha}$.
For $I_{se} \leq I_{se}^{\alpha}$, it is clear that the node satisfies the sensing request of *α*, proved using Definition 11.
If $I_{se} > I_{se}^{\alpha}$, the node performs sensing every $I_{se}$ second, which is longer than the application requirement of $I_{se}^{\alpha}$. According to Definition 11, the node does not satisfy the requirement of *α*.
Given a set of N dedicated sensing intervals $I_{se}^{A} = {(I_{se}^{\alpha_{1}},I_{se}^{\alpha_{2}},...,I_{se}^{\alpha_{N}})}$ requested by a set of N applications $A = (\alpha_{1},\alpha_{2},...,\alpha_{N})$ for a set of virtual sensors $\gamma^{*}$, the purpose of the aggregation function is to find a consolidated sensing interval $I_{se}^{c}$ as follows.
$aggregation{(I_{se}^{A})} \rightarrow I_{se}^{c}$
so that:
$I_{se}^{c}$ *satisfies* $\alpha_{i}\forall\alpha_{i} \in A$.
We denote $I_{se}^{min} = \min{(I_{se}^{\alpha_{1}},I_{se}^{\alpha_{2}},...,I_{se}^{\alpha_{N}})}$ as the minimum sensing interval among dedicated sensing intervals of the applications in A, and $I_{se}^{min}$ is the dedicated sensing interval of an application $\alpha_{m} \in A$.
*A consolidated sensing interval* $I_{se}^{c}$ *of* $\gamma^{*}$ *satisfies all applications in A if and only if* $I_{se}^{c} \leq I_{se}^{min}$.
The consolidated sensing interval $I_{se}^{c}$ is the actual sensing interval of the sensor nodes in $\gamma^{*}$.
For $I_{se}^{c} \leq I_{se}^{min}$, $\forall\alpha_{i} \in A,I_{se}^{c} \leq I_{se}^{\alpha_{i}}$. According to Lemma 1, the sensor nodes perform sensing with an interval of $I_{se}^{c}$ satisfying the application $\alpha_{i},\forall\alpha_{i} \in A$.
For $I_{se}^{c} > I_{se}^{min}$, $\exists\alpha_{m} \in A:I_{se}^{c} > I_{se}^{\alpha_{m}}$. According to Lemma 1, the sensor nodes performing sensing with an interval of $I_{se}^{c}$ do not satisfy at least one application $\alpha_{m} \in A$. As a result, $I_{se}^{c}$ does not satisfy all applications in A.
The final objective of the aggregation function is to find a consolidated sensing interval $I_{se}^{c}$ that helps minimize the number of sensing $N_{s}$ and the number of data transmissions $N_{packet}$ for physical nodes corresponding to $\gamma^{*}$ in a time period *T*, while satisfying all applications in A. For simplification, we assume that after performing a sensing task, a sensor node creates and transmits a sensing data packet toward the sensor-cloud. Because *T* is a constant, we simplify the above problem by finding the maximum value of $I_{se}^{c}$.
*In a time period T, the number of sensings* $N_{s}$ *of a sensor node is* $N_{s} = T/I_{se}$*, where* $I_{se}$ *is the actual sensing interval of the node. We then have the number of data packet transmissions of the node generated by itself to be* $N_{packet} = N_{s}$.
Based on Lemma 1 and Lemma 2, the solution is found using the following theorem.
*The consolidated sensing interval* $I_{se}^{c}$ *that minimizes the number of sensing* $N_{s}$ *and the number of data transmissions* $N_{packet}$ *in any time period T for physical nodes corresponding to* $\gamma^{*}$ *while satisfying all applications in A is equal to* $I_{se}^{min}$.
If every node in $\gamma^{*}$ operates with the same $I_{se}^{c} = I_{se}^{min}$, the number of sensing and data packet transmissions of each node is: $$N_{s}^{c} = N_{packet}^{c} = T/I_{se}^{c} = T/I_{se}^{min}$$
If a node maintains multiple sensing schedules for different applications in A based on their dedicated sensing intervals \[[@B12-sensors-16-00992],[@B41-sensors-16-00992]\], we calculate the number of sensings $N_{s}^{d}$ and the number of data packet transmissions $N_{packet}^{d}$ for each node as follows. We denote $N_{s}^{\alpha_{i}}$ and $N_{packet}^{\alpha_{i}}$ as the number of sensings and the number of data packet transmissions of the node for an application $\alpha_{i} \in A$, respectively. $$N_{s}^{d} = N_{packet}^{d} = \sum\limits_{i = 1}^{N}{(N_{s}^{\alpha_{i}})} = \sum\limits_{i = 1}^{N}{(T/I_{se}^{\alpha_{i}})} = N_{s}^{\alpha_{m}} + \sum\limits_{i}^{{A\backslash}\alpha_{m}}{(N_{s}^{\alpha_{i}})}$$
For the application $\alpha_{m}$ that has a sensing interval equal to $I_{se}^{min}$, we have: $$N_{s}^{\alpha_{m}} = N_{packet}^{\alpha_{m}} = T/I_{se}^{min}$$
In addition, we have: $$\sum\limits_{i}^{{A\backslash}\alpha_{m}}{(N_{s}^{\alpha_{i}})} \geq 0$$
From Equations (7) and (8), we have: $$N_{s}^{d} = N_{packet}^{d} \geq T/I_{se}^{min}$$
Comparing Equations (5) and (9), we conclude that a node that maintains multiple sensing schedules with the dedicated sensing intervals of the applications has to perform sensing and data packet transmission more frequently than when using the consolidated sensing interval selected by our model.
With the selected consolidated sensing interval following Theorem 1, we have: $$I_{se}^{c} = I_{se}^{min} \leq I_{se}^{min}$$
According to Lemma 2, sensor nodes running with the selected consolidated sensing interval $I_{se}^{c}$ satisfy all applications in A.
For any sensing interval of $I_{se}^{c^{\prime}} < I_{se}^{min}$ to satisfy all applications in A according to Lemma 2, the number of sensings $N_{s}^{c^{\prime}}$ and the number of data packet transmissions $N_{packet}^{c^{\prime}}$ in a period of time T are calculated as follows. $$N_{s}^{c^{\prime}} = N_{packet}^{c^{\prime}} = T/I_{se}^{c^{\prime}} > T/I_{se}^{min} = T/I_{se}^{c}$$
From Equations (5)--(11), Theorem 1 is proven.
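As a quick numerical illustration (numbers chosen only for this example), suppose three applications request dedicated sensing intervals of 5 s, 10 s and 30 s for the same set of sensors. The consolidated interval is $I_{se}^{c} = I_{se}^{min} = 5$ s. Over a window of $T = 60$ s, the consolidated schedule requires $60/5 = 12$ sensings and packet transmissions per node, whereas maintaining dedicated schedules would require $60/5 + 60/10 + 60/30 = 20$, consistent with Theorem 1.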
Based on the proven Theorem 1, we have the aggregation function for sensing interval requests as follows. $$aggregation{(I_{se}^{A})} = \min{(I_{se}^{\alpha_{1}},I_{se}^{\alpha_{2}},...,I_{se}^{\alpha_{N}})}$$
The aggregator uses the above aggregation function to determine the optimal consolidated sensing interval for a set of sensor nodes, which minimizes their numbers of sensing and sensing packet transmissions while satisfying the sensing requests of all applications.
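For illustration only, a minimal sketch of the aggregator logic (Algorithm 1 combined with the min-based aggregation function above) is given below; the class and method names are our own and are not part of the proposed system:

import java.util.List;

// Sketch: aggregate dedicated sensing-interval requests and decide whether a
// sensing update request must be sent to the physical sensor manager (PSM).
final class RequestAggregator {
    private double consolidatedInterval = Double.POSITIVE_INFINITY; // current I_se^c

    // Returns true (updating-flag = 1) when a new consolidated interval is found.
    boolean onNewRequest(List<Double> existingIntervals, double newInterval) {
        double x = newInterval;                        // aggregation(...) = min(...)
        for (double i : existingIntervals) x = Math.min(x, i);
        if (x < consolidatedInterval) {                // isNew(x, I_se^c)
            consolidatedInterval = x;                  // I_se^c = x
            return true;                               // triggers a sensing update request
        }
        return false;                                  // request stays hidden from the sensors
    }

    double getConsolidatedInterval() { return consolidatedInterval; }
}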
4. A Sensing Update Request-Based Adaptive Low Power Listening Protocol {#sec4-sensors-16-00992}
=======================================================================
Another objective of our interactive model is to minimize the energy consumption of sensor nodes while satisfying the sensing requests of all applications. Therefore, in addition to the usage of the optimal consolidated sensing interval determined by the aggregator above, a physical sensor also needs to adapt its wakeup schedule based on the updates of the consolidated sensing interval. We assume physical sensors running with a low power listening protocol (LPL) \[[@B45-sensors-16-00992],[@B46-sensors-16-00992],[@B47-sensors-16-00992]\], which is one of the most popular energy-efficient protocols deployed for WSNs. In this section, we propose a sensing update request-based adaptive low power listening protocol (SLPL) for sensor nodes. A part of the protocol is based on our previous work \[[@B46-sensors-16-00992]\].
Each time a sensing update request is received from the PSM, the physical sensors change their sensing interval accordingly. This means that the traffic produced by each node and incoming traffic to each node are also changed. To optimize energy consumption, the sensors should optimize their low power listening parameters accordingly. For energy efficiency, we propose two adaptive modes for the adaptive protocol: active adaptive mode and lazy adaptive mode. The active mode is used to enable a sensor to adapt its parameters quickly upon changes in network traffic. The lazy mode minimizes the number of traffic samplings to save energy when the network traffic is stable.
4.1. Adaptive LPL Triggering Event {#sec4dot1-sensors-16-00992}
----------------------------------
When a node receives a sensing interval update request, it is implicitly notified that the traffic condition at the node is about to change. The sensor then switches its adaptive mode to the active mode; the sensing interval update request triggers such an event. Note that not only the physical nodes that are requested to update their sensing interval but also the intermediate nodes that forward the request switch to the active mode. The reason is that sensing interval changes at the requested nodes also affect the incoming traffic at their intermediate nodes.
When a sensor observes that its total traffic becomes stable in a period of *ψ* cycles, it switches its adaptive mode to the lazy mode to save energy.
4.2. Active Mode {#sec4dot2-sensors-16-00992}
----------------
In the active mode, a node measures the incoming data rate more frequently to quickly observe how much its total traffic changes to adapt its LPL parameters accordingly. We assume that a traffic measurement interval of a sensor in the active mode is $\omega_{active}$, which is shorter than that of the lazy mode.
4.3. Lazy Mode {#sec4dot3-sensors-16-00992}
--------------
In lazy mode, a node lazily performs data rate measurement to save energy. The reason is that when the network traffic becomes fairly stable and no nodes are requested to update their sensing interval, LPL parameter adaptation is normally insignificant. The benefit of such a parameter adaptation may be not considerable compared to the traffic measurement cost. In addition, the lazy mode is applied to achieve the stability of the system. We assume that the traffic measurement interval of a sensor in lazy mode is $\omega_{lazy}$, which is longer than that of the active mode.
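As a minimal sketch (our own, not the paper's implementation), the mode switching described in Sections 4.1, 4.2 and 4.3 can be summarized as follows; the parameters mirror $\psi$, $\omega_{active}$ and $\omega_{lazy}$:

// Illustrative sketch of switching between the active and lazy traffic-measurement modes.
final class AdaptiveLplNode {
    private boolean activeMode = false;
    private int stableCycles = 0;
    private final int psi;             // psi: cycles of stable traffic before switching to lazy mode
    private final double omegaActive;  // measurement interval in active mode (shorter)
    private final double omegaLazy;    // measurement interval in lazy mode (longer)

    AdaptiveLplNode(int psi, double omegaActive, double omegaLazy) {
        this.psi = psi; this.omegaActive = omegaActive; this.omegaLazy = omegaLazy;
    }

    // A sensing update request (received or forwarded) implies the traffic is about to change.
    void onSensingUpdateRequest() { activeMode = true; stableCycles = 0; }

    // Called once per cycle with whether the measured traffic stayed stable in this cycle.
    void onCycleEnd(boolean trafficStable) {
        stableCycles = trafficStable ? stableCycles + 1 : 0;
        if (activeMode && stableCycles >= psi) activeMode = false; // switch to lazy mode
    }

    double currentMeasurementInterval() { return activeMode ? omegaActive : omegaLazy; }
}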
4.4. Revisiting LPL Operations {#sec4dot4-sensors-16-00992}
------------------------------
Low power listening (LPL) \[[@B45-sensors-16-00992],[@B46-sensors-16-00992],[@B47-sensors-16-00992]\] is a common mechanism that has been greatly explored in designing energy-efficient MAC protocols. Although there are several different LPL implementations, their basic design is quite similar. In LPL, a node periodically wakes up (after a sleep interval $I_{s}$) to perform receive checks (CCA), as illustrated in [Figure 3](#sensors-16-00992-f003){ref-type="fig"}a. If there is no channel activity detected, the node then turns off its radio. If the channel is busy, the node wakes up fully and remains active for a wakeup period $T_{w}$ to listen for incoming packets, as shown in [Figure 3](#sensors-16-00992-f003){ref-type="fig"}b. In [Figure 3](#sensors-16-00992-f003){ref-type="fig"}b, the node receives several packets (i.e., $p_{1},p_{e1},p_{e2}$). Before transmitting a packet, senders send preambles until their receiver wakes up. For each packet received, a receiver extends its active time by an extended period $T_{e}$ ([Figure 3](#sensors-16-00992-f003){ref-type="fig"}b), because there may be more than one incoming packet or sender. However, whether a packet is an extending packet or not depends on the packet reception time. This will be further analyzed in the next subsections.
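To make the timing rules concrete, here is a small self-contained sketch (our own, not from the paper) that computes how long a receiver stays awake in one cycle, given the arrival times of incoming packets:

import java.util.List;

// Sketch: single-cycle LPL receiver awake time under the I_s / T_w / T_e rules above.
final class LplCycleModel {
    // arrivalTimes: sorted packet arrival times (seconds), measured from the cycle start.
    static double awakeTime(double Is, double Tw, double Te, List<Double> arrivalTimes) {
        double wakeAt = Is;                   // receiver wakes after the sleep interval I_s
        double awakeUntil = wakeAt + Tw;      // baseline wakeup period T_w
        for (double ta : arrivalTimes) {
            double rx = Math.max(ta, wakeAt); // senders send preambles while the receiver sleeps
            if (rx > awakeUntil) break;       // receiver already went back to sleep
            awakeUntil = Math.max(awakeUntil, rx + Te); // only extending packets lengthen the period
        }
        return awakeUntil - wakeAt;
    }

    public static void main(String[] args) {
        // Example: I_s = 0.5 s, T_w = 0.1 s, T_e = 0.05 s, packets at 0.58 s and 0.61 s -> about 0.16 s awake.
        System.out.println(awakeTime(0.5, 0.1, 0.05, List.of(0.58, 0.61)));
    }
}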
4.5. Motivations for the Sensing Update Request-Based Adaptive LPL Protocol {#sec4dot5-sensors-16-00992}
---------------------------------------------------------------------------
The energy consumption of an LPL protocol depends on the values of the parameters $I_{s},T_{e}$ and $T_{w}$, as discussed above. How long a node should sleep, wake up and remain awake in a cycle depends on the total traffic it has to process. When a new application is registered with a request for a new sensing interval or when an existing application updates its sensing request with a new sensing interval, the aggregator in the sensor-cloud determines whether or not a sensing update request is required to send to a set of physical sensors. If a new consolidated interval is found, the aggregator sends a sensing update request to the set of physical sensors, which requires the sensors to change their sensing interval accordingly. This means that the number of sensing packets produced by those source nodes will be changed, which obviously affects the scheduling of the nodes. For example, when a node updates its sensing interval with a lower value, it produces and forwards sensing packets more frequently. When the number of sensing packets generated by a source node is changed, the incoming traffic at intermediate nodes on the way to the sensor-cloud through the sink is also changed. This requires all source nodes and corresponding intermediate nodes to adapt their LPL parameters for energy optimization. Each parameter will be affected accordingly as follows when the sensing interval of a node is changed upon a sensing update request.
*The sleep interval* ($I_{s}$) : The sleep interval value indicates how frequently a node wakes up for receiving checks in a period of time. With a short sleep interval, a receiver node has to perform receive checks for incoming packets more frequently, which leads to a high energy consumption at the receiver side. However, a short sleep interval at the receiver side helps shorten the preamble transmission duration of its sender nodes, thus lowering the senders' energy consumption. Based on this characteristic of $I_{s}$, when the updated sensing interval of source nodes is shorter than the previous one, decreasing $I_{s}$ at intermediate nodes may be a benefit to reduce the total energy consumption of senders and their receivers. With a long sleep interval, energy consumption at the receiver side is reduced, but the energy cost at the sender nodes is increased. Based on this characteristic of $I_{s}$, when the updated sensing interval of source nodes is longer and the number of packets sent by a sender is smaller, increasing $I_{s}$ may help to reduce total energy consumption.
$T_{w}$ *and* $T_{e}$: The energy consumption characteristic of $T_{w}$ and $T_{e}$ at the receiver side and the sender side is contradictory compared to $I_{s}$. When the updated sensing interval of source nodes is shorter than the previous one, increasing $T_{w}$ and $T_{e}$ at intermediate nodes may provide a benefit to energy consumption optimization as incoming traffic will increase. On the other hand, when the number of packets sent by senders is reduced, decreasing $T_{w}$ and $T_{e}$ may save energy.
4.6. Theoretical Framework for the Sensing Update Request-Based Adaptive LPL Protocol {#sec4dot6-sensors-16-00992}
-------------------------------------------------------------------------------------
We first establish a theoretical framework for the adaptive protocol. The theoretical framework captures the energy consumption characteristics of a receiver node and its sender nodes in a time window $T_{u}$. We mainly focus on the time cost of the radio wakeup of nodes, when most of the energy is consumed.
### 4.6.1. Energy Consumption at the Receiver Side {#sec4dot6dot1-sensors-16-00992}
**For receive checks:** The number of receive checks of a receiver within $T_{u}$ is $N_{rc} = T_{u}/T_{cycle}$. Denote $T_{rc}$ as the duration for a receive check. We calculate the total radio-on time cost of a receiver for receiver checks as follows. $$E_{rc} = T_{rc}T_{u}/T_{cycle}$$
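For instance, with illustrative numbers only: if $T_{rc} = 10$ ms, $T_{cycle} = 1$ s and $T_{u} = 60$ s, the formula above gives $E_{rc} = 0.01 \times 60/1 = 0.6$ s of radio-on time spent on receive checks in the window.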
**For packet receiving** ($k > 0$): We now compute the expected receiving time cost in a cycle for a receiver. Because a receiver may lengthen its awake period when it receives a packet, the receiving time cost of a node depends on the following parameters: (1) $T_{w}$; (2) $T_{e}$; (3) the number of received packets and the inter-packet interval. Note that not all received messages trigger an extended awake period of a receiver. We define the following types of packets.
*Extending packets:* received packets that trigger an extended awake period for the receiver. Extending packets lead to an increase in the awake period of the receiver.
*Data packets with preamble transmission:* data packets are transmitted when the receiver is still sleeping; thus, preamble transmission is required until the receiver wakes up.
*Data packets without preamble transmission:* when both the sender and the receiver are awake, packets are transmitted without a requirement of preamble transmission.
The number of received packets of a node and the inter-packet interval are random variables. Therefore, we compute the radio-on time period of a receiver based on $T_{w},T_{e}$, the probability of k extending packets and the expected inter-packet interval. The probability of k extending packets depends on the correlative value between $T_{w}$ and $T_{e}$.
$T_{w} \geq T_{e}$
*An extending packet should be received after* $t + I_{s} + {(T_{w} - T_{e})}$.
When a receiver wakes up without receiving any packet, its total wakeup period is $E_{a} = T_{w}$ from the point of time $t + I_{s}$ to $t + I_{s} + T_{w}$.
If a packet p is received at time $t_{1}$ before $t + I_{s} + {(T_{w} - T_{e})}$ ($t \leq t_{1} \leq t + I_{s} + {(T_{w} - T_{e})}$), the receiver will extend its awake period until at least $t_{1} + T_{e}$. However, $t_{1} + T_{e} < t + I_{s} + {(T_{w} - T_{e})} + T_{e} = t + I_{s} + T_{w}$. The result indicates that receiving the packet p does not lead to an increase in the awake period of the receiver. According to the definition of an extending packet, p is not an extending packet.
If a packet p' is received at time $t_{2}$ after $t + I_{s} + {(T_{w} - T_{e})}$, the receiver will lengthen its awake period until at least $t_{2} + T_{e} > t + I_{s} + {(T_{w} - T_{e})} + T_{e} = t + I_{s} + T_{w}$. The extended period is equal to $t_{2} + T_{e} - {(t + I_{s} + T_{w})} > 0$. According to the definition of an extending packet, p' is an extending packet.
We use $N_{t}^{t^{\prime}}$ and $t_{i}$ to stand for the number of received packets in a time period from *t* to $t^{\prime}$ and the arrival time of packet $i_{th}$, respectively. The awake period of a receiver is lengthened if it receives at least one packet during the period from $t + I_{s} + {(T_{w} - T_{e})}$ to $t + I_{s} + T_{w}$ ($N_{t + I_{s} + {(T_{w} - T_{e})}}^{t + I_{s} + T_{w}} > 0$).
*The inter-packet interval between two consecutive extending packets* $P_{(i + 1)}$ *and* $P_{i}$ *should not be greater than* $T_{e}$.
After receiving an extending packet $P_{i}$ at time $t_{i}$, the receiver will turn off its radio and go to sleep if it does not receive any packet during the period from $t_{i}$ to $t_{i} + T_{e}$.
If a packet $P_{k + 1}$ arrives at $t_{k + 1}$ with $t_{k + 1} - t_{k} > T_{e}$, the receiver will not receive it because it is sleeping; thus, $P_{k + 1}$ is not an extending packet.
The probability for k extending packets is calculated as follows: $$\begin{array}{cl}
{P_{T_{w} \geq T_{e}}{(k)} = P{(N_{t + I_{s} + {(T_{w} - T_{e})}}^{t + I_{s} + T_{w}} > 0)}} & {\bigwedge\limits_{i = 1}^{k - 1}{(t_{i + 1} - t_{i} \leq T_{e})}} \\
& {\bigwedge(t_{k + 1} - t_{k} > T_{e})} \\
\end{array}$$
We have $P_{T_{w} \geq T_{e}}{(0)} = P{(N_{t + I_{s} + {(T_{w} - T_{e})}}^{t + I_{s} + T_{w}} = 0)}$.
We now measure the expected inter-packet interval between two consecutive extending packets. $$\overline{T_{ip}{(T_{ip}^{max})}} = \int_{0}^{T_{ip}^{max}}TP{(T_{ip} = T|N_{0}^{T_{ip}^{max}} > 0)}dT$$ where $T_{ip}^{max}$ is the maximum inter-packet interval. In this case, $T_{ip}^{max} = T_{e}$. $P(T_{ip} = T)$ is the probability that the inter-packet interval is equal to T.
We can now calculate the expected total awake period $E_{a1}{(k)}$ of a receiver with k extending packets as follows: $$E_{a1}{(k)} = \begin{cases}
{T_{w}} & {\text{if~}k = 0} \\
{T_{w} + k\overline{T_{ip}{(T_{e})}}} & {\text{otherwise}} \\
\end{cases}$$
$T_{w} < T_{e}$:
Any packet received during the awake period of the receiver is an extending packet.
If the receiver does not receive any packet in a cycle, its total awake time period is $T_{w}$, from the point of time $t + I_{s}$ to $t + I_{s} + T_{w}$. If the receiver receives a packet p at time *t*' during its wakeup period ($t^{\prime} \geq t + I_{s}$), it will extend its awake period until at least $t^{\prime} + T_{e}$. Because $t^{\prime} + T_{e} \geq t + I_{s} + T_{w}$, p is an extending packet.
In other words, the wakeup period of the receiver is extended if it receives at least one packet during $T_{w}$ ($N_{t + I_{s}}^{t + I_{s} + T_{w}} > 0$).
Theorem 3 also applies to this case.
We then have the probability of k extending packets: $$P_{T_{w} < T_{e}}{(k)} = P\left( {N_{t + I_{s}}^{t + I_{s} + T_{w}} > 0} \;\bigwedge\; \bigwedge\limits_{i = 1}^{k - 1}{(t_{i + 1} - t_{i} \leq T_{e})} \;\bigwedge\; {(t_{k + 1} - t_{k} > T_{e})} \right)$$
The expected inter-packet interval is also computed using (16). The expected total awake period $E_{a2}{(k)}$ of the receiver with k extending packets in a cycle is computed as follows: $$E_{a2}{(k)} = \begin{cases}
{T_{w}} & {\text{if~}k = 0} \\
{\overline{T_{ip}{(T_{w})}} + {(k - 1)}\overline{T_{ip}{(T_{e})}} + T_{e}} & {\text{otherwise}} \\
\end{cases}$$ where $\overline{T_{ip}{(T_{w})}}$ is the expected time period from the time the receiver wakes up to the time it receives the first packet. In case $N_{t}^{t + I_{s}} > 0$, there are data packets with preamble transmission, and the first extending packet may be received immediately when the receiver wakes up; thus, $T_{ip}{(T_{w})}$ can be equal to zero.
From Equations (16) and (18), we compute the expected total awake period of a receiver as follows: $$E_{a} = \begin{cases}
{\sum_{k = 0}^{\infty}E_{a1}{(k)}P_{T_{w} \geq T_{e}}{(k)}} & {\text{if~}T_{w} \geq T_{e}} \\
{\sum_{k = 0}^{\infty}E_{a2}{(k)}P_{T_{w} < T_{e}}{(k)}} & {\text{otherwise}} \\
\end{cases}$$
We now have the expected duty cycle length: $$T_{cycle} = I_{s} + E_{a}$$
From Equations (13) and (19), we compute the total radio-on time period of the receiver in a time window of $T_{u}$ as follows: $$E_{receiver} = {(E_{rc} + E_{fw} + E_{a})}T_{u}/T_{cycle}$$
### 4.6.2. Energy Consumption at Senders {#sec4dot6dot2-sensors-16-00992}
The radio-on time cost of senders to send packets to the receiver depends on the total number of packets, which comprises packets with preamble transmission ($N_{p}$) and packets without preamble transmission ($N_{non}$).
The expected number of packets with preamble transmission depends on $I_{s}$ and the traffic rate $R_{p}$, which changes when a sensing interval update is requested. $N_{p}$ is computed as follows. $$N_{p} = R_{p}I_{s}T_{u}/T_{cycle}$$
The expected number of packets without preamble transmission (i.e., $N_{non}$) depends on the wakeup period of the receiver and the probability of k received packets. In the case of $T_{w} \geq T_{e}$, $N_{non}$ includes the packets received in the period ($t + I_{s},t + I_{s} + {(T_{w} - T_{e})}$) and the extending packets. In the case of $T_{w} < T_{e}$, one of the received extending packets may be a packet with preamble transmission if $N_{t}^{t + I_{s}} > 0$. Therefore, $N_{non}$ is computed as follows. $$N_{non} = \begin{cases}
{\left( {\sum_{x = 0}^{\infty}xP{(N_{t + I_{s}}^{t + I_{s} + {(T_{w} - T_{e})}} = x)} + \sum_{k = 0}^{\infty}kP_{T_{w} \geq T_{e}}{(k)}} \right)T_{u}/T_{cycle}} & {\text{if~}T_{w} \geq T_{e}} \\
{\left( {\sum_{k = 0}^{\infty}kP_{T_{w} < T_{e}}{(k)} - P{(N_{t}^{t + I_{s}} > 0)}} \right)T_{u}/T_{cycle}} & {\text{otherwise}} \\
\end{cases}$$
We assume that the cost for sending a packet without preamble transmission is *β* s. The expected transmission duration of a packet with preamble transmission is $I_{s}/2$. The total awake period for sending packets is then: $$E_{senders} = N_{p}I_{s}/2 + {(N_{p} + N_{non})}\beta$$
### 4.6.3. Expected Energy Consumption {#sec4dot6dot3-sensors-16-00992}
We denote *γ*, *η* and *δ* as the energy consumption rates for the receive check, for listening/receiving and for sending. The expected energy consumption to receive and send packets is computed as follows: $$f{(I_{s},T_{w},T_{e})} = E = \gamma E_{rc} + \eta E_{a} + \delta E_{senders}$$
Our goal is to minimize E, i.e., the total energy consumption of the receiver and its senders. We use Equation (25) as the guideline for our adaptive protocol design.
### 4.6.4. Illustration to Calculate E {#sec4dot6dot4-sensors-16-00992}
E can easily be obtained for a specific traffic distribution. We assume the traffic follows a memoryless Poisson process with independent inter-packet intervals $y_{i} = t_{i + 1} - t_{i}$. As the interval between events y has an exponential distribution, we have $f{(y)} = R_{p}e^{- R_{p}y}$. Following the Poisson distribution, we also have the probability $P{(N_{t + I_{s}}^{t + I_{s} + T_{e}} > 0)} = 1 - e^{- R_{p}T_{e}}$. From the above results and Equation (14), we have: $$P_{T_{w} \geq T_{e}}{(k)} = P{(N_{t + I_{s} + {(T_{w} - T_{e})}}^{t + I_{s} + T_{w}} > 0)}{(\prod\limits_{i = 1}^{k - 1}\int_{0}^{T_{e}}f{(y_{i})}dy_{i})}\int_{T_{e}}^{\infty}f{(y_{k})}dy_{k} = {(1 - e^{- R_{p}T_{e}})}^{k}e^{- R_{p}T_{e}}$$ $$P_{T_{w} < T_{e}}{(k)} = P{(N_{t + I_{s}}^{t + I_{s} + T_{w}} > 0)}{(\prod\limits_{i = 1}^{k - 1}\int_{0}^{T_{e}}f{(y_{i})}dy_{i})}\int_{T_{e}}^{\infty}f{(y_{k})}dy_{k} = {(1 - e^{- R_{p}T_{w}})}{(1 - e^{- R_{p}T_{e}})}^{k - 1}e^{- R_{p}T_{e}}$$
Similarly, we can calculate other values. The traffic rate $R_{p}$ can be measured directly. Finally, we obtain $E = f(I_{s},T_{w},T_{e})$. We later show how to use Equation (25) to optimize $I_{s},T_{w}$ and $T_{e}$ to minimize E.
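To make the illustration more concrete, the short sketch below evaluates $f(I_{s},T_{w},T_{e})$ numerically under the Poisson-traffic assumptions above. It is only a numerical illustration: the current draws $\gamma$, $\eta$ and $\delta$ follow Table 2, while $\beta$, $E_{rc}$, $R_{p}$, the time window and the example parameter values are placeholder assumptions, and the infinite sums over k are truncated.

``` python
import math

# Hardware current draws follow Table 2; the remaining constants are assumptions.
GAMMA, ETA, DELTA = 18.8, 18.8, 17.4   # mA: receive check, listening/receiving, sending
BETA = 0.005        # s, assumed airtime of one packet sent without preamble
E_RC = 0.011        # s, assumed receive-check (CCA) time per cycle
T_U = 1.0           # s, accounting time window
R_P = 0.2           # packets/s, assumed measured traffic rate
K_MAX = 60          # truncation point for the infinite sums over k

def t_bar(rate, t_max):
    """Expected inter-packet interval E[Y | Y <= t_max] for exponential Y (Equation (16))."""
    p = 1.0 - math.exp(-rate * t_max)
    if p <= 0.0:
        return 0.0
    return (1.0 - math.exp(-rate * t_max) * (1.0 + rate * t_max)) / (rate * p)

def p_k(rate, t_w, t_e, k):
    """Probability of k extending packets under Poisson traffic (closed forms above)."""
    if t_w >= t_e:
        return (1.0 - math.exp(-rate * t_e)) ** k * math.exp(-rate * t_e)
    if k == 0:
        return math.exp(-rate * t_w)
    return (1.0 - math.exp(-rate * t_w)) * (1.0 - math.exp(-rate * t_e)) ** (k - 1) * math.exp(-rate * t_e)

def energy(i_s, t_w, t_e, rate=R_P):
    """Evaluate E = f(I_s, T_w, T_e) for one receiver and its senders."""
    # Expected total awake period E_a, mixing E_a1/E_a2 over the distribution of k.
    e_a = 0.0
    for k in range(K_MAX):
        if k == 0:
            e_ak = t_w
        elif t_w >= t_e:
            e_ak = t_w + k * t_bar(rate, t_e)
        else:
            e_ak = t_bar(rate, t_w) + (k - 1) * t_bar(rate, t_e) + t_e
        e_a += e_ak * p_k(rate, t_w, t_e, k)
    t_cycle = i_s + e_a                               # expected duty cycle length
    n_p = rate * i_s * T_U / t_cycle                  # packets needing a preamble
    mean_k = sum(k * p_k(rate, t_w, t_e, k) for k in range(K_MAX))
    if t_w >= t_e:                                    # packets sent without preamble
        n_non = (rate * (t_w - t_e) + mean_k) * T_U / t_cycle
    else:
        n_non = max(mean_k - (1.0 - math.exp(-rate * i_s)), 0.0) * T_U / t_cycle
    e_senders = n_p * i_s / 2.0 + (n_p + n_non) * BETA
    return GAMMA * E_RC + ETA * e_a + DELTA * e_senders   # Equation (25)

for i_s in (0.25, 0.5, 1.0):
    print(i_s, round(energy(i_s, t_w=0.01, t_e=0.1), 4))
```

Evaluating such a function over a grid of candidate parameter values already gives a feel for the trade-off that the optimization in the next subsection formalizes.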
4.7. Energy Consumption Minimization Problem {#sec4dot7-sensors-16-00992}
--------------------------------------------
From the result of Equation (25), we formulate the energy consumption minimization problem as follows:
**Objective function:** $$\begin{array}{clcl}
& \text{minimize} & & {f(I_{s},T_{w},T_{e})} \\
\end{array}$$ **Subject to:** $$I_{s} \geq 0$$ $$T_{w} \geq 0$$ $$T_{e} \geq 0$$
We solve the minimization problem by using extreme value theory to find the optimal values of the LPL temporal parameters (i.e., $I_{s0},T_{w0},T_{e0}$) at which the minimum energy consumption is achieved. The gradient vector of *f* is: $$\overset{\longrightarrow}{\bigtriangledown}f = \left( {\frac{\partial f}{\partial I_{s}},\frac{\partial f}{\partial T_{w}},\frac{\partial f}{\partial T_{e}}} \right)$$ the vector of first-order partial derivatives.
Because *f* achieves the extreme value at ($I_{s0},T_{w0},T_{e0}$), we have $\overset{\longrightarrow}{\bigtriangledown}f{(I_{s0},T_{w0},T_{e0})} = 0$. As a result, we have: $$\frac{\partial f}{\partial I_{s}}{(I_{s0},T_{w0},T_{e0})} = 0$$ $$\frac{\partial f}{\partial T_{w}}{(I_{s0},T_{w0},T_{e0})} = 0$$ $$\frac{\partial f}{\partial T_{e}}{(I_{s0},T_{w0},T_{e0})} = 0$$
By solving Equations (33)--(35) under the Constraints (29)--(31), we find the optimal values of the LPL temporal parameters ($I_{s0},T_{w0},T_{e0}$). We can check whether or not the obtained results lead to the minimum of the function *f* (i.e., the minimum energy consumption) by using the second derivative test with the Hessian matrix (H), based on extreme value theory. $$H = \begin{bmatrix}
\frac{\partial^{2}f}{\partial I_{s}^{2}} & \frac{\partial^{2}f}{\partial I_{s}\partial T_{w}} & \frac{\partial^{2}f}{\partial I_{s}\partial T_{e}} \\
\frac{\partial^{2}f}{\partial T_{w}\partial I_{s}} & \frac{\partial^{2}f}{\partial T_{w}^{2}} & \frac{\partial^{2}f}{\partial T_{w}\partial T_{e}} \\
\frac{\partial^{2}f}{\partial T_{e}\partial I_{s}} & \frac{\partial^{2}f}{\partial T_{e}\partial T_{w}} & \frac{\partial^{2}f}{\partial T_{e}^{2}} \\
\end{bmatrix}$$ where the derivatives are evaluated at ($I_{s0},T_{w0},T_{e0}$).
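As a rough illustration of this optimization step (not the implementation used in the protocol), the sketch below minimizes a stand-in objective with a bounded quasi-Newton solver and then applies the second derivative test through a finite-difference Hessian; in the actual protocol the objective would be $f(I_{s},T_{w},T_{e})$ from Equation (25), and the stand-in function and its coefficients are assumptions made purely for demonstration.

``` python
import numpy as np
from scipy.optimize import minimize

# Stand-in objective used only to illustrate the optimization machinery.
def f(x):
    i_s, t_w, t_e = x
    return (0.4 / i_s + 1.2 * i_s          # idle-listening vs. preamble-cost trade-off
            + 2.0 * t_w + 1.0 * t_e        # cost of staying awake
            + 0.05 / (t_w + 0.01) + 0.08 / (t_e + 0.01))  # penalty for too-short windows

# Bounded quasi-Newton search for a candidate optimum (I_s0, T_w0, T_e0).
res = minimize(f, x0=[0.5, 0.05, 0.1], method="L-BFGS-B", bounds=[(1e-3, None)] * 3)
x_opt = res.x

def hessian(fun, x, h=1e-4):
    """Central finite-difference approximation of the Hessian of fun at x."""
    n = len(x)
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            xpp = x.copy(); xpp[i] += h; xpp[j] += h
            xpm = x.copy(); xpm[i] += h; xpm[j] -= h
            xmp = x.copy(); xmp[i] -= h; xmp[j] += h
            xmm = x.copy(); xmm[i] -= h; xmm[j] -= h
            H[i, j] = (fun(xpp) - fun(xpm) - fun(xmp) + fun(xmm)) / (4.0 * h * h)
    return H

# Second derivative test: the candidate is a (local) minimum if H is positive definite.
H = hessian(f, x_opt)
print("candidate (I_s0, T_w0, T_e0):", np.round(x_opt, 4))
print("Hessian positive definite:", bool(np.all(np.linalg.eigvalsh(H) > 0)))
```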
4.8. Adaptive Operations {#sec4dot8-sensors-16-00992}
------------------------
### 4.8.1. Traffic Rate Measurement {#sec4dot8dot1-sensors-16-00992}
Theoretically, a node can calculate its incoming traffic rate based on the sensing intervals of the nodes belonging to its subtree. However, this approach may be inefficient and not scalable when the subtree rooted at a node contains a large number of nodes. For efficiency, to measure the traffic rate $R_{p}$, we use a counter to count the number of incoming packets $N_{p}$ in a time window T; $R_{p}$ is then calculated as $R_{p} = N_{p}/T$. In active mode, entered when nodes receive a sensing interval update request, traffic measurement is performed at every interval of $\omega_{active}$. In lazy mode, the interval is increased to $\omega_{lazy}$, which is longer than $\omega_{active}$, to save energy. When the traffic changes significantly, a node performs LPL parameter adaptation to minimize its energy consumption.
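A minimal sketch of such a counter-based estimator is given below. The window lengths $\omega_{active}$ and $\omega_{lazy}$ and the change threshold follow the description in this section, while the clock source, callback wiring and class interface are assumptions made purely for illustration.

``` python
import time

class TrafficRateEstimator:
    """Counter-based traffic rate estimator (a minimal sketch)."""

    def __init__(self, omega_active=2.0, omega_lazy=120.0, change_threshold=0.05):
        self.omega_active = omega_active      # s, measurement interval in active mode
        self.omega_lazy = omega_lazy          # s, measurement interval in lazy mode
        self.change_threshold = change_threshold
        self.active = False                   # lazy mode until an update request arrives
        self.count = 0
        self.window_start = time.monotonic()
        self.last_rate = 0.0

    def on_sensing_interval_update(self):
        # A sensing interval update request signals a traffic change: go active.
        # (Falling back to lazy mode once the traffic settles is omitted here.)
        self.active = True

    def on_packet(self):
        self.count += 1                       # called for every incoming data packet

    def poll(self):
        """Return a new rate R_p = N_p / T when a window has elapsed and the change
        exceeds the threshold; otherwise return None."""
        window = self.omega_active if self.active else self.omega_lazy
        elapsed = time.monotonic() - self.window_start
        if elapsed < window:
            return None
        rate = self.count / elapsed
        self.count = 0
        self.window_start = time.monotonic()
        significant = (self.last_rate == 0.0 and rate > 0.0) or (
            self.last_rate > 0.0
            and abs(rate - self.last_rate) / self.last_rate > self.change_threshold)
        self.last_rate = rate
        return rate if significant else None
```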
### 4.8.2. LPL Parameter Adaptation {#sec4dot8dot2-sensors-16-00992}
In our adaptive LPL protocol, a receiver node adapts its LPL temporal parameters to optimize the total energy consumption of itself and its senders whenever a significant change in the traffic rate is detected. A traffic rate change is normally signalled by a sensing interval update request. The optimization is based on the above theoretical framework, which computes the optimal values for $I_{s}$, $T_{w}$ and $T_{e}$. When a node obtains a new set of values ($I_{s}$, $T_{w}$ and $T_{e}$), it adjusts its LPL temporal parameters accordingly to optimize its energy consumption. The detailed operations for parameter calculation and exchange are discussed in \[[@B46-sensors-16-00992]\].
5. Performance Evaluation {#sec5-sensors-16-00992}
=========================
To evaluate the proposed system, we conduct extensive simulations as follows. The simulations consist of 120 sensor nodes with one sink node, one sensor-cloud and 10 different applications. The sensor-cloud is built as a Java-based web service running on a Core i5 desktop PC with 8 GB RAM, which provides sensing data to the different applications. Sensing data received by the sensor-cloud are stored in a database. The SSaaS offers a menu of 10 sensing interval options ranging from 120 s down to 2 s (120 s, 60 s, 40 s, 30 s, 25 s, 20 s, 15 s, 10 s, 5 s and 2 s). Simple Java applications send sensing requests to the sensor-cloud. The applications join the sensing service of the sensor-cloud in ascending order (i.e., from the first to the 10th), each at a random hour. When an application is registered, it sends a sensing request to the web service with a sensing interval selected from the options recommended by the sensor-cloud. For fairness, each application selects its sensing interval randomly.
We use three types of sensors: temperature, humidity and pressure sensors. Each type consists of 40 nodes, which are deployed randomly and use the same sampling frequency. Each sensor of a given type is assigned a multicast address. We use multicast to disseminate sensing interval update requests to specific physical sensors. Each application is assumed to request one of the three types of sensing data above. Virtual sensors are implemented as objects in the web service. Application requests can be encoded in the form of XML templates, which are decoded by the SensorML interpreter \[[@B48-sensors-16-00992],[@B49-sensors-16-00992]\]. Based on application requests, the SSaaS allocates corresponding virtual sensors to the applications. If the request aggregator determines the need for a sensing interval update for the corresponding physical sensors, an HTTP-based request is sent to the sink node, where the HTTP-CoAP converter \[[@B50-sensors-16-00992]\] presented in our previous work converts the HTTP request to a CoAP request. The request is then sent to the corresponding physical sensor nodes.
Collection tree protocol (CTP) is used as the data collection protocol \[[@B51-sensors-16-00992]\] for physical sensor nodes. According to CTP, sensor nodes form a tree-based topology toward the sink node. Sensing data are gathered at the sink node; the sink then forwards data packets to the sensor-cloud to serve applications. All schemes use LPL \[[@B49-sensors-16-00992]\] in the lower layer for energy efficiency. We implement our adaptive LPL on top of the existing TinyOS LPL MAC protocol \[[@B49-sensors-16-00992]\]. The implementation consists of three main components: the traffic rate estimator, the parameter optimizer and the duty cycling adapter. The traffic rate estimator operates based on the triggering event of sensing interval update requests. If the estimator detects a significant change in the traffic (i.e., a change of over $5\%$), it triggers a call to the parameter optimizer. The parameter optimizer calculates the optimal values of the LPL temporal parameters. To avoid complexity, we pre-compute the optimal values of those parameters under different rates corresponding to the range of sensing interval options offered by the sensor-cloud. Each node stores those values locally. The parameter optimizer uses those values to search for the optimal values of the LPL parameters in each specific case. If the optimizer finds any change in the optimal setting, it calls the adapter to adjust the duty cycle parameters accordingly.
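A minimal sketch of the lookup-driven optimizer and adapter is shown below. The table entries are placeholders rather than values computed from Equation (25), and the radio interface methods are hypothetical names used only to illustrate how the duty cycling adapter would apply a new setting.

``` python
# Pre-computed optimal LPL settings; the entries below are placeholders, not values
# derived from Equation (25). Each tuple is (max traffic rate in pkts/s, (I_s, T_w, T_e) in s).
PRECOMPUTED = [
    (0.05, (1.00, 0.010, 0.100)),
    (0.10, (0.70, 0.010, 0.100)),
    (0.25, (0.50, 0.015, 0.120)),
    (0.50, (0.30, 0.020, 0.150)),
    (1.00, (0.15, 0.020, 0.200)),
]

def select_lpl_parameters(measured_rate):
    """Parameter optimizer: pick the pre-computed setting for the smallest rate
    bucket that covers the measured traffic rate."""
    for max_rate, params in PRECOMPUTED:
        if measured_rate <= max_rate:
            return params
    return PRECOMPUTED[-1][1]          # fall back to the highest-rate setting

def adapt(radio, measured_rate, current_params):
    """Duty cycling adapter: apply a new setting only when the optimum changed.
    The radio.set_* methods are hypothetical names for the MAC-layer hooks."""
    new_params = select_lpl_parameters(measured_rate)
    if new_params != current_params:
        i_s, t_w, t_e = new_params
        radio.set_sleep_interval(i_s)
        radio.set_wakeup_period(t_w)
        radio.set_extended_period(t_e)
    return new_params
```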
5.1. System Configuration {#sec5dot1-sensors-16-00992}
-------------------------
To ensure a realistic simulation evaluation, we use the radio noise model based on closest-fit pattern matching (CPM) and an experimental noise trace (i.e., meyer-heavy.txt) from the Meyer Library at Stanford University \[[@B52-sensors-16-00992]\]. We keep the TinyOS LPL default of up to 400 CCA checks. [Table 2](#sensors-16-00992-t002){ref-type="table"} presents the detailed parameters used in our simulations. Other parameters are set to the default values of the TOSSIM radio model for CC2420.
5.2. Performance Metrics {#sec5dot2-sensors-16-00992}
------------------------
We evaluate the system in terms of: (1) energy efficiency; (2) bandwidth consumption; (3) delay; (4) reliability; and (5) scalability.
*Energy efficiency*: we use the average radio duty cycle as an indicator of energy efficiency, because most of the energy in a sensor node is consumed by its radio module.
*Bandwidth consumption*: we measure the number of messages forwarded between the sink and the sensor-cloud to indicate the bandwidth consumption of the system.
*Delay*: Because the sink-to-sensor-cloud packet delivery delay is normally negligible compared to the packet delay in constrained sensor networks, we report only the sensor-to-sink data packet delivery delay \[[@B53-sensors-16-00992]\].
*Reliability*: we measure the ratio between the number of successfully delivered packets to the sink node and the total number of forwarded packets to indicate the reliability of the system.
*Scalability*: we test the system with various numbers of applications to show the scalability of the system.
5.3. Results {#sec5dot3-sensors-16-00992}
------------
We compare the proposed system to (1) dedicated application requests with in-network data packet aggregation (D-aggregation), where in-network aggregation \[[@B11-sensors-16-00992]\] is applied for sensing traffic to improve energy efficiency; and (2) dedicated application requests with multi-task optimizations (D-opt) \[[@B12-sensors-16-00992]\], where each sensing request sent to physical sensors is treated as a task and multi-task optimization techniques are applied to improve efficiency. We report the average results of 100 different runs.
### 5.3.1. Dynamic Number of Applications and Traffic Loads {#sec5dot3dot1-sensors-16-00992}
[Figure 4](#sensors-16-00992-f004){ref-type="fig"} presents the average duty cycle of the three approaches under different numbers of applications. The duty cycle of physical nodes using the two dedicated sensing request approaches increases quickly as the number of applications increases. When a new application that requests a new sensing interval joins the sensor-cloud, sensors are requested to perform an additional sensing task to serve the application, which incurs more workload for the constrained sensor devices. A single sensor node may have to run multiple sensing tasks to serve different applications, which means that the node has to wake up more frequently to perform sensing and transmit sensing packets. As a result, its energy consumption grows in proportion to the number of applications. In the D-opt approach, multi-task optimization \[[@B12-sensors-16-00992]\] is performed to reduce the number of sensing samplings. This technique helps D-opt achieve better energy efficiency than D-aggregation. By performing request aggregation on the sensor-cloud, the proposed approach achieves the highest energy efficiency. In particular, the duty cycle of sensor nodes is maintained under $5\%$, even when they have to serve up to 10 different applications. The result indicates that the duty cycle of sensor nodes is not impacted much by the number of applications. This is because the proposed system enables each sensor to run only a single sensing task with a consolidated sensing interval that satisfies all of the applications, thus reducing the workload of physical sensor nodes considerably. The proposed system achieves a significant improvement in energy efficiency compared to the two dedicated approaches, and the improvement increases with the number of applications. With a single application, the duty cycle of the proposed system is only slightly lower than that of the two other approaches, because the scheduling of sensor nodes in the proposed system is adapted to minimize energy consumption. When the number of applications increases to 10, the improvement of the proposed system grows to $54\%$ compared to D-opt and $86\%$ compared to D-aggregation.
In [Figure 5](#sensors-16-00992-f005){ref-type="fig"}, we show the duty cycle performance of the proposed system in a specific test case where the sensing intervals requested by applications one to 10 are 60 s, 120 s, 30 s, 40 s, 2 s, 5 s, 10 s, 15 s, 20 s and 25 s, respectively. The purpose of this figure is to provide an understanding of the performance behaviour of the proposed system in relation to different sensing interval requests from applications. The figure shows clearly that the performance of our system does not depend on the number of applications, but on the traffic load of the sensor nodes. With only the first application, which requested a sensing interval of 60 s, the duty cycle of sensor nodes is about $1.6\%$ on average. When the second application, with a requested sensing interval of 120 s, is deployed, the duty cycle of the sensor nodes does not change. Sensing interval requests sent by the second application are hidden from the sensor nodes because the request aggregator determines that they can be satisfied by the current consolidated sensing interval. As a result, there is no change in the physical sensor nodes. When the third application is deployed with a requested sensing interval of 30 s, the current consolidated sensing interval is too long and does not satisfy its requirements. The request aggregator finds a new consolidated sensing interval and sends a sensing update request to the corresponding sensor nodes. With a shorter sensing interval, the traffic load of the sensors increases, and the system adjusts their LPL parameters to adapt to the new traffic condition (i.e., it shortens $I_{s}$). As a result, the average duty cycle of sensor nodes increases from $1.6\%$ to $1.9\%$. Similar to the case of the second application, sensing requests by the fourth application are hidden from the physical nodes. The fifth application has a much shorter requested sensing interval, which leads to a considerable increase in the sensing traffic of the sensor nodes; as a result, the average duty cycle jumps to $4.2\%$. After that, the average duty cycle of the sensor nodes remains stable at this value. The consolidated sensing interval does not change because the later applications (6th, 7th, 8th, 9th and 10th) have requested sensing intervals longer than the current consolidated sensing interval, which can therefore satisfy all of them. While the duty cycle of the proposed system does not change when the 6th to 10th applications are deployed, the duty cycles of the two dedicated approaches keep increasing considerably. In terms of energy consumption, the scalability and energy efficiency of the two dedicated approaches are quite low: if many applications request such a sensing service, the network lifetime of the physical sensor nodes will be short, which incurs high costs and deployment difficulties for sensor owners. Through this experiment, we conclude that the energy consumption of the proposed system does not depend on the number of applications, but on the length of the consolidated sensing interval. This characteristic enables a single sensor network to serve multiple applications with a controllable expected lifetime (i.e., by defining the allowable sensing interval options).
Physical sensor utilization and bandwidth consumption are two main factors in calculating the price of sensor-cloud services \[[@B4-sensors-16-00992]\]. [Figure 6](#sensors-16-00992-f006){ref-type="fig"} shows the bandwidth consumption of the different approaches in relation to the number of applications. With a single application, the bandwidth utilization of the three approaches is similar. When the number of applications increases, the bandwidth consumption of the two dedicated approaches increases rapidly. Although sensing traffic optimization using aggregation or task optimization is applied, the number of packets forwarded between the sink node and the sensor-cloud in the two dedicated approaches is still much higher than that of our proposed system. The gap between the bandwidth consumption of the proposed system and the two dedicated schemes widens in proportion to the number of applications. The reason is that while the traffic load of sensor nodes in the two dedicated schemes increases significantly when a new application with a new sensing interval is deployed, the traffic load of sensor nodes in the proposed system is not impacted by the number of applications, but by the length of the consolidated sensing interval, as shown in [Figure 5](#sensors-16-00992-f005){ref-type="fig"}. Sensing requests from new applications with a sensing interval longer than the consolidated sensing interval are hidden from the sensor nodes, thus saving a considerable amount of bandwidth. When all ten applications are deployed, the bandwidth consumption of the two dedicated schemes is almost double that of the proposed scheme. The results also indicate that the proposed request aggregation technique on the sensor-cloud is even more efficient than in-network aggregation and multi-task optimization, and is a promising complement to in-network aggregation for saving network resources.
Packet delivery delay in wireless sensor networks is highly dependent on the traffic load due to the limited channel bandwidth. [Figure 7](#sensors-16-00992-f007){ref-type="fig"} illustrates the correlation between the average packet delivery delay of the sensor-to-sink traffic and the number of applications. With only a few applications and low sensing traffic deployed, the packet delay of the proposed system is slightly higher than that of the other two schemes. This is because the adaptive mechanism in the proposed system automatically adapts the LPL temporal parameters in a low traffic condition (i.e., it lengthens $I_{s}$) to save energy. However, when network traffic increases due to the deployment of new applications with shorter requested sensing intervals, the packet delivery delay in the proposed system decreases significantly. The reason is that the adaptive protocol adjusts the LPL temporal parameters (i.e., shortens $I_{s}$) to adapt to the new, high network traffic condition. When the network traffic increases (as signalled by a new sensing update request), the proposed adaptive protocol automatically shortens the sleep interval of sensor nodes so that packets are forwarded quickly, saving the senders' energy and reducing the overall packet delay. Even when multiple applications are deployed, the traffic load of sensor nodes does not become excessive, as the requested sensing intervals from different applications are aggregated into a single consolidated sensing interval; thus, a high traffic load is not a serious problem impacting the packet delay in our proposed system. On the contrary, the packet delivery delay in the two dedicated approaches is highly impacted by the number of deployed applications. Each deployed application adds more traffic load on the sensor nodes. As a result, a node has to forward more and more packets within a cycle while the scheduling parameters are fixed and the channel bandwidth is limited. This is the main reason for the significant increase in packet delay when the number of applications increases.
The rapid traffic load increase as the number of applications grows also causes a considerable increase in the packet loss of the two dedicated schemes, as shown in [Figure 8](#sensors-16-00992-f008){ref-type="fig"}. With a single application, the packet loss ratio of all of the schemes is similar. However, when the number of deployed applications increases, the gap between the packet loss ratios of the proposed system and the two other schemes widens. By comparing [Figure 6](#sensors-16-00992-f006){ref-type="fig"} and [Figure 8](#sensors-16-00992-f008){ref-type="fig"}, we see that the packet loss ratio of a scheme is highly correlated with its traffic load. Note that the bandwidth consumption also reflects the total traffic load of the sensor nodes. As the traffic load of each sensor increases excessively when more applications are deployed under fixed scheduling parameters and limited channel bandwidth, the packet loss ratios of the two dedicated schemes grow noticeably. In the proposed system, although the traffic load also increases when a new, shorter consolidated interval is adopted, the packet loss ratio remains at a low level. This is due to the following reasons: (1) the traffic load increase of a node in the proposed system is negligible compared to that in the two dedicated approaches; and (2) when the traffic load increases, the adaptive protocol adapts the scheduling parameters to give a node a higher packet forwarding capability.
### 5.3.2. Scalability Test {#sec5dot3dot2-sensors-16-00992}
We are now interested in evaluating the scalability of the systems. To compare the scalability of the different schemes, we define a reliability requirement by assuming a lower bound of $90\%$ for the successful delivery ratio of sensing data packets. For each scheme, we run experiments that increase the number of applications until the reliability of the scheme falls below $90\%$. Each application randomly selects a sensing interval in the range of \[2 s, 120 s\]. The scalability of a scheme is defined as the number of applications that the scheme can support while the reliability requirement is still met. Results are reported in [Figure 9](#sensors-16-00992-f009){ref-type="fig"}. The D-aggregation scheme can support only 16 applications with different requested sensing intervals. Note that in the dedicated approaches, the sensor-cloud enables the reusability of sensing traffic only for applications with the same requested sensing interval. D-opt shows better scalability, supporting 28 applications. The two schemes show limited scalability because their traffic load is proportional to the number of applications. The proposed system achieves the highest scalability among the three schemes. In particular, the system still achieves the reliability requirement when 100 applications with different sensing intervals are deployed. In fact, the scalability of the system does not depend on the number of applications. With a lower bound of 2 s on the sensing interval, we believe the sensor network can support an unlimited number of applications as long as the sensor-cloud scales well. We find that the scalability of the proposed system instead depends on the minimum consolidated sensing interval. To support this statement, we extend the sensing interval range by decreasing the lower bound of the sensing interval to 0.5 s. We find that the reliability of the system is reduced significantly when a number of applications request a sensing interval of 0.5 s.
### 5.3.3. Economics of the Model {#sec5dot3dot3-sensors-16-00992}
According to the pricing model of the sensor-cloud \[[@B4-sensors-16-00992],[@B24-sensors-16-00992]\], applications can request sensing services on-demand and are priced using a usage-sensitive or pay-per-use model based on the sensor usage and cloud usage costs. The economics of our system is justified as follows. Firstly, the proposed system helps reduce the cost of sensor usage by reducing the number of sensing requests sent to sensor nodes and the energy consumption of sensor nodes per application. Many application sensing requests can be hidden from the sensor nodes by the request aggregator. As a result, the proposed system improves the network lifetime and, thus, reduces the cost of sensor network ownership. Secondly, the proposed system decreases the bandwidth consumption significantly when serving the same number of applications compared to the current approaches. Thirdly, by achieving high scalability, the proposed system enables the sensor-cloud to sell the sensing services of a given physical sensor network to a greater number of applications while still satisfying the requirements of all of them. This enables sensor-cloud providers and sensor owners to make more profit and helps cut the price of a sensing service for an application. As a result, our proposed system enables a win-win model for sensor-cloud providers, sensor owners and application owners.
6. Discussion and Conclusions {#sec6-sensors-16-00992}
=============================
This paper presents an efficient interactive model for the sensor-cloud to provide sensing services for multiple applications on-demand. In the interactive model, both the cloud and the physical sensor nodes are involved. The model highlights the role of the cloud in optimizing the workload of constrained physical nodes while guaranteeing that the requirements of all applications are satisfied. For that purpose, a request aggregator and a request aggregation scheme on the sensor-cloud are proposed, which minimize the number of requests sent to physical sensor nodes and, in turn, the number of sensing operations and sensing packet transmissions required of them. On the physical sensor side, sensors perform sensing tasks based on the guidance of the sensor-cloud. Based on the interactions with the sensor-cloud, a sensing request-based adaptive LPL protocol is proposed to minimize the energy consumption of the constrained sensors. Through extensive analysis and simulation experiments, we show that the proposed system achieves a significant improvement in network performance and scalability compared to current approaches and enables a win-win model for the sensor-cloud. In this version, the model is presented using sensing frequency (i.e., sensing interval) requests as an example; however, the model can be generalized to any application requirement. In future work, we will investigate extending the model to provide packet delivery latency and reliability guarantees for up-stream sensing traffic based on on-demand application requests.
This research was supported by the MSIP (Ministry of Science, ICT and Future Planning), Korea, under the ITRC (Information Technology Research Center) support program (IITP-2016-H8501-16-1008) supervised by the IITP (Institute for Information & communications Technology Promotion), and by an Institute for Information & communications Technology Promotion (IITP) grant funded by the Korea government (MSIP) (No. B190-16-2012, Global SDN/NFV Open-Source Software Core Module/Function Development).
Both authors contributed equally to designing the ideas, analyzing the results and writing the article.
The authors declare no conflict of interest.
![Location-based IoT-cloud integration.](sensors-16-00992-g001){#sensors-16-00992-f001}
![The proposed interactive model for the sensor-cloud.](sensors-16-00992-g002){#sensors-16-00992-f002}
![Low power listening protocol (LPL) operations: (**a**) CCA checks and (**b**) CCA checks with received packets.](sensors-16-00992-g003){#sensors-16-00992-f003}
![Average duty cycle vs. the number of applications.](sensors-16-00992-g004){#sensors-16-00992-f004}
![Average duty cycle vs. the number of applications that request sensing intervals as follows: 60 s, 120 s, 30 s, 40 s, 2 s, 5 s, 10 s, 15 s, 20 s and 25 s, respectively.](sensors-16-00992-g005){#sensors-16-00992-f005}
![Bandwidth consumption vs. the number of applications.](sensors-16-00992-g006){#sensors-16-00992-f006}
![Average packet delivery delay vs. the number of applications.](sensors-16-00992-g007){#sensors-16-00992-f007}
![Average packet loss ratio vs. the number of applications.](sensors-16-00992-g008){#sensors-16-00992-f008}
![Scalability test.](sensors-16-00992-g009){#sensors-16-00992-f009}
sensors-16-00992-t001_Table 1
######
List of symbols.
| Parameter | Meaning |
|---|---|
| $I_{se}^{\alpha}$ | Dedicated sensing interval of application *α* |
| $I_{se}^{c}$ | Consolidated sensing interval |
| *τ* | Sensor type |
| $RI$ | Region of interest |
| Traffic rate ($R_{p}$) | The number of incoming data packets in a unit of time (i.e., 1 s) |
| Sleep interval ($I_{s}$) | The sleep period in a cycle |
| Active period ($E_{a}$) | The total wakeup period in a cycle, which depends on the following two parameters |
| Periodic wakeup period ($T_{w}$) | The period a node remains awake after waking up in every cycle if the node does not send or receive any packet |
| Extended wakeup period ($T_{e}$) | The extra period a node extends its wakeup time after receiving a packet |
| Cycle length ($T_{cycle}$) | The period between two consecutive sleep times; $T_{cycle} = I_{s} + E_{a}$ |
| Number of received packets (k) | The number of packets a node receives in a cycle during its active period |
sensors-16-00992-t002_Table 2
######
Parameters.
| Parameter | Value | Parameter | Value |
|---|---|---|---|
| Number of clouds | 1 | Sensing interval options | \[2 s, 120 s\] |
| Number of sensors | 120 | Number of sink nodes | 1 |
| Number of applications | 10 | Number of sensor types | 3 |
| Data packet length | 32 bytes | Preamble packet length | 9 bytes |
| Time window T | 10 s | CCA checks | Up to 400 times |
| $I_{s}^{TinyOS - LPL}$ | 0.5 s | Hardware | CC2420 |
| $T_{w}^{TinyOS - LPL}$ | 10 ms | $\omega_{active}$ | 2 s |
| $T_{u}$ | 1 s | $\omega_{lazy}$ | 120 s |
| $I_{s}^{TinyOS - LPL}$ | 0.5 s | $T_{e}^{TinyOS - LPL}$ | 100 ms |
| *γ* | 18.8 mA | *η* | 18.8 mA |
| *δ* | 17.4 mA | Transmission range | 20 m |
Interleukin-23 promotes a distinct CD4 T cell activation state characterized by the production of interleukin-17.
Interleukin (IL)-17 is a pro-inflammatory cytokine that is produced by activated T cells. Despite increasing evidence that high levels of IL-17 are associated with several chronic inflammatory diseases including rheumatoid arthritis, psoriasis, and multiple sclerosis, the regulation of its expression is not well characterized. We observe that IL-17 production is increased in response to the recently described cytokine IL-23. We present evidence that murine IL-23, which is produced by activated dendritic cells, acts on memory T cells, resulting in elevated IL-17 secretion. IL-23 also induced expression of the related cytokine IL-17F. IL-23 is a heterodimeric cytokine and shares a subunit, p40, with IL-12. In contrast to IL-23, IL-12 had only marginal effects on IL-17 production. These data suggest that during a secondary immune response, IL-23 can promote an activation state with features distinct from the well characterized Th1 and Th2 profiles. |
Background
==========
Cardiovascular disease is the most common cause of death in women in the western world, and among its major modifiable risk factors are hypertension, dyslipidaemia, obesity and type 2 diabetes \[[@B1]\]. Lactation is a factor unique to women that may be associated with all these risk factors, and several studies have shown that it may affect them favourably \[[@B2]-[@B4]\]. Moreover, such advantages may persist several years post-weaning \[[@B5]-[@B13]\]. However, previous studies evaluating the association between lactation and maternal cardiovascular health have suffered from being short-term, having small sample sizes, or relying on samples from selected populations or populations with low breastfeeding rates. Norway has one of the highest breastfeeding rates in Europe, with 80% of infants still being breastfed at six months \[[@B14]\]. We have therefore studied the association between lifetime duration of lactation and maternal cardiovascular risk factors later in life in a large unselected Norwegian population sample (about 35,000 women) in which breastfeeding was common and breastfeeding duration was long.
Methods
=======
Study population
----------------
The Nord-Trøndelag Health Study (HUNT) is a population-based health survey aiming at the total adult population \>19 years of age in the county of Nord-Trøndelag, Norway. Data collection and methods have been described in detail elsewhere \[[@B15]\]. Briefly, the second HUNT study (HUNT2) took place between 1995 and 1997, and included two self-administered questionnaires and a clinical examination including standardised measurements of height, weight, waist circumference and blood pressure, as well as non-fasting measurements of blood glucose and serum lipids. The first questionnaire was sent by mail along with an invitation for a clinical examination. The questionnaire form included questions about general health and lifestyle, and the participants were requested to bring it to the physical examination. A second, more detailed questionnaire containing queries on number of live births and corresponding lactation history, as well as illnesses, medical treatment, lifestyle and socio-economic factors was distributed during the examination, to be completed at home and returned by mail.
Among 47,312 women invited to HUNT2, a total of 35,280 (75.5%) women participated. For this study, reasons for exclusion were non-response to the second questionnaire (n = 5,061), current pregnancy (n = 605), age \> 85 years (n = 343), non-attendance at the clinical examination (n = 355), self-report of infarction (n = 1), stroke (n = 9), angina (n = 12), diabetes prior to the first live birth (n = 43), less than one year since last childbirth (n = 257) or unknown lactation history (n = 2,206), leaving 26,388 women eligible for the analyses, of whom 21,368 had given birth to at least one child. For the analyses of blood pressure data we further excluded those who reported current or previous antihypertensive medication (n = 3,061 among parous women, n = 682 among nulliparous women). For the analyses of low density lipoprotein cholesterol we excluded those with triglyceride concentrations of 4.5 mmol/L or more (n = 321 among parous women, n = 78 among nulliparous women).
Lactation history
-----------------
Lactation history was self-reported by the women in the second questionnaire. For each live birth, the women reported the year of birth and corresponding lactation duration in whole months (*"How many months did you breastfeed?"*). Lifetime duration of lactation was calculated as the sum of lactation duration for all live births and categorised into five levels (none, 1--6, 7--12, 13--23, and ≥24 months).
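As a small illustration of this derivation (with hypothetical column names and made-up numbers), the lifetime duration could be summed per woman and categorised as follows:

``` python
import pandas as pd

# "births" is a long-format frame with one row per live birth; the column names
# and the numbers are hypothetical.
births = pd.DataFrame({
    "woman_id":         [1, 1, 2, 3, 3, 3],
    "months_breastfed": [8, 6, 0, 12, 10, 4],
})

# Lifetime duration = sum over all live births, then categorised into five levels.
lifetime = births.groupby("woman_id")["months_breastfed"].sum()
category = pd.cut(
    lifetime,
    bins=[-1, 0, 6, 12, 23, float("inf")],
    labels=["none", "1-6", "7-12", "13-23", ">=24"],
)
print(pd.concat({"months": lifetime, "category": category}, axis=1))
```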
Clinical measurements
---------------------
Height was measured without shoes to the nearest 1.0 cm and weight wearing light clothing to the nearest 0.5 kg. Body mass index was calculated as weight (kg) divided by the squared value of height (m^2^). Waist circumferences were measured with a flexible steel band with the participants standing upright, and the numbers were rounded to the nearest 1.0 cm. Waist circumference was measured horizontally at the height of the umbilicus \[[@B15]\].
Blood pressure was measured by specially trained nurses or technicians with oscillometric Dinamap 845 XT (® Critikon, Tampa, FL) after adjustment of the cuff size according to the arm circumference. After an initial two minutes' rest, the blood pressure was automatically measured three times at intervals of one minute. In this study, we used the mean value of the second and third measurement of systolic and diastolic blood pressure.
The blood sample (non-fasting) drawn from all participants was centrifuged at the screening station and, on the same day, transported in a cooler to the laboratory. Serum lipids were analysed at the Central Laboratory, Levanger Hospital, Nord-Trøndelag Hospital Trust, using a Hitachi 911 Autoanalyser (Hitachi, Mito, Japan), applying reagents from Boehringer Mannheim (Mannheim, Germany). Total serum cholesterol, high density lipoprotein (HDL) cholesterol and triglycerides were measured by an enzymatic colorimetric method, and HDL cholesterol was measured after precipitation with phosphotungstate and magnesium ions. Glucose was measured using an enzymatic hexokinase method. The day-to-day coefficients of variation were 1.3-1.9% for total cholesterol, 2.4% for HDL-cholesterol, 0.7-1.3% for triglycerides and 1.3-2.0% for glucose.
Low density lipoprotein (LDL) cholesterol was calculated using the Friedewald formula: LDL cholesterol = total serum cholesterol -- HDL cholesterol -- one-fifth of the triglyceride concentration \[[@B16]\]. LDL was only calculated in participants with triglyceride concentrations lower than 4.5 mmol/L.
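A direct transcription of this calculation, exactly as stated above and with the same 4.5 mmol/L triglyceride cut-off, might look as follows (the example values are hypothetical):

``` python
def ldl_cholesterol(total_chol, hdl_chol, triglycerides):
    """Friedewald estimate as stated in the text (values in mmol/L):
    LDL = total cholesterol - HDL cholesterol - one-fifth of the triglycerides.
    Returns None at triglyceride concentrations of 4.5 mmol/L or more, since
    LDL was only calculated below that threshold."""
    if triglycerides >= 4.5:
        return None
    return total_chol - hdl_chol - triglycerides / 5.0

# Hypothetical example values:
print(ldl_cholesterol(5.2, 1.4, 1.5))   # 5.2 - 1.4 - 0.3 = 3.5
```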
Analyses
--------
In order to examine whether the effects of lactation duration were modified by age, we conducted analyses stratified by age (≤ 50 years and \> 50 years) and also tested for statistical interaction by including a product term of lactation duration and age ±50 years in the regression model. We used a general linear model to calculate mean values of body mass index, waist circumference, systolic and diastolic blood pressure, serum lipid levels and blood glucose levels in five categories of lactation, and to estimate adjusted mean differences with 95% confidence intervals (CI) between the categories. Previous studies have shown beneficial maternal metabolic effects with increasing duration of lactation \[[@B6],[@B11],[@B17]\]. Hence, lifetime duration of lactation for ≥24 months, the category with the longest lactation duration, was used as the reference category, as this was assumed to be the most beneficial lactation duration. Lipid and glucose concentrations were log-transformed due to non-normal distribution, and hence we calculated geometric means and crude and adjusted differences in percent between the categories for each lipid and for glucose. *P*-values for linear trend were calculated first across the five categories of lactation duration and then across four categories of lactation duration, excluding the 'never lactated' group, by treating the categories as an ordinal variable in the regression model. All associations were adjusted for potential confounding by maternal age, education (primary school, secondary school, college/university and unknown), smoking status (current, former or never smoked), hours of physical activity per week (no activity, \<3 hours light or \<1 hour hard activity, \>3 hours light or 1 hour hard activity, \>1 hour hard activity, and unknown), marital status (unmarried, divorced, widowed and married/cohabiting) and parity (1, 2, 3 or ≥ 4 children). In analyses of serum lipids and blood glucose we also adjusted for time since last meal. In supplementary analyses, we adjusted first for time since last birth, and then for body mass index. We also did the analyses described above comparing nulliparous and parous women.
In additional analyses, we used logistic regression to estimate crude and adjusted odds ratios (ORs) with 95% CIs of hypertension (≥ 140/90 mmHg or current antihypertensive treatment), obesity (body mass index ≥ 30 kg/m^2^), and diabetes ('yes' versus 'no' to the question 'Do you have or have you had diabetes?' or blood glucose ≥ 11.1 mmol/L) associated with five categories of lactation duration. We also did corresponding analyses comparing nulliparous and parous women.
All statistical tests were two-sided, and all analyses were performed using SPSS for Windows (version 16, SPSS Inc., Chicago; IL, USA).
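The analyses were run in SPSS; purely to illustrate the structure of the adjustment models described above, a rough open-source sketch on synthetic data (hypothetical variable names, random values with no epidemiological meaning) could look like this:

``` python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in data: the variable names mirror the adjustment set described
# above, but the values are random and carry no epidemiological meaning.
rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "lactation_cat": rng.choice([">=24", "none", "1-6", "7-12", "13-23"], n),
    "age":           rng.uniform(20, 85, n),
    "education":     rng.choice(["primary", "secondary", "college"], n),
    "smoking":       rng.choice(["never", "former", "current"], n),
    "activity":      rng.choice(["none", "low", "moderate", "high"], n),
    "marital":       rng.choice(["married", "unmarried", "divorced", "widowed"], n),
    "parity":        rng.choice(["1", "2", "3", "4+"], n),
})
df["bmi"] = 25 + 0.05 * df["age"] + rng.normal(0, 3, n)
df["hypertension"] = rng.integers(0, 2, n)

# Longest lactation (>=24 months) as the reference category, as in the analysis.
df["lactation_cat"] = pd.Categorical(
    df["lactation_cat"], categories=[">=24", "none", "1-6", "7-12", "13-23"])

covars = "age + C(education) + C(smoking) + C(activity) + C(marital) + C(parity)"

# Adjusted mean differences in BMI versus the reference category (general linear model).
ols = smf.ols(f"bmi ~ C(lactation_cat) + {covars}", data=df).fit()
print(ols.params.filter(like="lactation_cat"))
print(ols.conf_int().filter(like="lactation_cat", axis=0))      # 95% CIs

# Adjusted odds ratios for hypertension (logistic regression).
logit = smf.logit(f"hypertension ~ C(lactation_cat) + {covars}", data=df).fit(disp=0)
print(np.exp(logit.params.filter(like="lactation_cat")))
print(np.exp(logit.conf_int().filter(like="lactation_cat", axis=0)))
```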
Ethical approval
----------------
The study was approved by the Norwegian Regional Committees for Medical and Health Research Ethics and by the Norwegian Data Inspectorate. Informed consent was given by all participants in the HUNT-study.
Results
=======
The parous women had a mean age of 50.4 years when attending HUNT2; they reported a median parity of two live births and a median lifetime duration of lactation of 13 months, and the time since the last delivery was on average 21.1 years (data not shown). The majority (96.7%) of the 21,368 parous women had breastfed one or more children, and approximately one in five women (21.4%) reported a lifetime duration of lactation longer than 24 months. Across the lactation categories, there were significant differences in maternal age, educational level, smoking status, the level of physical activity, marital status and parity in our study (Table [1](#T1){ref-type="table"}). Furthermore, the variables that we a priori considered as confounders, namely educational level, smoking status, level of physical activity, marital status and parity, were all associated with the outcome variables in our study (data not shown).
######
Characteristics of women in the HUNT2-study, Norway, 1995--97 (N = 26,388)
**Nulliparous women** **Parous women**
------------------------------------------------ ----------------------- --------------------------------------------- ---------------------- ----------------------- ------------------------ --------------------
**Lifetime duration of lactation (months)**
**Variables**^**a**^ **n = 5,020** **0 n = 705** **1 -- 6 n = 4,421** **7 -- 12 n = 5,401** **13 -- 23 n = 6,266** **≥ 24 n = 4,575**
Age, yrs, mean (SD) 44.7 (21.6) 50.7 (13.7) 48.4 (14.5) 49.3 (14.9) 50.1 (14.8) 53.8 (15.7)
Age at delivery of first child, yrs, mean (SD) \- 24.2 (5.1) 23.3 (4.4) 23.3 (4.2) 23.4 (3.9) 23.2 (3.7)
Parity,
Para 1 (%) \- 29.9 30.8 17.2 2.5 0.4
Para 2 (%) \- 38.3 45.1 44.0 45.1 12.9
Para 3 (%) \- 21.7 17.4 26.7 35.0 41.2
Para 4 or greater (%) \- 10.1 6.8 12.0 17.5 45.5
University/college education (%)^b^ 26.4 13.2 13.4 18.0 21.7 22.4
Never smoked (%) 62.2 37.3 36.7 45.7 50.9 60.5
High physical activity^c,d^ (%) 28.7 14.6 17.6 18.2 20.9 18.2
Unmarried/divorced (%) 61.6 24.4 28.7 23.2 16.5 11.1
Hypertension^e^ (%) 37.5 47.2 36.3 37.5 37.5 44.9
Obesity^f^ (%) 17.0 27.9 18.7 18.0 16.3 21.2
Diabetes (%)^g^ 3.0 4.4 1.9 2.0 2.4 3.7
Abbreviation: SD, standard deviation.
^a^Continuous variables presented as mean (SD), categorical variables presented as % in each category of lactation.
^b^ Number of women in the category unknown: n = 823.
^c^ High physical activity defined as ≥ 1 hour hard physical activity per week.
^d^ Number of women in the category unknown: n = 2,059.
^e^Hypertension defined as ≥ 140/90 mmHg or current antihypertensive medication.
^f^ Obesity defined as body mass index ≥ 30 kg/m^2^.
^g^Diabetes defined as blood glucose ≥ 11.1 mmol/L or self reported diabetes in the questionnaire.
There was evidence of statistical interaction between age at participation (± 50 years) and lactation duration for several of the outcome variables under study (*P*-values from interaction tests; \< 0.001 for BMI; \< 0.001 for waist circumference; 0.043 for systolic blood pressure; \< 0.001 for triglycerides; \< 0.001 for total cholesterol, and 0.011 for HDL-cholesterol). Thus, the remainder of the analyses were stratified by age. Overall, there was an inverse association between lifetime duration of lactation and both body mass index (*P*-trend, \< 0.001) and waist circumference (*P*-trend = 0.01) among women 50 years of age or younger (Table [2](#T2){ref-type="table"}). After adjusting for potential confounders, women 50 years of age or younger who reported no lactation had a body mass index that was 2.5 kg/m^2^ (95% CI 2.0, 3.0) higher and a waist circumference that was 5.3 cm (95% CI 4.2, 6.5) wider than the reference group of women who had lactated ≥ 24 months (Table [2](#T2){ref-type="table"}). Adjusting for time since last birth did not change these estimates. Among women older than 50 years, there were no significant relations between duration of lactation and body mass index. However, women older than 50 years who reported no lactation had a waist circumference that was 1.5 cm wider than the reference group who had lactated ≥ 24 months.
######
Body mass index and waist circumference in nulliparous and parous women (n = 26,388)
***Women ≤ 50 years of age (n = 14,677)*** ***Women \> 50 years of age (n = 11,711)***
-------------------------------- -------------------------------------------- --------------------------------------------- --------- ------------ ------- ------ --------- ------------
**BMI (kg/m**^**2**^**)**
Nulliparous women 3,011 24.8 25.8 25.5, 26.1 2,009 27.3 26.9 26.6, 27.1
Parous women 11,666 25.4 25.6 25.3, 25.9 9,702 27.5 27.3 27.1, 27.5
*P-*value^b^ 0.026 \<0.001
*Months of lifetime lactation*
Never 360 27.5 27.6 27.1, 28.1 345 27.7 27.7 27.2, 28.3
1--6 2,587 25.7 26.1 25.8, 26.5 1,834 27.1 27.2 26.9, 27.6
7--12 3,038 25.4 25.7 25.4, 26.0 2,363 27.2 27.2 26.9, 27.5
13--23 3,513 25.1 25.3 25.0, 25.6 2,753 27.4 27.3 27.0, 27.6
≥ 24 2,168 25.1 25.1 24.8, 25.5 2,407 28.2 27.6 27.2, 27.9
*P*- trend^c^ \<0.001 0.542
*P*- trend^d^ \<0.001 0.675
**Waist cirmcumference (cm)**
Nulliparous women 3,012 76.1 79.2 78.5, 80.0 2,009 85.3 84.2 83.6, 84.7
Parous women 11,919 78.8 79.3 78.6, 80.0 9,702 85.0 85.0 84.5, 85.5
*P-*value^b^ 0.723 0.004
*Months of lifetime lactation*
Never 360 83.4 83.9 82.6, 85.2 345 86.3 86.9 85.7, 88.2
1--6 2,587 79.5 80.5 79.7, 81.4 1,834 84.3 85.1 84.3, 85.9
7--12 3,038 78.8 79.7 78.9, 80.5 2,363 84.3 84.7 83.9, 85.5
13--23 3,513 78.1 78.9 78.1, 79.8 2,753 84.4 84.6 83.8, 85.4
≥ 24 2,168 78.4 78.6 77.7, 79.4 2,407 86.8 85.4 84.5, 86.2
*P*- trend^c^ \<0.001 0.03
*P*- trend^d^ \<0.001 0.085
Abbreviations: No, number; CI, confidence interval; BMI, body mass index.
^a^ For nulliparous women: Adjusted for maternal age, smoking status, physical activity, education and marital status. For parous women: Adjusted for maternal age, smoking status, physical activity, education, marital status and parity.
^b^*P*-value between nulliparous vs parous women. Adjusted for maternal age, smoking status, physical activity, education and marital status.
^c^*P*-trend across all five categories of lifetime lactation duration, including the category "never".
^d^*P*-trend across four categories of lifetime lactation duration, excluding the category "never".
A similar pattern was observed in age-stratified analysis of systolic and diastolic blood pressure, shown in Figure [1](#F1){ref-type="fig"}. In multi-adjusted analysis, women 50 years of age or younger who had never lactated had 4.9 mmHg (95% CI 3.2, 6.6) higher systolic blood pressure and 2.9 mmHg (95% CI 1.8, 4.1*)* higher diastolic blood pressure than women who had lactated ≥ 24 months (both *P*-trends, \< 0.001). Among women older than 50 years, there were no significant relationships between duration of lactation and systolic or diastolic blood pressure. Additional adjustment for body mass index and time since last birth attenuated the estimates of both systolic and diastolic blood pressure among women 50 years or younger, whereas the estimates among women older than 50 years remained largely similar (data not shown).
![**Adjusted mean systolic and diastolic blood pressure (with 95%CI) in nulliparous and parous women.** For nulliparous women (n = 4,338): Adjusted for age, smoking, physical activity, education and marital status. For parous women (n = 18,307): Adjusted for age, smoking, physical activity, education, marital status and parity. (Number of women in each category of lifetime duration of breastfeeding among women ≤ 50 years of age (n = 11,150): 0 months: n = 335, 1--6 months: n = 2,455, 7--12 months: n = 2,893, 13--23 months: n = 3,380, 24+ months: n = 2,087. Number of women in each category of lifetime duration of breastfeeding among women \> 50 years of age (n = 7,157): 0 months: n = 251, 1--6 months: n = 1,429, 7--12 months: n = 1,770, 13--23 months: n = 2,057, 24+ months: n = 1,650).](1746-4358-7-8-1){#F1}
The analysis of lifetime duration of lactation and levels of triglycerides, total cholesterol and LDL-cholesterol also showed an inverse pattern and an apparent dose--response relationship (all *P*-trends, \< 0.001) among women 50 years or younger, whereas among women older than 50 years no significant associations were found. In analyses of log-transformed lipid values, women 50 years or younger who had never lactated had 17% (95% CI 11, 24) higher triglyceride levels, 5% (95% CI 3, 7) higher total cholesterol levels and 8% (95% CI 4, 11) higher LDL-cholesterol levels than women who had lactated ≥ 24 months (Table [3](#T3){ref-type="table"}). Additional adjustments for time since last birth did not change these estimates, whereas additional adjustments for body mass index attenuated the estimates (data not shown). The results for HDL cholesterol showed a somewhat different pattern. Women 50 years or younger who had never lactated had 4% (95% CI 1, 6) lower HDL cholesterol levels than women who had lactated ≥ 24 months (*P*-trend, 0.008). Additional adjustments for time since last birth did not change these estimates, whereas no associations remained after adjustments for body mass index. As in the analyses of the other serum lipids, there were no significant associations among women older than 50 years.
######
Triglycerides, total-, HDL- and LDL-cholesterol and blood glucose (log-transformed) in nulliparous and parous women (n = 26,388)
***Women ≤ 50 years of age (n = 14,677)*** ***Women \> 50 years of age (n = 11,711)***
---------------------------------------- -------------------------------------------- --------------------------------------------- --------- ------------ ------- ------ --------- ------------
**Triglycerides (mmol/L)**
Nulliparous women 3,011 1.09 1.21 1.17, 1.26 2,009 1.66 1.58 1.54, 1.62
Parous women 11,666 1.14 1.16 1.12, 1.20 9,702 1.63 1.63 1.59, 1.66
*P-*value^b^ \<0.001 0.02
*Months of lifetime lactation*
Never 360 1.34 1.30 1.22, 1.38 345 1.64 1.70 1.61, 1.80
1--6 2,587 1.22 1.22 1.17, 1.27 1,834 1.55 1.61 1.55, 1.67
7--12 3,038 1.15 1.17 1.13, 1.22 2,363 1.59 1.63 1.57, 1.68
13--23 3,513 1.11 1.14 1.10, 1.19 2,753 1.64 1.65 1.59, 1.70
≥ 24 2,168 1.08 1.11 1.07, 1.16 2,407 1.70 1.63 1.57, 1.69
*P*- trend^c^ \<0.001 0.890
*P*- trend^d^ \<0.001 0.462
**Cholesterol (mmol/L)**
Nulliparous women 3,011 4.95 5.35 5.27, 5.42 2 009 6.73 6.56 6.50, 6.63
Parous women 11,666 5.27 5.19 5.13, 5.25 9 702 6.58 6.50 6.45, 6.55
*P-*value^b^ \<0.001 0.06
*Months of lifetime lactation*
Never 360 5.56 5.47 5.35, 5.59 345 6.48 6.47 6.33, 6.61
1--6 2,587 5.38 5.37 5.29, 5.45 1,834 6.52 6.51 6.42, 6.59
7--12 3,038 5.28 5.31 5.23, 5.39 2,363 6.54 6.52 6.43, 6.60
13--23 3,513 5.20 5.23 5.16, 5.31 2,753 6.61 6.58 6.49, 6.66
≥ 24 2,168 5.17 5.20 5.12, 5.28 2,407 6.62 6.51 6.42, 6.60
*P*- trend^c^ \<0.001 0.362
*P*- trend^d^ \<0.001 0.552
**HDL-cholesterol (mmol/L)**
Nulliparous women 3,011 1.47 1.48 1.46, 1.51 2 009 1.47 1.50 1.48, 1.52
Parous women 11,666 1.44 1.42 1.39, 1.44 9 702 1.46 1.45 1.43, 1.46
*P-*value^b^ \<0.001 \<0.001
*Months of lifetime lactation*
Never 360 1.39 1.38 1.34. 1.43 345 1.46 1.44 1.39. 1.48
1--6 2,587 1.41 1.41 1.39. 1.44 1,834 1.48 1.45 1.42. 1.48
7--12 3,038 1.44 1.43 1.40. 1.46 2,363 1.47 1.45 1.42. 1.48
13--23 3,513 1.45 1.44 1.41. 1.47 2,753 1.46 1.45 1.42. 1.48
≥ 24 2,168 1.45 1.43 1.40. 1.47 2,407 1.41 1.43 1.40. 1.46
*P*- trend^c^ 0.008 0.362
*P*- trend^d^ 0.06 0.265
**LDL-cholesterol**^**e**^**(mmol/L)**
Nulliparous women 2,999 3.15 3.49 3.42, 3.56 1,943 4.76 4.58 4.52, 4.65
Parous women 11,576 3.48 3.43 3.37, 3.49 9,471 4.64 4.57 4.52, 4.62
*P-*value^b^ 0.006 0.661
*Months of lifetime lactation*
Never 353 3.77 3.70 3.58, 3.83 337 4.55 4.55 4.42, 4.69
1--6 2,564 3.60 3.60 3.52, 3.68 1,798 4.57 4.57 4.49, 4.66
7--12 3,021 3.49 3.53 3.46, 3.61 2,316 4.58 4.58 4.50, 4.67
13--23 3,482 3.41 3.45 3.38, 3.53 2,677 4.64 4.64 4.55, 4.72
≥ 24 2,156 3.40 3.44 3.36, 3.52 2,343 4.58 4.58 4.49, 4.67
*P*- trend^c^ \<0.001 0.396
*P*- trend^d^ \<0.001 0.509
**Glucose (mmol/L)**
Nulliparous women 3,011 4.93 5.10 5.04, 5.16 2,009 5.61 5.52 5.46, 5.59
Parous women 11,666 5.02 4.99 4.94, 5.04 9,702 5.53 5.57 5.52, 5.62
*P-*value^b^ \<0.001 0.183
*Months of lifetime lactation*
Never 360 5.15 5.10 5.00, 5.20 345 5.51 5.62 5.48, 5.76
1--6 2,587 5.04 5.02 4.96, 5.09 1,834 5.43 5.53 5.44, 5.62
7--12 3,038 5.01 5.00 4.94,5.06 2,363 5.51 5.58 5.49, 5.66
13--23 3,513 5.02 5.00 4.94, 5.07 2,753 5.54 5.59 5.50, 5.67
≥ 24 2,168 5.01 4.99 4.92, 5.06 2,407 5.60 5.57 5.48, 5.66
*P*- trend^c^ 0.06 0.587
*P*- trend^d^ 0.232 0.268
Abbreviations: No. number; CI, confidence interval; HDL, high density lipoprotein; LDL, low density lipoprotein.
^a^ For nulliparous women: Adjusted for maternal age, smoking status, physical activity, education, marital status and time since last meal. For parous women: Adjusted for maternal age, smoking status, physical activity, education, marital status, time since last meal and parity.
^b^*P*-value between nulliparous vs parous. Adjusted for maternal age, smoking status, physical activity, education, marital status and time since last meal.
^c^*P*-trend across all five categories of lifetime lactation duration, including the category "never".
^d^*P*-trend across four categories of lifetime lactation duration, excluding the category "never".
^e^LDL was calculated only if serum triglycerides were lower than 4.5 mmol/L.
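(The derivation is presumably the standard Friedewald estimate, LDL = total cholesterol − HDL cholesterol − (triglycerides / 2.2), with all values in mmol/L; the estimate becomes unreliable at higher triglyceride levels, hence the 4.5 mmol/L cut-off.)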
In contrast to blood pressure and serum lipids, there was no statistically significant linear trend across the lactation categories in blood glucose levels (Table [3](#T3){ref-type="table"}) in either of the two age groups. However, in women 50 years or younger there was weak evidence of a dose-dependent association (*P*-trend, 0.06), but no associations remained after additional adjustments for body mass index (data not shown).
Of the total study sample of parous women (n = 21,368), 39.0% of the women had hypertension, 18.7% were obese and 2.5% had known diabetes (Table [4](#T4){ref-type="table"}). The corresponding prevalences among nulliparous women (n = 5,020) were 37.5%, 17.0% and 3.0%, respectively. Among women 50 years or younger, lifetime duration of lactation was inversely associated with the prevalence of hypertension (*P*-trend, \< 0.001), obesity (*P*-trend, \< 0.001) and diabetes (*P*-trend, 0.004). Parous women 50 years or younger who had never lactated had almost twice the risk of hypertension, more than three times the risk of obesity and more than five times the risk of diabetes compared to women in the reference group who had lactated for ≥ 24 months (Table [4](#T4){ref-type="table"}). Among women older than 50 years no associations were found. Adjusting for time since last birth did not change these estimates, whereas adjusting for body mass index attenuated the estimates for both hypertension and diabetes (data not shown).
######
Odds ratio for hypertension, obesity and diabetes in nulliparous and parous women (n = 26,388)
**Women ≤ 50 years of age (n = 14,677)** **Women \> 50 years of age (n = 11,711)**
-------------------------------- ------------------------------------------ ------------------------------------------- ------------ ------------ ------------ ------- ------------------ ------------ ------------ ------------
**Hypertension**^**b**^ No. No. hypertension No. No. hypertension
Nulliparous women 3,011 367 1.0 (Ref.) 1.0 (Ref.) 2,009 1,514 1.0 (Ref.) 1.0 (Ref.)
Parous women 11,666 2,006 1.49 0.75 0.65, 0.89 9,702 6,331 0.61 0.88 0.77, 1.00
*P*-value \<0.001 0.05
*Months of lifetime lactation*
Never 360 98 2.08 1.88 1.41, 2.51 345 110 0.85 1.26 0.96, 1.65
1--6 2,587 489 1.29 1.24 1.03, 1.49 1,834 719 0.61 0.88 0.75, 1.02
7--12 3,038 528 1.17 1.16 0.98, 1.37 2,363 864 0.69 0.93 0.80, 1.07
13--23 3,513 560 1.05 1.03 0.88, 1.21 2,753 996 0.70 0.89 0.78, 1.01
≥ 24 2,168 331 1.0 (Ref.) 1.0 (Ref.) 2,407 682 1.0 (Ref.) 1.0 (Ref.)
*P*- trend^c^ \<0.001 0.944
*P*- trend^d^ 0.009 0.218
**Obesity**^**e**^ No. No. obesity No. No. obesity
Nulliparous women 3,011 357 1.0 (Ref.) 1.0 (Ref.) 2,009 497 1.0 (Ref.) 1.0 (Ref.)
Parous women 11,666 1,501 1.10 0.82 0.70, 0.96 9,702 2,490 1.05 1.26 1.11, 1.43
*P*-value 0.013 \<0.001
*Months of lifetime lactation*
Never 360 100 3.29 3.37 2.51, 4.51 345 97 0.87 1.17 0.89, 1.53
1--6 2,587 398 1.56 1.68 1.36, 2.06 1,834 429 0.68 0.92 0.79, 1.09
7--12 3,038 420 1.37 1.46 1.21, 1.77 2,363 552 0.68 0.88 0.76, 1.01
13--23 3,513 356 0.96 1.02 0.85, 1.23 2,753 668 0.72 0.89 0.78, 1.01
≥ 24 2,168 227 1.0 (Ref.) 1.0 (Ref.) 2,407 744 1.0 (Ref.) 1.0 (Ref.)
*P*- trend^c^ \<0.001 0.861
*P*- trend^d^ \<0.001 0.457
**Diabetes**^**f**^ No. No. diabetes No. No. diabetes
Nulliparous women 3,011 20 1.0 (Ref.) 1.0 (Ref.) 2,009 129 1.0 (Ref.) 1.0 (Ref.)
Parous women 11,666 77 0.99 0.59 0.32, 1.11 9,702 466 0.74 1.01 0.80, 1.26
*P*-value 0.102 0.957
*Months of lifetime lactation*
Never 360 10 5.60 5.87 2.25, 15.3 345 21 0.71 1.29 0.78, 2.14
1--6 2,587 18 1.37 1.49 0.63, 3.53 1,834 64 0.57 0.71 0.50, 1.01
7--12 3,038 19 1.23 1.29 0.57, 2.89 2,363 91 0.51 0.74 0.55, 1.00
13--23 3,513 19 1.07 1.06 0.48, 2.33 2,753 131 0.92 0.89 0.69, 1.16
≥ 24 2,168 11 1.0 (Ref.) 1.0 (Ref.) 2,407 159 1.0 (Ref.) 1.0 (Ref.)
*P*- trend^c^ 0.004 0.202
*P*- trend^d^ 0.292 0.014
Abbreviations: No, number; OR, odds ratio; CI, confidence interval, Ref., reference category.
^a^ For nulliparous women: Adjusted for maternal age, smoking status, physical activity, education and marital status. For parous women: Adjusted for maternal age, smoking status, physical activity, education, marital status and parity.
^b^Hypertension defined as ≥ 140/90 mmHg or current antihypertensive treatment.
^c^*P*-trend across all five categories of lifetime lactation duration, including the category "never".
^d^*P*-trend across four categories of lifetime lactation duration, excluding the category "never".
^e^Obesity defined as BMI ≥ 30 kg/m^2^.
^f^Diabetes defined as blood glucose ≥ 11.1 mmol/L or self reported diabetes in the questionnaire.
Discussion
==========
In this large population-based study, we found that prolonged lifetime lactation was associated with a more favourable cardiovascular risk profile among women 50 years or younger. Parous women ≤ 50 years who had never lactated were more likely to have developed hypertension, obesity and diabetes than women who had the longest lactation duration. Furthermore, there were strong indications of a dose--response association between the total duration of lactation and a favourable cardiovascular risk profile. Although the largest difference was found for women who had never lactated compared to those who had ever lactated, our analyses showed that the associations remained significant also within the lactation categories. Among women older than 50 years, only waist circumference and possibly diabetes were associated with lactation duration.
Our findings are consistent with recent studies showing that the favourable effects of lactation on maternal metabolic health persist post weaning \[[@B7],[@B8],[@B11],[@B17]-[@B20]\], further supporting the notion that lactation may induce long-term beneficial effects on maternal blood pressure, weight \[[@B21]\], diabetes \[[@B6],[@B18]\], components of the metabolic syndrome \[[@B5],[@B19]\] and cardiovascular health \[[@B7],[@B11],[@B17]\]. Previous studies have shown that the beneficial effect of lactation on cardiovascular risk factors seems to wane with time since last birth \[[@B6],[@B11],[@B18]\]. Adjusting for this period did not change the estimates in our study. On the other hand, the stronger associations observed among women aged ≤ 50 years compared to those aged \>50 years could possibly be due to the shorter period since last birth.
The present study was conducted in a large and unselected population with a wide age range and a high participation rate. Breastfeeding was common, and this observation is consistent with other studies showing that breastfeeding rates in Norway are among the highest in industrialised countries \[[@B22]\]. We therefore have a large sample size of women who have lactated. Combined with the standardised measurements of lipids, anthropometric measures and blood pressure, it provides a unique opportunity to study the association of lactation and cardiovascular risk factors and whether differences exist by duration of lactation.
However, as in all observational studies, the cross-sectional study design calls for cautious interpretation of the findings. It is possible that women who breastfeed their children have a better health status, healthier lifestyles and higher socioeconomic status than women who do not breastfeed \[[@B23]\]. Given the high rates of breastfeeding among Norwegian women, the group of women who had never lactated in our study was less than 4% of the entire sample. Thus, it is possible that the women who had never lactated in our study differed with respect to major confounders from what might be expected in populations where breastfeeding rates are lower.
A previous study among Norwegian women found that maternal age, education and smoking were among the most important factors associated with lactation duration \[[@B22]\]. In addition to these factors, we found significant differences across the lactation categories in the level of physical activity, marital status, and parity in our study. Although all of these factors are known to be associated with risk of cardiovascular disease, adjusting for them did not materially change the estimated associations. However, residual confounding due to unmeasured and unknown factors cannot be ruled out, such as pre-pregnancy and early postpartum health status. Women with gestational diabetes mellitus are at an increased risk of developing type 2 diabetes \[[@B24]\]. Furthermore, gestational diabetes mellitus may have a role in impacting breastfeeding initiation and success, and could thus act as a major confounder. In a recent study, longer duration of lactation was associated with lower incidence of the metabolic syndrome both among women with and without a history of gestational diabetes mellitus, and the findings were particularly striking for women who developed gestational diabetes mellitus during their pregnancy \[[@B10]\]. In our study, women reporting a diagnosis of diabetes prior to first pregnancy were excluded from our analyses. Nevertheless, the lack of data on the history of gestational diabetes mellitus during pregnancy is a limitation of our study.
Moreover, the potential for reverse causation must be considered when interpreting the results from the present study. Obesity \[[@B25],[@B26]\] and type 1 diabetes \[[@B27]\] have been linked to difficulties with lactation, and hence shorter lactation duration could be a marker for an already existing abnormal metabolic profile influencing whether the women lactate and for how long. Unfortunately, we did not have pre-pregnancy measurements of weight and height and could therefore not adjust for pre-pregnancy body mass index. However, when we adjusted for body mass index measured at study participation in supplementary analyses, the adjustments did not change our estimated associations substantially, with the exception of HDL-cholesterol. Obesity may either precede \[[@B26]\] or follow lactation practices. Thus, one may argue that body mass index measured at study participation acts rather as an intermediate factor, and hence should not be adjusted for as a confounder in the analyses.
Diet accounts for much of the variation in coronary heart disease risk \[[@B28]\]. The HUNT2 study was not designed to measure dietary intake, and we had insufficient dietary data to adjust for dietary factors in our analyses. However, previous studies have found that the association between lactation and cardiovascular health persists even after adjustment for dietary intake \[[@B5],[@B7],[@B11],[@B17]\].
Another limitation of the study is the lack of data on lactation intensity. Higher intensity of lactation has been associated with improved fasting glucose and lower insulin levels at 6--9 weeks postpartum in a previous study \[[@B29]\]. Data on lactation intensity could therefore possibly have strengthened our estimates of associations among women with higher, and attenuated the associations among women with lower, lactation intensity. Moreover, lactation was assessed retrospectively. Nevertheless, studies have shown that maternal recall of lactation is fairly valid and reliable \[[@B30]\], even after 20 years \[[@B31]\]. However, even if misclassification should exist, it is not likely to be differential according to cardiovascular risk factors. Our observed estimates are therefore likely to be conservative.
Furthermore, selection bias could have influenced our results. However, a non-responder study showed that the most important reasons for not attending the HUNT2 study in the age group 20--69 were lack of time or having moved away, while among those aged 70 years or more, immobility and frequent follow-up by a medical doctor were important reasons \[[@B32]\]. We do not believe that the reasons for non-attendance were unevenly distributed across the lactation categories, and we find it unlikely that selection bias would have altered the results in our study.
During pregnancy the maternal metabolism is profoundly changed, and the changes that occur could theoretically increase women's risk of metabolic disease. These changes include accumulation of adipose tissue stores \[[@B33]\], increased insulin resistance \[[@B34]\] and blood pressure, \[[@B35]\] as well as a change of the quantity and quality of circulating lipoproteins \[[@B36],[@B37]\]. By the end of the pregnancy, LDL cholesterol and triglyceride levels are two to three times higher compared with pre-pregnancy levels. In fact, some studies have shown that increasing parity may increase risk of cardiovascular disease \[[@B38],[@B39]\]. These studies do not, however, include data on lactation. Our findings of a more favourable cardiovascular risk profile associated with lactation seem to confirm the recent suggestion that lactation could affect risk of metabolic disease by facilitating a faster resetting of the maternal metabolism after pregnancy \[[@B40]\].
Lactation increases a mother's metabolic expenditure by an estimated 480 kcal/d \[[@B41]\], and although the association between lactation and postpartum weight loss so far remains inconclusive \[[@B21],[@B42]-[@B45]\], lactation could reduce cardiovascular risk by mobilising accumulated fat stores. Furthermore, lactation provides a route for physiologic excretion of large amounts of cholesterol, which could explain the speedier return of blood lipids to pre-pregnancy levels observed in lactating mothers \[[@B3]\]. Additionally, hormonal effects, such as those of prolactin and oxytocin, may affect maternal blood pressure \[[@B46]\]. Our data, from women with an average time since last pregnancy of about 21 years, suggest that these favourable changes persist over the long term and are not limited to the period of lactation. Among women older than 50 years, however, we found no linear trend in the association between lifetime duration of lactation and cardiovascular risk factors similar to that in younger women. Still, women \> 50 years who had never lactated had a significantly higher body mass index, wider waist circumference, higher lipid and glucose levels and a higher prevalence of hypertension, obesity and diabetes compared to women who had lactated. Menopause appears to be a time of transition to increased cardiovascular risk, including adverse changes in the serum lipid profile \[[@B47]\]. Hence, the cardiovascular risk alterations occurring during the menopausal transition may dilute the possible beneficial effects of lactation on maternal metabolic health, as shown in previous studies \[[@B6],[@B11],[@B18]\].
Lactation may also improve insulin sensitivity and glucose tolerance. Insulin levels and insulin/glucose ratios are lower, and carbohydrate use and total energy expenditure are higher, in lactating women than in women who do not lactate \[[@B41]\]. Our data suggest a relation between lactation and glucose levels later in life. However, no statistically significant association with lifetime duration of lactation could be found in either age group, although the association among women 50 years or younger was close to significant. In contrast, the association between lifetime duration of lactation and the prevalence of diabetes was strong and significant in the younger age group, although not among the older women, further supporting the notion that the possible effect wanes with time since last delivery. These mechanisms, together with our results, indicate that lactation helps women return to pre-pregnancy metabolism more quickly post partum, which could in turn affect the metabolic disease risk profile later in life.
Our results indicate that lactation may have a considerable impact on cardiovascular risk factors. The difference in systolic/diastolic blood pressure between women 50 years or younger who had never lactated and women who had lactated for 24 months or more is similar to the blood pressure-lowering effect of salt reduction (4/2 mm Hg) among normotensive individuals \[[@B48]\]. Furthermore, it has been estimated that a 10% reduction in serum cholesterol could halve the risk of ischaemic heart disease at age 40 \[[@B49]\], and hence the 5% difference in total cholesterol levels observed between women 50 years or younger who had never lactated, and women who had lactated more than 24 months, could represent a substantial risk reduction. Also, the 17% difference in triglycerides between women 50 years or younger who had never lactated and women who had lactated 24 months or more must be added to this altered cardiovascular disease risk pattern.
Conclusions
===========
In conclusion, this large population-based study showed that lactation is associated with a more favourable cardiovascular risk profile in mothers later in life, and that the beneficial effects are most prominent among women 50 years or younger. Lactation may hence reduce the adverse pregnancy-related changes in cardiovascular risk factors, with effects lasting even beyond the childbearing years. If the observed associations are causal, lactation could have substantial potential for reducing women's risk of cardiovascular disease. Additional studies are needed to confirm the observed protective associations and their underlying mechanisms.
Competing interests
===================
The authors declare that they have no competing interests.
Authors' contributions
======================
STN conceived the idea, did the analyses and wrote the paper. KM participated in the planning of and data collection in the HUNT2 Study. SF, TILN, LFA and KM participated in the analyses, interpreted the results and wrote the paper. All authors discussed and interpreted the findings and contributed to the final paper.
Acknowledgements
================
We thank the HUNT Research Centre for providing the data and the women who participated in this study. Nord-Trøndelag Health Study (The HUNT Study) is a collaboration between HUNT Research Centre (Faculty of Medicine, Norwegian University of Science and Technology NTNU), Nord-Trøndelag County Council and The Norwegian Institute of Public Health. The study was financially supported by the Norwegian University of Science and Technology and by the Central Norway Regional Health Authority.
|
Steven (No truly it is) joined Apr 1, 2013
Admiral Steven Hackett is a top-ranking official of the Alliance Navy and commanding officer of the Fifth Fleet. He is based at Arcturus Station.
Hackett was born in Buenos Aires in 2134. When his mother died in 2146, he was placed in the Advanced Training Academy for Juveniles, where his affinity for science and leadership quickly became evident. Hackett enlisted in 2152, volunteering for high-risk missions to colonize space beyond the Sol Relay. He was commissioned as a second lieutenant in 2156 and participated in the First Contact War the following year. His rare ascent from enlisted man to admiral remains an Alliance legend.
On relatively equal political status with both Ambassador Udina and Captain Anderson, Hackett is one of the three officers who recommends Commander Shepard as the first human Spectre.
Following the victory against Sovereign, Hackett is promoted to head of the Alliance military.
|
A longitudinal study of respiratory health of toluene diisocyanate production workers.
A longitudinal comparison of 305 toluene diisocyanate (TDI) and 581 hydrocarbon workers employed at a Texas chemical manufacturing facility from 1971 through 1997 tested whether workplace exposure to TDI was associated with changes in any of the respiratory measures collected by the company's health surveillance program. Mean TDI exposure was 96.9 ppb-months, or 2.3 ppb per job. At the end of the study, there were no differences in self-reported symptoms between the groups. Longitudinal analyses of symptoms and pulmonary function showed no correlation with TDI exposure; forced expiratory volume in 1 second declined by an average of 30 mL per year. We concluded that exposure to TDI at workplace concentrations was not associated with respiratory illnesses in this cohort, and consistent with other recent research, it seemed not to accelerate the normal age-related decline in pulmonary function. |
Using a quality framework to assess rural palliative care.
High-quality palliative care may remain out of reach for rural people who are dying. The purpose of this study was to explore the opportunities and issues affecting the provision of high-quality palliative care from the perspective of nurses employed in two rural health regions. Using an interpretive descriptive design, focus groups and in-depth individual interviews of 44 nurses were conducted. Descriptions of challenges and opportunities fell into three themes: effectiveness and safety, patient-centredness, and efficiency and timeliness. Patient-centredness was seen as a major strength of rural palliative care. Major challenges included provision of adequate symptom management and support of home deaths. The scarcity of health human resources and the negative impact these shortages had on all dimensions of palliative care quality consistently underpinned the discussions. Implementing outcome measurements related to symptom management and home deaths may be a critical foundation for enhancing the quality of rural palliative care. |
To All:
Attached please find a preliminary agenda for our upcoming Wharton Risk
Management and Decision Processes Center Advisory Committee Meeting. We look
forward to seeing all of you on June 14, 2001. Other details regarding the
meeting will be arriving shortly. In the meantime, if you have any questions
or comments, please do not hesitate to contact Kate Fang or myself.
Regards,
Theresa Convery
<<AGENDA-preliminary.doc>>
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Theresa Convery
Administrative Assistant
Risk and Decision Processes Center
The Wharton School of the University of Pennsylvania
(215) 898-5688/Fax: (215) 573-2130
[email protected]
- AGENDA-preliminary.doc |
The importance of early treatment of the anaemia of chronic kidney disease.
The beneficial effects of treating the anaemia of dialysis-dependent patients with erythropoietin on the improvement of cardiac status, exercise capacity, cognitive function and quality of life are well established. Equally, if not more important is the reduction in morbidity and mortality that accompanies the treatment of anaemia with epoietin. These documented improvements in outcomes of care notwithstanding, mortality and morbidity due to cardiovascular disease (CVD) remain high in dialysis patients. Recent epidemiological evidence indicates that: (i) the prevalence of CVD is very high in patients at the start of dialysis; (ii) pre-existing CVD is the major risk factor for mortality and morbidity on dialysis; (iii) CVD begins early in the course of kidney disease, shows an inverse relationship to kidney function and increases in prevalence and severity with progression of kidney disease; and (iv) corrective measures, which take 3-5 years to show a favourable effect, must be instituted well before the initiation of dialysis. Hypertension and anaemia, which develop in the course of progressive reduction in kidney function, are the principal risk factors for the prevalence of left ventricular hypertrophy (LVH) in those with chronic kidney disease, and their treatment has been shown to arrest or reverse LVH in these individuals. Whereas the treatment of hypertension early in the course of kidney disease has been incorporated into clinical practice, there has been reluctance in the treatment of anaemia because of the possibility of worsening kidney function with epoietin, as shown in rats. There is now convincing evidence that epoietin has no potential adverse effect on kidney function in humans. While the most compelling reason for the early treatment of the anaemia of kidney disease is its beneficial effect on cardiovascular function, other documented potential benefits are improvements in exercise capacity, cognitive function and quality of life. |
Q:
Moving DNS to a new provider; their servers don't respond but they say it's OK?
Currently I have a domain and rent a dedicated server for it. It runs a Windows port of BIND (installed and maintained by Plesk) and provides the DNS for my domain.
I'm looking to wind down that dedicated server and have purchased another one (from OVH, as it happens) onto which a lot of my services have moved, but I haven't installed any DNS daemon on the new server. Instead I've created a zone for my domain on OVH's DNS servers via their control panel
Their support tell me that the last step I need to execute to switch everything over is to nominate their DNS servers as responsible for my domain, on the config pages of the registrar
The reason I haven't done so so far is that I cannot get their DNS servers to answer any queries! If I do this on my OVH server (note: ns108 is the DNS server that they have allocated to my account):
c:\> nslookup
> server ns108.ovh.net
Default Server: ns108.ovh.net
Addresses: 2001:41d0:1:1998::1
213.251.128.152
> google.com
*** ns108.ovh.net can't find google.com: No response from server
> server 213.251.128.152
Default Server: [213.251.128.152]
Address: 213.251.128.152
> google.com
*** [213.251.128.152] can't find google.com: Query refused
As noted, their techs say "just switch it over, it'll all work fine", but it doesn't seem like much of a reassurance. It doesn't matter what domain I put into the query; the response is the same.
Is there a technical reason why their servers won't start responding until I nominate them as responsible for my domain with the registrar? i.e. is what the techs are telling me true, that I can switch and not worry? I don't really want to bring down every site we operate as a result of no-one being able to look up our main domain any more - that would be bad for business...
edit: Update:
I really struggled with inconsistent behaviour of nslookup here - the solutions below advised me to specify the OVH DNS server on the command line - I thought I HAD specified it (in the way I was using nslookup, by issuing a server dns108.ovh.net command after running nslookup) but it never worked out. I've since determined that it does work when specified in interactive mode, if I use the IPv4 IP of the server rather than its name. I can only assume it's because specifying by name in interactive mode causes the lookup of the server to return an IPv6 address (first) and I suspect this is the one being used, as my system isn't configured for IPv6
Working:
c:\> nslookup mail.mydomain.com dns108.ovh.net
c:\> nslookup mail.mydomain.com 213.251.188.152
c:\> nslookup
>server 213.251.188.152
>mail.mydomain.com
Not working:
c:\> nslookup
>server dns108.ovh.net
Default Server: dns108.ovh.net
Addresses: 2001:41d0:1:4a98::1
213.251.188.152
>mail.mydomain.com
c:\>ping 2001:41d0:1:1998::1
Pinging 2001:41d0:1:1998::1 with 32 bytes of data:
PING: transmit failed. General failure.
Thanks to all who helped me get to the bottom of this. Apologies for the misdirection earlier in putting google.com into the example commands - I was using my actual domain, but also trying google and other common names for comparison. My understanding of DNS is now much improved!
A:
You simply don't (yet) understand the difference between authoritative and recursive DNS servers. Therefore, your testing is based on wrong assumptions. Corrected testing at the end of my answer.
On your OVH server, you use cdns.ovh.net as your recursive DNS server. It resolves ANY domain for you when you query from your OVH server. That's usually preconfigured during installation, so you don't need to change anything. Also, cdns.ovh.net has nothing to do with your domain.
Instead, see OVH's New DNS servers guide:
Attention! : Since February 2007, OVH has implemented a series of new
shared DNS servers. To check on which one your DNS domain is hosted,
you must go to the OVH Manager, click on the field "Domain & DNS",
"DNS Zone" option, and watch the 2 fields of the type NS (like
dnsXX.ovh.net and nsXX.ovh.net); so, for a newly created domain, here
are the recommended configurations.
The site lists nameservers for different types of services (shared hosting, webhosting, dedicated server), but although you have a dedicated server, you seem to have chosen to use their DNS servers instead, so what's said on that page doesn't directly apply to you.
You should go with what the OVH Manager and OVH support say. However, based on the pattern on that page I would guess that this server has a pair, and you should always specify at least two nameservers at the registrar:
Primary DNS: dns108.ovh.net
IP : 213.251.188.152
Secondary DNS: ns108.ovh.net
IP : 213.251.128.152
First test that both these name servers answers authoritatively for your domain. You can do this from your own local computer, as this has nothing to do with your OVH server and serves the whole world, anyway. Here, the example.com represents your domain:
nslookup example.com dns108.ovh.net
nslookup example.com ns108.ovh.net
The servers are ready to be nominated if both tests pass both of these conditions:
Both servers answer for your domain and give the expected IP address.
The answers are authoritative, i.e. in nslookup there is no "Non-authoritative answer" line. Here, b.iana-servers.net is authoritative for example.com but cdns.ovh.net isn't:
$ nslookup example.com b.iana-servers.net
Server: b.iana-servers.net
Address: 2001:500:8d::53#53
Name: example.com
Address: 93.184.216.34
$ nslookup example.com cdns.ovh.net
Server: cdns.ovh.net
Address: 2001:41d0:3:163::1#53
Non-authoritative answer:
Name: example.com
Address: 93.184.216.34
Hint: if OVH says the DNS for your domain is working, there's no reason to believe the tests wouldn't pass. They know what they are doing. But this is what you can do if you are still really suspicious.
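If you have dig available (it ships with BIND, including the Windows port you are already running, and with practically any Linux box), it makes the authoritative check more explicit than nslookup. This is only a sketch, with example.com standing in for your own domain:
$ dig @dns108.ovh.net example.com A +norecurse
$ dig @ns108.ovh.net example.com A +norecurse
In each reply, the flags field in the header should include aa (authoritative answer) and the answer section should contain the IP you expect. A status of REFUSED, or a reply without the aa flag, means that server is not serving your zone yet.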
A:
I’d imagine that their name server has been configured to not respond to recursive queries and that the name server should only answer requests for domain names that it is authoritative for. The second response replies with Query refused as would be expected for a non-recursive name server.
I see that you have created a zone for your domain on their DNS servers via their control panel. You should try using nslookup with their name server (as you’ve shown) to query your own domain name – not google.com.
nslookup yourdomain.com ns108.ovh.net
If that works, you can then go to your registrar’s interface to set their name servers as the authoritative name servers for your domain.
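As a final sanity check before touching the registrar (a sketch only, with yourdomain.com as a placeholder and assuming you were given the dns108/ns108 pair mentioned in the other answer), query both servers for the zone's SOA record from your Windows box:
c:\> nslookup -type=SOA yourdomain.com dns108.ovh.net
c:\> nslookup -type=SOA yourdomain.com ns108.ovh.net
Both should return an SOA record (serial, refresh and retry timers, and so on) rather than "Query refused" or "Non-existent domain"; once they do, changing the delegation at the registrar is safe.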
|
Tag Archives: Speakup Oklahoma
A few years back, the Obama Administration created the White House Petition website on which citizens could introduce issues they would like to see the President address. Other people could then sign those petitions to add their support and ensure that the President responded to them. That site has become a valuable tool for allowing voters to express their desire for change in government, even if it has not had the desired effect of changing the President's opinions on certain matters.
This month, the Oklahoma Legislature is following in that arena and has created its very own public forum for changing the direction of politics and public opinion in this very state. This site, Speakup Oklahoma, already has a lot of topics on it ranging from campaign finance reform, the pay of public employees, hemp and marijuana legalization, to school choice. So it is only proper for us to present our very own proposal for easing Oklahoma’s strict ballot access laws.
Currently, Oklahoma is one of the toughest states in the U.S. in which to form a new political party. These laws have created a drought of ideas in our political landscape. We need to reduce the petition requirement for forming a new party from the current 5% of the last general election to the flat 5,000 signatures parties needed prior to the change in 1974.
We would like to see a massive outpouring of support for this proposal. Signing up for the site is incredibly easy. You can create an account using Facebook or fill out a simple form on the site. Once you do that, go to the Ballot Access Reform topic and vote for it by clicking the vote counter. If you feel inclined you can also leave a comment in support. I want to see this topic reach the top of the vote count pile. So, get on it.
Post navigation
Search
The Case For Ballot Access Reform
Oklahomans for Ballot Access Reform is once again calling on state lawmakers to demonstrate their faith in democracy and hand the keys to the electoral process back over to the voting public. To this end, we have written and published a brief putting forth the evidence in support of Ballot Access Reform.
Read and Share the Press Release and Ballot Access Brief. |
In a way it was a fitting ending for UTSA.
After a season of narrow defeats, the Roadrunners’ final close call was the hardest of all to swallow.
They waited most of the afternoon on Selection Sunday only to learn they were one of three eligible teams in the nation not receiving a bowl bid. UTSA was left out despite qualifying with six wins.
“At one point we had an opportunity where our destiny was in our own hands,” Coach Frank Wilson said. “We were sitting at five wins with four games remaining. You would think we would be able to do so. I would have bet my life on it. We put our destiny in someone else’s hands. This was the result.”
Conference USA wound up getting a record nine of 10 eligible teams into bowls. UTSA (6-5) was the one left out, joining Buffalo (6-6) and Western Michigan (6-6) as the three bowl-eligible teams nationally not receiving bids. It came down to the wire, with bowls that had reportedly been considering UTSA choosing other teams instead.
The Gildan New Mexico Bowl, where UTSA made history a year ago by playing its first postseason game in school history, this time chose Marshall (7-5) from C-USA and Colorado State (7-5) from the Mountain West Conference.
Numerous projections had UTSA headed to the Lockheed Armed Forces Bowl in Fort Worth to play Army (8-3). But bowl officials chose San Diego State (10-2) of the Mountain West Conference.
UTSA went bowling with a 6-6 regular-season record a year ago when there weren’t enough six-win teams to fill all the bowl slots. This year, there were more bowl-eligible teams (81) than available spots (78).
After beginning the season with so much promise at 3-0, UTSA lost three of its final four games. And of the Roadrunners’ five losses, four were by single digits. The offense failed to score a touchdown in the final two games, resulting in the firing of offensive coordinator Frank Scelfo.
Wilson felt the firing had no effect on UTSA’s bowl chances.
“The thing people remember is November,” Wilson said. UTSA was 1-3 during the month.
The players reacted to the news via Twitter.
“Man, that’s sad,” sophomore linebacker Josiah Tauaefa said. “I love my team and I hurt for the seniors.”
Said senior defensive end Marcus Davenport on not getting to play one more time: “So many things changing today. The band is breaking up.”
[email protected]
Twitter: @johnfwhisler |
Larry Fink: People forget the actual size of passive investing
People often like to talk about how passive investing is taking over the investing world. But BlackRock (BLK) CEO Larry Fink says this just isn’t so.
“The flows are large, but passive represents globally about 20% of the overall equity markets,” Fink told Yahoo Finance Editor-in-chief Andy Serwer in an interview at UCLA. “Today in the United States it’s 30% and elsewhere it’s 10%. It’s still not that large yet and people are blowing it way out of proportion.”
According to Fink, plenty of firms, including BlackRock, are getting positive flows into active funds. “There’s room for both” types of investing, he says, though passive (or funds that track indexes like the S&P 500) will continue to see more money unless the fees for active come down.
Traditionally, active funds have a significant downside: their fees mean performance has to beat the market (or their particular benchmark) by at least the cost of investing to make active worthwhile. But if those fees come down, Fink says, active will get some of its mojo back.
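To put rough, purely illustrative numbers on it: an active fund charging 1% a year has to beat its benchmark by a full percentage point annually just to match an index fund charging around 0.05%.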
“Until many active managers lower their fees, so they can prove they can outperform [index funds] after expenses, they’ll see more inflows than the passive,” Fink said.
Part of the reason passive investing has become so popular, according to Fink, is that active managers are embracing passive investing in active portfolios. For example, an actively managed portfolio could be composed, at least partly, of passive funds. “That's another thing people are missing: You could actively manage and navigate those exposures by using passive instruments,” he said.
Misunderstandings and controversies aren't new to passive investing or BlackRock's relationship with it.
“When we bought BGI [Barclays Global Investors, acquiring the passive iShares business with it] in 2009, most people said that was a bad transaction, that nobody can have active and passive in the same organization,” he said. “We said why? Our clients have active and passive, why can't we offer products agnostically? That's actually worked.”
For all of these perceived “problems” that passive funds may have, the CEO of the world’s largest asset manager with over $6 trillion under management isn’t concerned.
“I don’t see a point at this time where it’s going to create a real problem,” Fink said. “And as a leading passive manager, we care a lot about this.” |
Angry anti-NSA hackers pwn Angry Birds site after GCHQ data slurp
Anti-NSA hackers defaced Rovio's official Angry Birds website on Tuesday night as a reprisal against revelations that GCHQ and the NSA were feasting on data leaked from the popular smartphone game.
Spying Birds: Angry Birds defaced by irritated hackers.
Angrybirds.com became "Spying Birds" as a result of the defacement (Zone-h mirror here). Rovio has confirmed the defacement, the International Business Times reports.
The Angrybirds.com website was back to normal by Wednesday morning. The defacement, which Zone-h has yet to confirm is genuine, must have been brief. Defacing a website is an act more akin to scrawling graffiti on a billboard put up by a company than breaking into its premises and ransacking its files.
It's unclear how the defacement was pulled off by a previously unknown hacker or defacement crew using the moniker “Anti-NSA hacker”.
"It’s not clear if Rovio’s web servers were compromised or if the hacker managed to hijack the firm’s DNS records and send visiting computers to a third party site carrying the image instead," writes security industry veteran Graham Cluley.
"Whatever the details of how the hack was perpetrated, it appears to have only been present for a few minutes and the company made its website unavailable for 90 minutes while it confirmed that its systems were now secured," he added.
Files leaked by NSA whistleblower Edward Snowden showed the NSA and GCHQ were slurping data from smartphone apps to harvest all manner of personal information from world+dog. This information includes users' locations, their political beliefs and even their sexual preferences.
Angrybirds.com was used as a "case study" in the leaked files, hence the hackers' focus on Rovio – even though a great number of smartphone apps from other developers are involved in the dragnet surveillance program.
Rovio issued a statement in the wake of the revelations saying that it "does not share data, collaborate or collude with any government spy agencies such as NSA or GCHQ anywhere in the world," and suggesting that third-party ad networks may be to blame for the leak.
The alleged surveillance may be conducted through third party advertising networks used by millions of commercial web sites and mobile applications across all industries. If advertising networks are indeed targeted, it would appear that no internet-enabled device that visits ad-enabled web sites or uses ad-enabled applications is immune to such surveillance. Rovio does not allow any third party network to use or hand over personal end-user data from Rovio’s apps.
These arguments cut little ice with privacy activists who pointed out that whether the data leaked from Rovio or ad networks was immaterial: how the companies make their money and deliver their technology makes little difference in practice for users of their popular games because the end result is the same.
"The online advertising industry weakens Internet security, and has created pathways for hackers and gov agencies to exploit," said Christopher Soghoian, principal technologist of the speech, privacy & technology project at the American Civil Liberties Union, in an update to his personal Twitter account. ® |
AUSTIN, Texas—Robots and research during the day, barbacoa and bands at night. South by Southwest may not deliver the product announcements of CES or the in-depth technical analysis of Google I/O or WWDC-types, but Austin's contribution to the tech calendar remains perhaps the most unique annual event on the Ars radar.
The 2018 event only emphasized this. SXSW remains the place where you may decide to check out the Westworld showrunners talking about season two when all of a sudden Elon Musk shows up, Thandie Newton (Mae) reveals her deep passion for improving life in the Congo, and everyone starts discussing how to inspire humanity. Or, you might decide to check out the much-hyped Ready Player One "Oasis" experience only to discover HTC quietly snuck its newest headset iteration in, and you can finally comfortably wear your eyeglasses while gaming.
We came across an arthouse film festival that rivals Sundance or Cannes, only it takes place in virtual reality here. There were Batmobiles, European comment section magic, rocket engine test fire, and Mark freakin' Hamill—and most of that happened before a single band took the stage.
As the event rolled along, we also took a brief gander through SXSW's gaming show floor. SXSW Gaming, which is still only a few years young, continues to have a regional, PAX-like feel. About midway through the week, suddenly anyone with a gaming itch can find local indies, playable retro games, massive merch booths, and a fair share of free-play zones for card and computer fans alike. Unlike those more established shows, however, this one also had a massive Bob Ross presence. His foundation came on site to advocate for kids' art by teaching them how to paint happy little trees.
We'll continue to have stories from Texas Hill Country in the near future (because on top of all the stuff mentioned above, we also saw a ton of films). But for now, have a look at some of our favorite sights above and below to get a sense for how eclectic (err, we should say "weird" right?) Austin remains after all these years.
|
Mylan and Fujifilm Kyowa Kirin Biologics said on Thursday they had won a European Commission green light to market their version of the injectable medicine, known as Hulio. They intend to launch it in Europe on or after Oct. 16, when AbbVie’s primary European patent on Humira expires.
The large number of Humira copies reflects intense rivalry for a slice of a huge market as demand for so-called biosimilars takes off in Europe, where adoption of the cut-price products has been much faster than in the United States.
Europe accounted for around $4.4 billion of Humira's global sales in the 12 months to June 30, 2018, according to healthcare data consultancy IQVIA.
Amgen, Novartis’s generics wing Sandoz, South Korea’s Samsung Bioepis and Germany’s Boehringer Ingelheim have already won approval for four other biosimilars to Humira.
Humira is used to treat a range of conditions including rheumatoid arthritis, Crohn’s disease, ulcerative colitis and psoriasis.
Its commercial success and popularity among patients means it has become a major cost for health systems across Europe, and health administrators say they will waste no time in exploiting the arrival of cheaper biosimilars to drive down bills.
Because injectable biologic drugs such as Humira are made in living cells, they cannot be exact replicas of the original medicine, so regulators have come up with the notion of biosimilars – drugs that are similar enough to do the job.
The conventional wisdom has been that biosimilar uptake would be slow and price discounts modest, since these products are expensive to develop and doctors may be wary about using a medicine that isn’t identical to the original.
But Europe’s recent experience with the first wave of biosimilar antibody drugs – the biggest section of the biologic market – has upended expectations, suggesting AbbVie will face fierce competition.
Still, analysts don’t expect global Humira sales to fall off a cliff just yet, since there are delays in the arrival of biosimilars in the all-important U.S. market.
While expiry of the Humira patent opens the door to biosimilars in Europe, such copies are not expected to launch in the United States until 2023. |
Turbines have been around for a long time—windmills and water wheels are early examples. The name comes from the Latin turbo, meaning vortex, and thus the defining property of a turbine is that a fluid or gas turns the blades of a rotor, which is attached to a shaft that can perform useful work.
When he went to switch on his rotary engine again, the Le Rhone refused to pick up. Nothing happened! The propeller simply windmilled in the slip stream. Garros knew immediately what was wrong and cursed himself for his imbecility.
2004, Deborah Bedford, If I Had You:
The propeller windmilled in front of them. Creede tried to start the engine. It growled like something angry, died away. "We're ... gonna have to ... ride this thing ... to the ground."
2006, James R. Hansen, First Man: The Life of Neil A. Armstrong, page 134:
[...] the propeller blade on number-four engine windmilled in the air stream. "I wasn't too concerned about it, really," recalls Butchart. "B-29 engines are not all that dependable." |
Pokemon go
More than a year after its initial launch, Pokemon Go is often remembered for its rabid players that overwhelmed parks and swarmed streets looking for cute little pocket monsters. But people tend to forget that Pokemon Go was also many people's first experience with augmented reality. And while today's trainer count is down from peak numbers last summer, Pokemon Go creator Niantic says the game's augmented reality features were noticeably improved thanks to an integration with Apple's ARKit on iOS.
Cast your minds back to ye olden days of summer 2016. Suicide Squad was stinking up cinemas everywhere, Portugal were about to win Euro 2016, and we were still months away from a certain Home Alone 2 cast member taking the White House. Also, tonnes of folk were playing Pokémon GO... and apparently wrecking shit while doing so.
Pokémon Go is still a thing, even though a very large portion of the initial playerbase gave up a long time ago. It's going strongly enough that Niantic has launched a brand new competition, asking players to submit their best AR photos from within the game.
Since Pokemon Go's launch in July last year, the Augmented Reality phenomenon has been downloaded over 750 million times and made more than £900 million. That's not a typo. While more than 80 per cent of players (myself included) haven't opened the app in months, 60 million people are still playing today.
Today marks a year to the day since Pokémon Go took the world by storm. Initially restricted to Americans, British trainers spent the first week installing workarounds to get their hands on the game.
Ruslan Sokolovsky, a blogger who was arrested for playing Pokémon Go in a Russian church, was found guilty today of charges ranging from “violating religious feelings” to illegal possession of a pen that contained a video camera.
Whether you love or hate the yearly parade of daft ideas and 'new products' for April Fools' Day, it's worth remembering that it falls on a Saturday this year. Which means that a lot of companies are releasing theirs today.
It's all happening in Pokémon Go. Developer Niantic announced yesterday that an unexpected Water Festival would be kicking off at 8pm UK time, and it lasts until 8pm on the 29th. During the festival, you've got more chance of running into water 'mon like Magikarp, Squirtle, Totodile and so on – and you've got a better chance of finding Gen 2 water species in watery areas too.
There's no doubt that some people take Pokémon Go a bit too seriously - me included - but a player in Singapore has possibly set a new record by dying of a heart attack shortly after catching a Lapras and Granbull.
Earlier this week, Niantic added 80 new Johto-region Pokemon to the Pokemon GO lineup — and if the proliferation of lures in my local area is any indication, a lot of players are back in the game. Along with the new Pokemon, Niantic has also added a handful of evolutionary items. Here's everything you need to know.
If you've been waiting for the second generation of Pokémon to hit PoGo, you're in luck: this week sees a huge overhaul of the app, including over 80 new 'mon and a whole load of new features.
For those of us still playing Pokémon Go, the remaining blank spaces in the Pokédex are becoming vexing. Until the next generation of 'mon is added to the game - or they let us trade - most dedicated players have just a few empty spaces left and next to no way of filling them. |
This is the first of a new series from MyCrypto where we shine the spotlight on one of our team members to introduce them to our community and learn a bit more about them.
Hey Michael! Thanks for taking time out of your day to answer a few questions.
Michael “blurpesec” Hahn
What is your role at MyCrypto?
My role is to provide support and education for users. On a typical day I would provide support to users on Reddit, and Twitter, as well as through email ([email protected]) on anything ranging from security issues to potential bugs or problems with sending tokens or ETH, update our knowledge base, keep my eye out for recent security trends and interact with other people on our team to provide feedback for ongoing development tasks. I also interact with our partners to increase the fluidity of their product’s integration on our website. Since I am interested in security, I also try to contribute to the security tools that we help sponsor and update (EtherAddressLookup, EtherSecurityLookup and EtherScamDB).
What initially got you into cryptocurrency?
Initially, I got into cryptocurrency because I was looking into becoming a security professional (I study Information Tech. in school) and I stumbled into an ongoing privacy debate about the importance of individual privacy and how it has consistently been violated by governments and large tech organizations without many people even realizing. I read about a project that had the idea to create a peer-to-peer cash that would allow for some level of anonymization.
I think people often misunderstand what innovation Bitcoin provided. The true innovation that Bitcoin provides is in the form of a method for consensus. The software that everyone runs has to be agreed upon by the people running it. Otherwise, the software splits with many people using different sets of software (different blockchains). This means that for the first time ever, every user has the ability to complete monetary-based transactions and interactions with others, while also only giving up the amount of data about themselves that they're comfortable with. If someone is uncomfortable with the level of privacy in Bitcoin, then they can use Monero which boasts a higher level of user privacy. This interest eventually led me to Ethereum, which I think has the widest set of potential use-cases, especially where it relates to humanitarian use cases.
What did you do before you joined MyCrypto?
Before I joined MyCrypto I was a student studying IT. I’ve worked in support helpdesk-type roles as well as in a systems administrator role.
What do you love about what you do?
I love that I have the opportunity to spend my time learning and teaching others about at least one of my interests. I also love that I am able to work on projects that interest me, with people who have some similar interests.
What’s challenging about your job?
MyCrypto is a startup, and in startups, you need to be flexible. Some days I’ll work on providing support for users all day, some days I am trying to code, sometimes I am working with our team on approaches to security issues that pop up. The most challenging part about my job is trying to balance spending time on exploring my interests in my job, and spending time with my family and friends outside of work.
What’s something you’d like to learn more about?
I’m trying to get into a more software development-type role that balances security and usability. I want to build things that people will use, and I want to learn more about the cryptography and the security behind securing tools for people to use.
I want to build things that people will use, and I want to learn more about the cryptography and the security behind securing tools for people to use.
What’s one of your favorite teams/projects/cryptocurrencies in the space? Why?
My favorite project would have to be Giveth (https://giveth.io) as it is trying to open-source the giving process. Giveth has also contributed much to the Ethereum ecosystem’s smart contract usage, security, and scalability. I also really like anyone who is working to provide security for people in the ecosystem, especially when it comes to people/teams that aren’t trying to make a large for-profit venture out of it.
What’s the number one thing that you would like the cryptocurrency community to hear?
Losing your funds to speculation is not nearly as bad as losing them to a relatively preventable hack or phish. Be careful with your funds, or you will lose them. We track dozens of hacking/phishing campaigns every single day that are being created to exploit your inattention to security.
More About Michael
What do you enjoy doing when you’re not working?
I spend my free time researching things that interest me: futurism, security, personal finance, technology, sociology, space, and economics. I've been known to spend hours watching documentaries about engineering and space, as well as biology, alternative energy, cryptography, privacy, personal finance and economics, and social movements.
Where is your favorite place you’ve lived? Where do you live now?
I’ve only ever lived in Florida, in the US, but I want to move to the Pacific NW. Maybe Oregon or Washington states.
What makes you want to live in the Pacific Northwest?
The climate is significantly better than where I'm at in Florida (dry vs. wet), the culture is different (older vs. younger), it's closer to more tech-focused cities, and there are forests.
What’s your highest level of non-computer-based achievement?
I’ve found a good network of friends. I’m not really sure that qualifies as an achievement. If you emulate the type of person that you want to be friends with, those types of people will naturally find their way into your life
What’s an interesting random thing about you?
I enjoy cooking, much to my girlfriend’s delight. I’m also terrified of clowns and spiders.
What’s one of your favorite dishes to cook?
I enjoy making a pork roast in the oven, but I’m also a fan of steak and asparagus over fire or charcoal.
Thanks for sharing a bit about yourself with us today; it's great to get to know you a bit more. If someone wanted to get in touch with you or follow you, where should they go?
Message me on reddit: https://www.reddit.com/user/blurpesec/
Or follow me on twitter: https://twitter.com/blurpesec
Thanks Michael! Before we end it, share a picture of yourself doing something fun, your pet, etc.
This is my pupper, Abby. She's an old pup, but I enjoy hanging out with her!
Rally to Support Writers Fired at DNAinfo and Gothamist
NEW YORK, NY (November 6, 2017) – “We come not to mourn, but to organize.” That’s how Writers Guild of America East (WGAE) Executive Director Lowell Peterson kicked off today’s rally in support of DNAinfo and Gothamist writers who were fired after voting to join the union. Right-wing billionaire owner Joe Ricketts announced that he was shutting down the news sites just days after the writers had voted to unionize. And Ricketts was not bashful about stating that the union vote was the reason he was shutting them down.
NWU joined with WGAE, SAG-AFTRA, SEIU Local 32BJ, Public Advocate Letitia James, Comptroller Scott Stringer and almost the entire Progressive Caucus of the New York City Council. Peterson added, “Joe Ricketts needs to come clean. Did he violate the law by firing people because they exercised their right to unionize? Or did he decide the business wasn’t generating enough profit, and then try to score some ideological points by blaming his employees and their union? Either way, these journalists do work that is essential to communities across New York, and the WGAE will make sure their rights are protected and their voices are heard.”
After two years of construction and many more years of discussion, MAX is ready for Fort Collins.
But is Fort Collins ready for MAX? We'll find out Saturday when the bus rapid transit system begins service.
After a day off – there's no Sunday service for the time being – service begins in earnest Monday.
Here's what you need to know about MAX:
So … what is it, again?
MAX is a bus rapid transit system similar to light rail but on rubber wheels. It is designed to connect the southern part of the city to downtown along the "spine" of Fort Collins west of College Avenue and parallel to the BNSF Railway tracks.
It will serve key destinations such as Old Town, Colorado State University main campus, the CSU veterinary hospital, Midtown and Foothills Mall, and the new South Transit Center.
MAX is the largest public infrastructure project in Fort Collins history. It also is the first bus rapid transit system in Colorado.
Can I bring my bike on MAX?
Yes: Each MAX bus has racks for three bikes. More bikes will be allowed on at the driver's discretion.
How much did the MAX system cost?
About $86.8 million, with the majority of funds coming from the Federal Transit Administration. Local funding partners were the Colorado Department of Transportation, CSU, the Downtown Development Authority, and the city of Fort Collins.
Myriad activities are planned Saturday to launch MAX bus rapid-transit service in Fort Collins. Here's what you need to know about first-day festivities:
• When it starts: Federal, state and city dignitaries will cut a ceremonial ribbon at 10 a.m., at the newly constructed South Transit Center, 4915 Fossil Blvd. The first northbound bus will leave the center at 11 a.m., with the first southbound bus departing the Downtown Transit Center at 11:10 a.m.
• What it costs: Celebratory events are free to attend, and MAX will be free to ride until Aug. 24. All Transfort routes will be free to ride on Saturday. After Aug. 24, regular Transfort rates will apply to MAX service: $1.25 for adults, 60 cents for seniors, free for Colorado State University students and those younger than 18.
• How to get there: While city officials encourage the public to walk, bicycle or take Transfort to MAX, there's still room for motorists to take part. Park-and-ride lots dot the MAX route. Transfort shuttle service will be provided from 9:30-11 a.m. from the Mall Transfer Point on Stanford Road to the South Transit Center. Bicycle racks are available at every station.
• Station parties: From 11 a.m. to 2 p.m., both transit centers and the 12 MAX stations will play host to a variety of entertainment and giveaways. The city plans six larger station parties and eight welcome stations, with ambassadors located along the route to answer questions and direct people to additional parties.
• People can pick up a "MAX Premiere Passport" and have it stamped at each station to be eligible to win a prize and receive a gift at either transit center. Those who show a Transfort ambassador at either transit center that they've downloaded the Ride Transfort mobile app will also receive a gift.
GDC 2001: CMX 2002 Hands-on
Doesn't quite have the graphical panache of ATV yet, but the physics are pretty damn tight.
By IGN Staff
One of the few playable games for PlayStation 2 at the Game Developers Conference was THQ's upcoming CMX 2002. The game was obviously still pretty early, so the graphics were a bit unimpressive; the dirt effects and tracks didn't look to be in the same class as Rainbow Studios' ATV Offroad Fury.
However, the game showed some promise because it offered up solid control and excellent racing physics. You could actually feel your rear tire slipping on some turns, and the general reactions of the bikes were right on. The game still needs a lot of work, but it holds a lot of potential.