In efforts to determine how Antarctica is changing, whether due to natural or human-produced climate change, scientists use satellite and radar technologies to monitor the height and thickness of the continent's ice shelves. How are global warming and sea temperature changes affecting the thickness of these massive floating ice blocks?
The height changes due to climate can be very small, perhaps only an inch or so per year. In contrast, the ocean tides that flow underneath ice shelves can push them up and down by several feet over the course of a day, and this large effect can make it difficult to measure the small climate-related changes with satellites.
Now, researchers at Scripps Institution of Oceanography at the University of California, San Diego, and Earth and Space Research of Seattle have measured Antarctic ice shelf tides from space for the first time. Through their research, the effect of tides can be removed more accurately and thus climate-related changes can be tracked more closely.
Helen Amanda Fricker of Scripps tapped information from the European Space Agency's European Remote Sensing (ERS) Satellite, which beamed radar signals to the Antarctic surface.
Every 35 days, as the satellite orbited over Antarctica, the radar signals would hit the ice shelves and bounce back to the satellite, allowing scientists to calculate how the height of the ice shelves was changing. On floating ice, surface height can be used to estimate the ice thickness.
Fricker's information was combined with calculations for Antarctic tides developed by Laurie Padman of Earth and Space Research, together setting the groundwork for a clear measurement of how the ice shelves change.
"Ice shelves are floating ice blocks, so if the ocean underneath them is warming, it will increase the melting under the ice shelves and the ice is going to get thinner," said Fricker, of the Cecil H. and Ida M. Green Institute of Geophysics and Planetary Physics at Scripps.
"Antarctic ice shelves can be sensitive areas in terms of climate change. We want to monitor their thickness and see if they're in steady-state or whether they are changing with time because of changes in climate."
Fricker said the ice shelves can play a critical role in buttressing, or holding back, ice from detaching from the Antarctic continent. Removing them, she said, may increase the flow of ice off the continent.
"As that ice melts, it will increase sea level around the world. It's important to monitor not only the grounded ice on the continent and how that's changing, but the floating ice as well," said Fricker. "To do this, we need accurate repeat measurements of ice shelf height and we have to remove the tidal signal because that will mask the true ice shelf elevation."
Fricker and Padman's analysis served as a successful "proof of concept" for upcoming studies investigating Antarctic ice shelves and climate change. The collaborative study, published in a recent issue of Geophysical Research Letters (GRL), details their analysis of eight years' worth of ERS information using radar altimeter data concentrated on the 500-mile-wide Filchner-Ronne Ice Shelf in Antarctica's Weddell Sea.
"This was a first attempt," said Padman. "Now that we have these results we are encouraged to improve our model of tides by using more sophisticated analysis techniques and combining the new data with numerical models based on the physics of ocean tides."
The next step will take the form of a new satellite called ICESat being prepared by NASA for launch later this year. A new instrument on ICESat, the Geoscience Laser Altimeter System (GLAS), will be the first to measure ice shelves using a sophisticated space-based laser instrument.
GLAS will beam laser pulses 40 times per second, from approximately 400 miles above the Earth's surface, and time each pulse to determine the surface height with an accuracy of better than six inches. Over time this will result in a determination of the surface height change with an accuracy of better than half an inch per year.
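The height retrieval itself rests on simple pulse timing: the one-way range is half the round-trip travel time multiplied by the speed of light, and the surface height follows from the satellite's known altitude. The sketch below illustrates only this timing principle; the altitude, timing value, and function name are illustrative examples, not the actual GLAS/ICESat processing chain.

```python
# A minimal sketch of the pulse-timing principle described above. The numbers
# and names are illustrative, not the real GLAS/ICESat processing chain.

SPEED_OF_LIGHT = 299_792_458.0  # metres per second

def surface_height(satellite_altitude_m: float, round_trip_time_s: float) -> float:
    """Height of the reflecting surface above the reference level."""
    one_way_range = SPEED_OF_LIGHT * round_trip_time_s / 2.0  # distance from satellite to surface
    return satellite_altitude_m - one_way_range

# Illustrative numbers: a satellite 600 km above the reference level and a
# pulse that returns after roughly 4 milliseconds.
print(surface_height(600_000.0, 0.0040))
```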
"GLAS will be the first spaceborne laser altimeter to cover Antarctica. It will have a much smaller footprint on the ground than the radar altimeter and be able to give us much more accurate measurements than ERS," said Fricker.
Variability In West Antarctic Ice Streams Normal
University Park - Jul 26, 2002
Variability in the speed of the ice streams along the Siple Coast of West Antarctica is not an indication the ice sheet is stabilizing, but rather, that capriciousness in the ice streams, their rates and the location of the grounding line is normal and will continue to occur, according to Penn State geoscientists.
Basic Properties of Materials
In these activities students are encouraged to identify and use the vocabulary that describes the properties of objects. They will compare objects, select appropriate objects to build items, and design their own objects based on their knowledge of the properties of materials. The objectives are:
1. To recognize, understand and utilize the language that describes the properties of objects.
2. To compare objects based on their properties.
3. To select appropriate materials, based on their properties, that can be used to build items with specific purposes.
4. To design and create items using appropriate materials.
File Type: SMART Table activity pack
Date submitted: April 17, 2012
Search terms: plastic, junk, ball, hard, paper, toy, make, glass, wood, sort, thing, water proof, soft, stone, question, light, observe, bag, creative, flexible, item, metal, rough, design, bendable, smooth, create, clue, object, choose, guess, cloth, drawer, compare, strong, transparent, edible, heavy
Publisher Name: DigiXL
Pythagoras often receives credit for the discovery of a method for calculating the measurements of triangles, which is known as the Pythagorean theorem. However, there is some debate as to his actual contribution to the theorem.
Pythagoras was an Ionian Greek philosopher, mathematician and religious scholar. His greatest contribution to mathematics is the Pythagorean theorem. The theorem states that in a right triangle, the area of the square on the hypotenuse, which is the side across from the right angle, is equal to the sum of the areas of the squares on the other two sides. Despite the theorem bearing his name, Pythagoras was not the first person to use this calculation. This computation was in use in Mesopotamia and India long before Pythagoras lived. There is some speculation that Pythagoras and his students are responsible for the first proof of the theorem. However, given that it was the nature of Pythagoras' students to attribute everything to their teacher, it is unclear if Pythagoras himself ever worked on the proof.
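In modern notation, with legs a and b and hypotenuse c, the theorem reads as follows; the worked numbers are simply the familiar 3-4-5 example, added here for reference:

```latex
a^2 + b^2 = c^2, \qquad \text{e.g. } 3^2 + 4^2 = 9 + 16 = 25 = 5^2 .
```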
Besides mathematics, Pythagoras made contributions to religion and music. Pythagoras and his followers believed that souls did not die but went through a cycle of rebirth that ended when purity of life was obtained. Pythagoras' beliefs placed great emphasis on a lifelong search for salvation. Pythagoras might also be responsible for an understanding of string length in relation to tone in musical instruments.
Evaluating sin(arctan x) is a simple process that involves two steps: using a right-angled triangle to label the two sides and the angle in question, which is x, and using the Pythagorean theorem to calculate the remaining side, then evaluating the function from these values. Writing out the expression in words is the starting point of evaluating it. In this case, it is the sine of arctan x.
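As a short worked version of those two steps, added for illustration: let θ = arctan x and take a right triangle with opposite side x and adjacent side 1, so that tan θ = x; the Pythagorean theorem then gives the hypotenuse, and the sine follows:

```latex
\theta = \arctan x, \qquad \tan\theta = \frac{x}{1} = x, \qquad
\text{hypotenuse} = \sqrt{1 + x^2}, \qquad
\sin(\arctan x) = \frac{x}{\sqrt{1 + x^2}} .
```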
The tangent-secant theorem states that if two secant segments share an endpoint outside of a circle, the product of one segment length and the length of its external segment equals the product of the other segment's length and the length of its external segment. It is a special subtype of another mathematical theorem: the power of a point theorem.
Successfully working through trigonometry problems requires knowledge of the properties of triangles as well as the ability to measure and understand the ratios called sine, cosine and tangent. Using equations associated with the ratios, it is possible to find the angles and side lengths of right-angled triangles.
Simplifying trigonometric expressions is a matter of understanding the circles and triangles upon which trigonometry is based. While much of the simplification can be done geometrically, knowledge of trigonometric identities will allow an algebraic solution.
Huntington's disease is an inherited disease that causes the progressive breakdown (degeneration) of nerve cells in the brain. Huntington's disease has a broad impact on a person's functional abilities and usually results in movement, thinking (cognitive) and psychiatric disorders.
Most people with Huntington's disease develop signs and symptoms in their 40s or 50s, but the onset of disease may be earlier or later in life. When disease onset begins before age 20, the condition is called juvenile Huntington's disease. Earlier onset often results in a somewhat different presentation of symptoms and faster disease progression.
Medications are available to help manage the symptoms of Huntington's disease, but treatments can't prevent the physical, mental and behavioral decline associated with the condition.
Huntington's disease usually causes movement, cognitive and psychiatric disorders with a wide spectrum of signs and symptoms. Which symptoms appear first varies greatly among affected people. During the course of the disease, some disorders appear to be more dominant or have a greater effect on functional ability.
Impairments in voluntary movements — rather than the involuntary movements — may have a greater impact on a person's ability to work, perform daily activities, communicate and remain independent.
Other common psychiatric disorders include:
Other changes in mood or personality, but not necessarily specific psychiatric disorders, may include:
Symptoms of juvenile Huntington's disease
When to see a doctor
Huntington's disease is caused by an inherited defect in a single gene. Huntington's disease is an autosomal dominant disorder, which means that a person needs only one copy of the defective gene to develop the disorder.
With the exception of genes on the sex chromosomes, a person inherits two copies of every gene — one copy from each parent. A parent with a defective Huntington gene could pass along the defective copy of the gene or the healthy copy. Each child in the family, therefore, has a 50 percent chance of inheriting the gene that causes the genetic disorder.
Autosomal dominant inheritance pattern
In an autosomal dominant disorder, the mutated gene is a dominant gene located on one of the nonsex chromosomes (autosomes). You need only one mutated gene to be affected by this type of disorder. A ...
If one of your parents has Huntington's disease, you have a 50 percent chance of developing the disease. In rare cases, you may develop Huntington's disease without having a family history of the condition. Such an occurrence may be the result of a genetic mutation that happened during your father's sperm development.
After the onset of Huntington's disease, a person's functional abilities gradually worsen over time. The rate of disease progression and duration varies. The time from disease onset to death is often about 10 to 30 years. Juvenile onset usually results in death in fewer than 15 years.
The clinical depression associated with Huntington's disease may increase the risk of suicide. Some research suggests that the greater risk of suicide occurs before a diagnosis is made and in middle stages of the disease when a person has begun to lose independence.
Eventually, a person with Huntington's disease requires help with all activities of daily living and care. Late in the disease, he or she will likely be confined to a bed and unable to speak. However, a person's understanding of surroundings and interactions remains intact for a long time.
Common causes of death include:
Preparing for your appointment
If you have any signs or symptoms associated with Huntington's disease, you'll likely be referred to a neurologist after an initial visit to your family doctor.
A review of your symptoms, mental state, medical history and family medical history can all be important in the clinical assessment of a potential neurological disorder.
What you can do
You may want a family member or friend to accompany you to your appointment. This person can provide support and offer a different perspective on the effect of symptoms on your functional abilities.
What to expect from your doctor
Tests and diagnosis
A diagnosis of Huntington's disease is based primarily on your answers to questions, a general physical exam, a review of your family medical history, and neurological and psychiatric examinations.
Brain imaging and function
Genetic counseling and testing
The test won't provide information that is beneficial in determining a treatment plan.
Before undergoing such a test, consider seeing a genetic counselor, who can explain the benefits and drawbacks of learning test results.
Predictive genetic test
Some people may elect to do the test because they find it more stressful not knowing. Others may want to take the test before they make decisions about having children. Risks may include problems with insurability or future employment and the stresses of facing a fatal disease. These tests are only performed after consultation with a genetic counselor.
Treatments and drugs
No treatments can alter the course of Huntington's disease. But medications can lessen some symptoms of movement and psychiatric disorders. And multiple interventions can help a person adapt to changes in his or her abilities for a certain amount of time.
Medication management is likely to evolve over the course of the disease, depending on the overall treatment goals. Also, drugs to treat some symptoms may result in side effects that worsen other symptoms. Therefore, the treatment goals and plan will be regularly reviewed and updated.
Medications for movement disorders
Medications for psychiatric disorders
Instruction on appropriate posture and the use of supports to improve posture may help lessen the severity of some movement problems.
When the use of a walker or wheelchair is required, the physical therapist can provide instruction on appropriate use of the device and posture. Also, exercise regimens can be adapted to suit the new level of mobility.
Lifestyle and home remedies
Managing Huntington's disease is demanding on the person with the disorder, family members and other in-home caregivers. As the disease progresses, the person will become more dependent on caregivers. A number of issues will need to be addressed, and strategies to cope with them will evolve.
Eating and nutrition
Eventually, a person with Huntington's disease will need assistance with eating and drinking.
Managing cognitive and psychiatric disorders
Coping and support
A number of strategies may help people with Huntington's disease and their families cope with the challenges of the disease.
Planning for residential and end-of-life care
Creating legal documents that define end-of-life care can be beneficial to everyone. They empower the person with the disease, and they may help family members avoid conflict late in the disease progression. Your doctor can offer advice on the benefits and drawbacks of care options at a time when all choices can be carefully considered.
Matters that may need to be addressed include:
People with a known family history of Huntington's disease are understandably concerned about whether they may pass the Huntington disease gene on to their children. These people may consider genetic testing and family planning options.
If an at-risk parent is considering genetic testing, it can be helpful to meet with a genetic counselor. A genetic counselor will discuss the potential risks of a positive test result, which would indicate the parent will develop the disease. Also, couples will need to make additional choices about whether to have children or to consider alternatives, such as prenatal testing for the gene or in vitro fertilization with donor sperm or eggs.
Another option for couples is in vitro fertilization and preimplantation genetic diagnosis. In this process, eggs are removed from the ovaries and fertilized with the father's sperm in a laboratory. The embryos are tested for presence of the Huntington gene, and only those testing negative for the Huntington gene are implanted in the mother's uterus.
In vitro fertilization
With in vitro fertilization, the doctor uses a needle to remove eggs from the ovary (A). The eggs and sperm are combined in a petri dish (B) and placed in an incubator (C). If fertilization occurs, ...
Last Updated: 2011-05-05
Spanish verbs are one of the more complex areas of Spanish grammar. Spanish is a relatively synthetic language with a moderate to high degree of inflection, which shows up mostly in Spanish verb conjugation.
As is typical of verbs in virtually all languages, Spanish verbs express an action or a state of being of a given subject, and like verbs in most Indo-European languages, Spanish verbs undergo inflection according to the following categories:
- Tense: past, present, or future
- Number: singular or plural
- Person: first, second or third
- T–V distinction: familiar or respectful
- Mood: indicative, subjunctive, or imperative
- Aspect: perfective aspect or imperfective aspect (distinguished only in the past tense as preterite or imperfect)
- Voice: active or passive
The modern Spanish verb system has sixteen distinct complete paradigms, i.e., sets of forms for each combination of tense and mood (tense refers to when the action takes place, and mood or mode refers to the attitude of the speaker, e.g., certainty vs. doubt), plus one incomplete paradigm (the imperative), as well as three non-temporal forms (infinitive, gerund, and past participle).
The fourteen regular tenses are also subdivided into seven simple tenses and seven compound tenses (also known as the perfect). The seven compound tenses are formed with the auxiliary verb haber followed by the past participle. Verbs can be used in other forms, such as the present progressive, but in grammar treatises that is not usually considered a special tense but rather one of the periphrastic verbal constructions.
In Old Spanish there were two tenses (simple and compound future subjunctive) that are virtually obsolete today.
Spanish verb conjugation is divided into four categories known as moods: indicative, subjunctive, imperative, and the traditionally so-called infinitive mood (newer grammars in Spanish call it formas no personales, "non-personal forms"). This fourth category contains the three non-finite forms that every verb has: an infinitive, a gerund, and a past participle (more exactly, a passive perfect participle). The past participle can agree in number and gender just as an adjective can, giving it four possible forms.
There is also a form traditionally known as the present participle (e.g., cantante, durmiente), but this is generally considered a separate word derived from the verb, rather than an inherent inflection of the verb, because (1) not every verb has this form and (2) the way in which the meaning of the form is related to that of the verb stem is not predictable. Some present participles function mainly as nouns (typically, but not always, denoting an agent of the action, such as amante, cantante, estudiante), while others have a mainly adjectival function (abundante, dominante, sonriente), and still others can be used as either a noun or an adjective (corriente, dependiente). Unlike the gerund, the present participle takes the -s ending for agreement in the plural.
Many of the most frequently used verbs are irregular. The rest fall into one of three regular conjugations, which are classified according to whether their infinitive ends in -ar, -er, or -ir. (The vowel in the ending—a, e, or i—is called the thematic vowel.) The -ar verbs are the most numerous and the most regular; moreover, new verbs usually adopt the -ar form. The -er and -ir verbs are fewer, and they include more irregular verbs. There are also subclasses of semi-regular verbs that show vowel alternation conditioned by stress. See "Spanish irregular verbs".
See Spanish conjugation for conjugation tables of regular verbs and some irregular verbs.
- 1 Accidents of a verb
- 2 Verbal conjugations in Spanish
- 2.1 The indicative
- 2.1.1 Simple tenses (tiempos simples)
- 2.1.2 Compound tenses (tiempos compuestos)
- 2.2 The conditional
- 2.3 The imperative
- 2.4 The subjunctive
- 2.4.1 Simple tenses (tiempos simples)
- 2.4.2 Compound tenses (tiempos compuestos)
- 2.5 Continuous tenses
- 3 Irregular verbs
- 4 Use of verbs
- 4.1 Contrasting simple and continuous forms
- 4.2 Contrasting the present and the future
- 4.3 Contrasting the preterite and the imperfect
- 4.4 Contrasting the preterite and the perfect
- 4.5 Contrasting the subjunctive and the imperative
- 4.6 Contrasting the present and the future subjunctive
- 4.7 Contrasting the preterite and the past anterior
- 4.8 Contrasting ser and estar
- 4.9 Contrasting haber and tener
- 4.10 Negation
- 4.11 Expressing movement
- 5 See also
- 6 References
- 7 External links
Accidents of a verb
A verbal accident is defined as one of the changes of form that a verb can undergo. Spanish verbs have five accidents. Every verb changes according to the following:
Person and number
Spanish verbs are conjugated in three persons, each having a singular and a plural form. In some varieties of Spanish, such as that of the Río de la Plata Region, a special form of the second person is used.
Because Spanish is a "pro-drop language", the subject pronoun is often omitted.
The grammatical first person refers to the speaker ("I"). The first person plural refers to the speaker together with at least one other person.
- (Yo) soy. "I am."
- (Nosotros/Nosotras) somos. "We are." The feminine form nosotras is used only when referring to a group that is composed entirely of females; otherwise, nosotros is used.
The grammatical second person refers to the addressee, the receiver of the communication ("you"). Spanish has different pronouns (and verb forms) for "you," depending on the relationship, familiar or formal, between speaker and addressee.
- (Tú) eres. "You are." Familiar singular; used when addressing someone who is of close affinity (a member of the family, a close friend, a child, a pet). Also the form used to address the deity.
- (Vos) sos. "You are." Familiar singular; generally used in the same way as tú. Its use is restricted to some areas of Hispanic America; where tú and vos are both used, vos is used to denote a closer affinity.
- (Usted) es. "You are." Formal singular; used when addressing a person respectfully, someone older, someone not known to the speaker, or someone of some social distance. Although it is a second-person pronoun, it uses third-person verb forms (and object pronouns and possessives) because it developed as a contraction of vuestra merced (literally, "your mercy" or "your grace").
- (Vosotros/Vosotras) sois. "You (all) are." Familiar plural; used when addressing people who are of close affinity (members of the family, friends, children, pets). The feminine form vosotras is used only when addressing a group composed entirely of females; otherwise, vosotros is used. Used primarily in Spain and Equatorial Guinea, though it may appear in old, formal texts from other countries, such as the Philippines, or in the initial line of the Argentine national anthem (“Oíd, mortales, el grito sagrado”).
- (Ustedes) son. "You (all) are." Formal plural where vosotros is used; both familiar and formal plural elsewhere. Where it is strictly formal, used when addressing people respectfully or addressing people of some social distance. Like usted, it uses third-person verb forms, for the same reasons.
The grammatical third person refers to a person or thing other than the speaker or the addressee.
- (Él) es. "He/it is." Used for a male person or a thing of masculine (grammatical) gender.
- (Ella) es. "She/it is." Used for a female person or a thing of feminine (grammatical) gender.
- (Ello) es. "It is." Used to refer to neuter nouns such as facts, ideas, situations, and sets of things; rarely used as an explicit subject.
- (Ellos) son. "They are." Used for a group of people or things that includes at least one person or thing of masculine (grammatical) gender.
- (Ellas) son. "They are." Used for a group of people or things that are all of feminine (grammatical) gender.
Grammatical mood is one of a set of distinctive forms that are used to signal modality. In Spanish, every verb has forms in three moods.
- Indicative mood: The indicative mood, or evidential mood, is used for factual statements and positive beliefs. The Spanish conditional, although semantically expressing the dependency of one action or proposition on another, is generally considered a tense of the indicative mood, because, syntactically, it can appear in an independent clause.
- Subjunctive mood: The subjunctive mood expresses an imagined or desired action in the past, present, or future.
- Imperative mood: The imperative mood expresses direct commands, requests, and prohibitions. In Spanish, using the imperative mood may sound blunt or even rude, so it is often used with care.
The tense of a verb indicates the time when the action occurs. It may be in the past, present, or future.
Impersonal or non-finite forms of the verb
The Spanish non-finite verb forms refer to an action or state without indicating the time or the person. Spanish has three impersonal forms.
The infinitive is generally the form found in dictionaries. It corresponds to the English "base-form" or "dictionary form" and is usually indicated in English by "to _____" ("to sing," "to write," etc.). The ending of the infinitive is the basis of the names given in English to the three form classes of Spanish verbs:
- "-ar verbs" (primera conjugación ["first conjugation"])
- Examples: hablar ("to speak"); cantar ("to sing"); bailar ("to dance")
- "-er verbs" (segunda conjugación ["second conjugation"])
- Examples: beber ("to drink"); leer ("to read"); comprender ("to understand")
- "-ir verbs" (tercera conjugación ["third conjugation"])
- Examples: vivir ("to live"); sentir ("to feel"); escribir ("to write")
Although in English grammar the gerund refers to the -ing form of a verb used as a noun, in Spanish the term refers to a verb form that behaves more like an adverb.
- For -ar verbs, the ending is -ando.
- Examples: hablando ("speaking"); cantando ("singing"); bailando ("dancing")
- For -er verbs, the ending is -iendo.
- Examples: bebiendo ("drinking"); leyendo (with spelling change; "reading"); comprendiendo ("understanding")
- For -ir verbs, the ending is also -iendo.
- Examples: viviendo ("living"); sintiendo (with stem-vowel change; "feeling"); escribiendo ("writing")
The past participle corresponds to the English -en or -ed form.
- For -ar verbs, the ending is -ado.
- Examples: hablado ("spoken"); cantado ("sung"); bailado ("danced")
- For -er verbs, the regular ending is -ido.
- Examples: bebido ("drunk"); leído (requires accent mark; "read"); comprendido ("understood")
- For -ir verbs, the regular ending is also -ido.
- Examples: vivido ("lived"); sentido ("felt"); hervido ("boiled")
The past participle, ending invariably in -o, is used following a form of the auxiliary verb haber to form the compound or perfect: (Yo) he hablado ("I have spoken"); (Ellos) habían hablado ("They had spoken"); etc.
When the past participle is used as an adjective, it agrees with the noun that it modifies—for example, una lengua hablada en España ("a language spoken in Spain").
The past participle, similarly agreeing with the subject of ser or estar, can be used to form, respectively, the "true" passive voice (e.g., Los platos fueron preparados en la mañana ["The dishes were prepared in the morning"]) or the "passive of result" (e.g., Los platos ya están preparados ["The dishes are already prepared"]).
In grammar, the voice of a verb describes the relationship between the action (or state) that the verb expresses and the participants identified by its arguments (subject, object, etc.). When the subject is the agent or doer of the action, the verb is in the active voice. When the subject is the patient, target, or undergoer of the action, it is said to be in the passive voice.
Verbal aspect marks whether an action is completed (perfect), a completed whole (perfective), or not yet completed (imperfective).
- Perfect: In Spanish, verbs that are conjugated with haber ("to have [done something]") are in the perfect aspect.
- Perfective: In Spanish, verbs in the preterite are in the perfective aspect.
- Imperfective: In Spanish, the present, imperfect, and future tenses are in the imperfective aspect.
Verbal conjugations in Spanish
In this page, verb conjugation is illustrated with the verb hablar ("to talk," "to speak").
The indicative mood has five simple tenses, each of which has a corresponding perfect form. In older classifications, the conditional tenses were considered part of an independent conditional mood. Continuous forms (such as estoy hablando) are usually not considered part of the verbal paradigm, though they often appear in books addressed to English speakers who are learning Spanish. Modern grammatical studies count only the simple forms as tenses, and the other forms as products of tenses and aspects.
Simple tenses (tiempos simples)
The simple tenses are the forms of the verb without the use of a modal or helping verb. The following are the simple tenses and their uses:
The present tense is formed with the endings shown below:
|Pronoun subject||-ar verbs||-er verbs||-ir verbs|
|yo||-o||-o||-o|
|tú||-as||-es||-es|
|vos||-ás||-és||-ís|
|él/ella/usted||-a||-e||-e|
|nosotros/nosotras||-amos||-emos||-imos|
|vosotros/vosotras||-áis||-éis||-ís|
|ellos/ellas/ustedes||-an||-en||-en|
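These regular endings can also be applied mechanically. The short sketch below, added for illustration with hypothetical helper names, builds the six present-indicative forms (yo, tú, él/ella/usted, nosotros, vosotros, ellos/ellas/ustedes) of a regular verb:

```python
# A minimal sketch of regular present-indicative conjugation; the verb choices
# and helper names are illustrative. Irregular and stem-changing verbs are not handled.

PRESENT_INDICATIVE_ENDINGS = {
    "ar": ["o", "as", "a", "amos", "áis", "an"],
    "er": ["o", "es", "e", "emos", "éis", "en"],
    "ir": ["o", "es", "e", "imos", "ís", "en"],
}

def present_indicative(infinitive: str) -> list[str]:
    stem, conjugation = infinitive[:-2], infinitive[-2:]
    return [stem + ending for ending in PRESENT_INDICATIVE_ENDINGS[conjugation]]

print(present_indicative("hablar"))  # hablo, hablas, habla, hablamos, habláis, hablan
print(present_indicative("vivir"))   # vivo, vives, vive, vivimos, vivís, viven
```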
Uses of the present indicative
This tense is used to indicate the following:
- Actual present. This expresses an action that is being done at the very moment.
- María habla con Juan por teléfono. ("María is speaking with Juan on the telephone").
- Habitual present. This expresses an action that is regularly and habitually being done.
- María llega al campo todos los sábados. ("María goes to the countryside every Saturday.")
- Atemporal present. This expresses general truths that are not bounded by time.
- Dos más dos son cuatro. ("Two plus two equals four.")
- Los planetas giran alrededor del sol. ("The planets revolve around the sun.")
- Historical present. This expresses an action that happened in the past but is accepted as historical fact.
- Fernando Magallanes descubre las Filipinas el 15 de marzo de 1521. ("Ferdinand Magellan discovered the Philippines on 15 March 1521.")
- An immediate future. This expresses an action that will be done in the very near future with a high degree of certainty.
- Este junio, viajo a España. ("This June, I am travelling to Spain.")
- Imperative value. In some areas of Spain and Hispanic America, the present can be used (with an exclamatory tone) with an imperative value.
- ¡Ahora te vas y pides disculpas al señor Ruiz! ("Now go and ask pardon from Mr. Ruiz!")
Imperfect (pretérito imperfecto)
The imperfect is formed with the endings shown below:
|Pronoun subject||-ar verbs||-er verbs||-ir verbs|
|yo||-aba||-ía||-ía|
|tú/vos||-abas||-ías||-ías|
|él/ella/usted||-aba||-ía||-ía|
|nosotros/nosotras||-ábamos||-íamos||-íamos|
|vosotros/vosotras||-abais||-íais||-íais|
|ellos/ellas/ustedes||-aban||-ían||-ían|
Uses of the imperfect
This tense is used to express the following:
- A habitual action in the past. This use expresses an action done habitually in an indefinite past. It does not focus on when the action ended.
- Cuando era pequeño, hablaba español con mi abuela. ("When I was young, I used to speak Spanish with my grandmother.")
- An action interrupted by another action. This expresses an action that was in progress when another action took place.
- Tomábamos la cena cuando Eduardo entró. ("We were having dinner when Eduardo came in.")
- General description of the past. This expresses a past setting, as, for example, the background for a narrative.
- Todo estaba tranquilo esa noche. Juan Eduardo miraba el partido de fútbol con su amigo Alejandro. Comían unas porciones de pizza. ("Everything was calm that night. Juan Eduardo was watching the football match with his friend Alejandro. They were eating some slices of pizza.")
Preterite (pretérito indefinido)
The preterite is formed with the endings shown below:
|Pronoun subject||-ar verbs||-er verbs||-ir verbs|
|yo||-é||-í||-í|
|tú/vos||-aste||-iste||-iste|
|él/ella/usted||-ó||-ió||-ió|
|nosotros/nosotras||-amos||-imos||-imos|
|vosotros/vosotras||-asteis||-isteis||-isteis|
|ellos/ellas/ustedes||-aron||-ieron||-ieron|
Uses of the preterite
This tense is used to express the following:
- An action that was done in the past. This use expresses an action that is viewed as a completed event. It is often accompanied by adverbial expressions of time, such as ayer, anteayer, or la semana pasada.
- Ayer, encontré la flor que tú me diste. ("Yesterday, I found the flower that you gave me.")
- An action that interrupts another action. This expresses an event that happened (and was completed) while another action was taking place.
- Tomábamos la cena cuando entró Eduardo. ("We were having dinner when Eduardo came in.")
- A general truth. This expresses a past relationship that is viewed as finished.
- Las Filipinas fueron parte del Imperio Español. ("The Philippines were part of the Spanish Empire.")
Future (futuro simple or futuro imperfecto)
The future tense uses the entire infinitive as a stem. The following endings are attached to it:
|Pronoun subject||(All regular verbs) Infinitive form + :|
|yo||-é|
|tú/vos||-ás|
|él/ella/usted||-á|
|nosotros/nosotras||-emos|
|vosotros/vosotras||-éis|
|ellos/ellas/ustedes||-án|
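As a minimal sketch of this rule, added for illustration with hypothetical helper names, the same six endings are attached to the whole infinitive regardless of conjugation class:

```python
# A minimal sketch: the simple future adds one set of endings to the entire
# infinitive, the same for -ar, -er, and -ir verbs. Irregular future stems
# (e.g., tendr-, har-) are not handled.

FUTURE_ENDINGS = ["é", "ás", "á", "emos", "éis", "án"]

def simple_future(infinitive: str) -> list[str]:
    return [infinitive + ending for ending in FUTURE_ENDINGS]

print(simple_future("hablar"))  # hablaré, hablarás, hablará, hablaremos, hablaréis, hablarán
```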
Uses of the future
This tense is used to express the following:
- A future action. This expresses an action that will be done in the future.
- El año próximo, visitaré Buenos Aires. ("Next year, I shall/will visit Buenos Aires.")
- Uncertainty or Probability. This expresses inference, rather than direct knowledge.
- ¿Quién estará tocando a la puerta? — Será Fabio. ("Who (do you suppose) is knocking at the door? — It must be Fabio." or "Who will that be knocking at the door? — That'll be Fabio." This use of the future tense also occurs in English with "will" and "going to".)
- Command, prohibition, or obligation
- No llevarás a ese hombre a mi casa. ("Do not bring that man to my house." Or, more accurately, "You will not bring that man to my house." This form is also used to assert a command, prohibition, or obligation in English.)
- ¿Te importará encender la televisión? ("Would you mind turning on the television?")
Another common way to represent the future is with a present indicative conjugation of ir followed by a plus an infinitive verb: Voy a viajar a Bolivia en el verano. ("I'm going to travel to Bolivia in the summer.")
Compound tenses (tiempos compuestos)
All the compound tenses are formed with haber followed by the past participle of the main verb. Haber changes its form for person, number, and the like, while the past participle remains invariable, ending with -o regardless of the number or gender of the subject.
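A small sketch of this pattern, added for illustration with hypothetical helper names, shows how a compound tense such as the present perfect is assembled from a conjugated form of haber and the invariable participle:

```python
# A minimal sketch: a compound tense is a conjugated form of haber followed by
# the invariable past participle (ending in -o). Regular participles only;
# spelling changes such as leído are not handled.

HABER_PRESENT = ["he", "has", "ha", "hemos", "habéis", "han"]

def past_participle(infinitive: str) -> str:
    stem, conjugation = infinitive[:-2], infinitive[-2:]
    return stem + ("ado" if conjugation == "ar" else "ido")

def present_perfect(infinitive: str) -> list[str]:
    participle = past_participle(infinitive)
    return [f"{aux} {participle}" for aux in HABER_PRESENT]

print(present_perfect("hablar"))  # he hablado, has hablado, ha hablado, ...
```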
Present perfect (pretérito perfecto)
In the present perfect, the present indicative of haber is used as a modal, and it is followed by the past participle of the main verb. In most of Spanish America, this tense has virtually the same use as the English present perfect.
- E.g.: Te he dicho mi opinión. ("I have told you my opinion.")
In most of Spain the tense has an additional use—to express a past action or event that is contained in an unfinished period of time or that has effects in the present:
- Este mes ha llovido mucho, pero hoy hace buen día. ("It rained a lot this month, but today is a fine day.")
Past perfect or pluperfect (pretérito pluscuamperfecto)
In this tense, the imperfect form of haber is used as a modal, and it is followed by the past participle of the main verb.
- (yo) había + past participle
- (tú) habías + past participle
- (él/ella/ello/usted) había + past participle
- (nosotros/nosotras) habíamos + past participle
- (vosotros/vosotras) habíais + past participle
- (ellos/ellas/ustedes) habían + past participle
This form is used to express the following:
- A past action that occurred prior to another past action.
- E.g.: Yo había esperado tres horas cuando él llegó. ("I had been waiting for three hours when he arrived.")
Past anterior (pretérito anterior)
This tense combines the preterite form of haber with the past participle of the main verb. It is very rare in spoken Spanish, but it is sometimes used in formal written language, where it is almost entirely limited to subordinate (temporal, adverbial) clauses. Thus, it is usually introduced by temporal conjunctions such as cuando, apenas, or en cuanto. It is used to express an action that ended immediately before another past action.
- (yo) hube + past participle
- (tú) hubiste + past participle
- (él/ella/ello/usted) hubo + past participle
- (nosotros/nosotras) hubimos + past participle
- (vosotros/vosotras) hubisteis + past participle
- (ellos/ellas/ustedes) hubieron + past participle
- E.g.: Cuando hubieron llegado todos, empezó la ceremonia. ("When everyone had arrived, the ceremony began.")
- E.g.: Apenas María hubo terminado la canción, su padre entró. ("As soon as Maria had finished the song, her father came in.")
This tense is often replaced by either the preterite or the pluperfect, with the same meaning.
- E.g.: Apenas María terminó la canción, su padre entró.
- E.g.: Apenas María había terminado la canción, su padre entró.
Future perfect (futuro compuesto)
The future perfect is formed with the future indicative form of haber followed by the past participle of the main verb.
- (yo) habré + past participle
- (tú) habrás + past participle
- (él/ella/ello/usted) habrá + past participle
- (nosotros/nosotras) habremos + past participle
- (vosotros/vosotras) habréis + past participle
- (ellos/ellas/ustedes) habrán + past participle
- e.g.: Habré hablado. ("I shall/will have spoken.")
This tense is used to indicate a future action that will be finished right before another future action.
- e.g.: Cuando yo llegue a la fiesta, ya se habrán marchado todos. ("When I arrive at the party, everybody will have left already.")
Simple conditional (condicional simple or pospretérito)
As in the case of the future tense, the conditional uses the entire infinitive as a stem. The following endings, the same for all regular verbs, are attached to it: -ía, -ías, -ía, -íamos, -íais, -ían.
Uses of the conditional
This tense is used to express the following:
- Courtesy. Using this mood softens a request, making it more polite.
- E.g.: Señor, ¿podría darme una copa de vino? ("Sir, could you give me a glass of wine?")
- Polite expression of a desire (using querer).
- E.g.: Querría ver la película esta semana. ("I would like to see the film this week.")
- In a then clause whose realization depends on a hypothetical if clause.
- Si yo fuera rico, viajaría a Sudamérica. ("If I were rich, I would travel to South America.")
- Speculation about past events (the speaker's knowledge is indirect, unconfirmed, or approximating).
- E.g.: —¿Cuantas personas asistieron a la inauguración del Presidente? — No lo sé; habría unas 5.000. ("How many people attended the President's inauguration? — I do not know; there must have been about 5,000.")
- A future action in relation to the past. This expresses future action that was imagined in the past.
- E.g.: Cuando era pequeño, pensaba que me gustaría ser médico. ("When I was young, I thought that I would like to be a doctor.")
- A suggestion.
- E.g.: Yo que tú, lo olvidaría completamente. ("If I were you, I would forget him completely.")
Conditional perfect or compound conditional (condicional compuesto or antepospretérito)
This form refers to a hypothetical past action.
- E.g.: Yo habría hablado si me hubieran/hubiesen dado la oportunidad ("I would have spoken if they had given me the opportunity.")
The imperative mood has three specific forms, corresponding to the pronouns tú, vos, and vosotros (tú and vos are used in different regional dialects; vosotros only in Spain). These forms are used only in positive expressions, not negative ones. The subjunctive supplements the imperative in all other cases (negative expressions and the conjugations corresponding to the pronouns nosotros, él/ella, usted, ellos/ellas, and ustedes).
The imperative can also be expressed in three other ways:
- Using the present or future indicative to form an emphatic command: Comerás la verdura ("You will eat the vegetables").
- The first person plural imperative ("Let's...") can also be expressed by Vamos a + infinitive: ¡Vamos a comer!
- Indirect commands with que: Que lo llame el secretario ("Have the secretary call him").
Affirmative imperative (imperativo positivo)
The positive form of the imperative mood in regular verbs is formed by removing the infinitive ending and adding the following:
|Pronoun subject||-ar verbs||-er verbs||-ir verbs|
|tú||-a||-e||-e|
|vos||-á||-é||-í|
|usted||-e||-a||-a|
|nosotros/nosotras||-emos||-amos||-amos|
|vosotros/vosotras||-ad||-ed||-id|
|ustedes||-en||-an||-an|
The singular imperative tú coincides with the third-person singular of the indicative for all but a few irregular verbs. The plural vosotros is always the same as the infinitive, but with a final -d instead of an -r in the formal, written form; the informal spoken form is the same as the infinitive. The singular vos drops the -r of the infinitive, requiring a written accent to indicate the stress.
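A short sketch of these two rules, added for illustration with hypothetical helper names; irregular verbs and the vos form are not handled:

```python
# A minimal sketch of the affirmative imperative rules stated above: the tú
# form matches the third-person singular of the present indicative, and the
# vosotros form is the infinitive with the final -r replaced by -d.

THIRD_PERSON_SINGULAR_ENDING = {"ar": "a", "er": "e", "ir": "e"}

def imperative_tu(infinitive: str) -> str:
    stem, conjugation = infinitive[:-2], infinitive[-2:]
    return stem + THIRD_PERSON_SINGULAR_ENDING[conjugation]

def imperative_vosotros(infinitive: str) -> str:
    return infinitive[:-1] + "d"

print(imperative_tu("comer"), imperative_vosotros("comer"))    # come comed
print(imperative_tu("hablar"), imperative_vosotros("hablar"))  # habla hablad
```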
Negative imperative (imperativo negativo)
For the negative imperative, the adverb no is placed before the verb, which takes its corresponding present subjunctive form (e.g., no hables, no comas, no vivas).
Note that in the imperative, the affirmative second-person forms differ from their negative counterparts; this is the only case of a difference in conjugation between affirmative and negative in Spanish.
- To conjugate something that is positive in the imperative mood for the tú form (which is used most often), conjugate for the tú form and drop the s.
- To conjugate something that is negative in the imperative mood for the tú form (which also is used most often), conjugate in the yo form, drop the o, add the opposite tú ending (if it is an -ar verb add es; for an -er or -ir verb add as), and then put the word no in front.
Positive command forms of the verb comer
|tú||¡Come!||"Eat!"||General form of the informal singular|
|vos||¡Comé!||"Eat!"||Used in the Ríoplatense Dialect and much of Central America. Formerly, vos and its verb forms were not accepted by the Real Academia Española, but the latest online dictionary of the RAE shows, for example, the vos imperative comé on a par with the tú imperative come.|
|nosotros/nosotras||¡Comamos!||"Let us eat!"||Used as an order or as an invitation.|
|vosotros/vosotras||¡Comed!||"Eat!"||Normative plural for informal address, though its use is becoming rare|
|vosotros/vosotras||¡Comer!||"Eat!"||Common plural used in Spain for informal address, though not admitted by the Real Academia Española|
|ustedes||¡Coman!||"Eat!"||General plural formal command; used also as familiar plural command in Spanish America|
Negative command forms of the verb comer
|tú||¡No comas!||"Do not eat!"||General form of the informal singular|
|vos||¡No comas!||"Do not eat!"||Used in the voseo areas; the only form accepted by the Real Academia Española|
|vos||¡No comás!||"Do not eat!"||Used by the general voseante population; not accepted by the Real Academia Española|
|usted||¡No coma!||"Do not eat!"||Formal singular|
|nosotros/nosotras||¡No comamos!||"Let's not eat!"||Used as a suggestion|
|vosotros/vosotras||¡No comáis!||"Do not eat!"||Informal plural in Spain|
|ustedes||¡No coman!||"Do not eat!"||General negative plural formal command; used also as familiar plural command in Spanish America|
The pronominal verb comerse
|tú||¡Cómete ...!||"Eat!"||Used emphatically|
|vos||¡Comete ...!||"Eat!"||Used normatively in the Ríoplatense Dialect; used informally in Central America|
|usted||¡Cómase ...!||"Eat!"||Formal singular|
|nosotros/nosotras||¡Comámonos ...!||"Let's eat!"||the original -s ending is dropped before the pronoun nos is affixed to prevent cacophony or dissonant sound|
|vosotros/vosotras||¡Comeos ...!||"Eat!"||The original -d ending is dropped before the pronoun os is affixed to prevent cacophony or dissonant sound|
|vosotros/vosotras||¡Comeros ...!||"Eat!"||Colloquial plural used in Spain for informal address, though not admitted by the Real Academia Española|
|ustedes||¡Cómanse ...!||"Eat!"||General plural formal command; used also as familiar plural command in Spanish America|
Note that the pronouns precede the verb in the negative commands as the mode is subjunctive, not imperative: no te comas/comás; no se coma/coman; no nos comamos; no os comáis.
The verb ir
|Subject Pronoun||Imperative Form||Gloss||Remarks|
|tú||¡Ve!||"Go!"||General form of the singular imperative|
|vos||¡Andá!||"Go!"||Used because the general norm in the voseo imperative is to drop the final -d and add an accent; however, if this were done, the form would be í|
|usted||¡Vaya!||"Go!"||Same as the subjunctive form|
|nosotros/nosotras||¡Vamos!||"Let's go!"||More common form|
|nosotros/nosotras||¡Vayamos!||"Let's go!"||Prescribed form, but rarely used|
|ustedes||¡Vayan!||"Go!"||Formal plural; also familiar in Spanish America|
The pronominal verb irse is irregular in the second person plural normative form, because it does not drop the -d or the -r:
- ¡idos! (vosotros): "Go away!" (plural for informal address, recommended by the Real Academia Española but extremely uncommon)
- ¡iros! (vosotros): "Go away!" (common in Spain, but not admitted by the Real Academia Española)
The subjunctive mood has a separate conjugation table with fewer tenses. It is used, almost exclusively in subordinate clauses, to express the speaker's opinion or judgment, such as doubts, possibilities, emotions, and events that may or may not occur.
Simple tenses (tiempos simples)
Present subjunctive (presente de subjuntivo)
The present subjunctive of regular verbs is formed with the endings shown below:
|Pronoun subject||-ar verbs||-er verbs||-ir verbs||Remarks|
|yo||-e||-a||-a|| |
|tú/vos||-es||-as||-as||For vos, the Spanish Royal Academy prescribes Rioplatense Spanish: ames, comas and partas|
|vos||-és||-ás||-ás||In Central America, amés, comás, and partás are the preferred present subjunctive forms of vos, but they are not accepted by the Spanish Royal Academy|
|él/ella/usted||-e||-a||-a|| |
|nosotros/nosotras||-emos||-amos||-amos|| |
|vosotros/vosotras||-éis||-áis||-áis|| |
|ellos/ellas/ustedes||-en||-an||-an|| |
Imperfect subjunctive (imperfecto de subjuntivo)
The imperfect subjunctive can be formed with either of two sets of endings: the "-ra endings" or the "-se endings", as shown below. In Spanish America, the -ra forms are virtually the only forms used, to the exclusion of the -se forms. In Spain, both sets of forms are used, but the -ra forms predominate there also.
Imperfect subjunctive, -ra forms
|Pronoun subject||-ar verbs||-er verbs||-ir verbs|
|yo||-ara||-iera||-iera|
|tú/vos||-aras||-ieras||-ieras|
|él/ella/usted||-ara||-iera||-iera|
|nosotros/nosotras||-áramos||-iéramos||-iéramos|
|vosotros/vosotras||-arais||-ierais||-ierais|
|ellos/ellas/ustedes||-aran||-ieran||-ieran|
Imperfect subjunctive, -se forms
|Pronoun subject||-ar verbs||-er verbs||-ir verbs|
|yo||-ase||-iese||-iese|
|tú/vos||-ases||-ieses||-ieses|
|él/ella/usted||-ase||-iese||-iese|
|nosotros/nosotras||-ásemos||-iésemos||-iésemos|
|vosotros/vosotras||-aseis||-ieseis||-ieseis|
|ellos/ellas/ustedes||-asen||-iesen||-iesen|
Future subjunctive (futuro (simple) de subjuntivo)
This tense is no longer used in the modern language, except in legal language and some fixed expressions. The following endings are attached to the preterite stem:
|Pronoun subject||-ar verbs||-er verbs||-ir verbs|
|yo||-are||-iere||-iere|
|tú/vos||-ares||-ieres||-ieres|
|él/ella/usted||-are||-iere||-iere|
|nosotros/nosotras||-áremos||-iéremos||-iéremos|
|vosotros/vosotras||-areis||-iereis||-iereis|
|ellos/ellas/ustedes||-aren||-ieren||-ieren|
- E.g.: Cuando hablaren... ("Whenever they might speak...")
Compound tenses (tiempos compuestos)
In the subjunctive mood, the subjunctive forms of the verb haber are used with the past participle of the main verb.
Present perfect subjunctive (pretérito perfecto de subjuntivo)
- E.g.: Cuando yo haya hablado... ("When I have spoken...")
Pluperfect subjunctive (pluscuamperfecto de subjuntivo)
- E.g.: Si yo hubiera hablado... or Si yo hubiese hablado... ("If I had spoken...")
Future perfect subjunctive (futuro compuesto de subjuntivo)
Like the simple future subjunctive, this tense is no longer used in the modern language.
- E.g.: Cuando yo hubiere hablado... ("When I shall have spoken...")
- The present subjunctive is formed from the stem of the first person present indicative of a verb. Therefore, for an irregular verb like salir with the first person salgo, the present subjunctive would be salga, not sala.
- The choice between present subjunctive and imperfect subjunctive is determined by the tense of the main verb of the sentence.
- The future subjunctive is rarely used in modern Spanish and mostly appears in old texts, legal documents, and certain fixed expressions, such as venga lo que viniere ("come what may").
In Spanish grammars, continuous tenses are not formally recognized as in English. Although the imperfect expresses a continuity compared to the perfect (e.g., te esperaba ["I was waiting for you"]), the continuity of an action is usually expressed by a verbal periphrasis (perífrasis verbal), as in estoy leyendo ("I am reading"). However, one can also say sigo leyendo ("I am still reading"), voy leyendo ("I am slowly but surely reading"), ando leyendo ("I am going around reading"), and others.
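As an illustrative sketch of the estar + gerund periphrasis mentioned above (regular gerunds only, with hypothetical helper names):

```python
# A minimal sketch: the continuous periphrasis combines a conjugated form of
# estar with the gerund of the main verb. Regular gerunds only; spelling
# changes such as leyendo are not handled.

ESTAR_PRESENT = ["estoy", "estás", "está", "estamos", "estáis", "están"]

def gerund(infinitive: str) -> str:
    stem, conjugation = infinitive[:-2], infinitive[-2:]
    return stem + ("ando" if conjugation == "ar" else "iendo")

def present_continuous(infinitive: str) -> list[str]:
    return [f"{aux} {gerund(infinitive)}" for aux in ESTAR_PRESENT]

print(present_continuous("hablar"))  # estoy hablando, estás hablando, ...
```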
A considerable number of verbs change the vowel e in the stem to the diphthong ie, and the vowel o to ue. This happens when the stem vowel receives the stress. These verbs are referred to as stem-changing verbs. Examples include pensar ("to think"; e.g., pienso ["I think"]), sentarse ("to sit"; e.g., me siento ["I sit"]), empezar ("to begin"; e.g., empiezo ["I begin"]), volver ("to return"; e.g., vuelvo ["I return"]), and acostarse ("to go to bed"; e.g., me acuesto ["I go to bed"]).
Virtually all verbs of the third conjugation (-ir), if they have -e- or -o- in their stem, undergo a vowel-raising change whereby e changes to i and o changes to u, in some of their forms (for details, see Spanish irregular verbs). Examples include pedir ("to ask for"; e.g., pide ["he/she asks for"]), competir ("to compete"; e.g., compite ["he/she competes"]), and derretirse ("to melt"; e.g., se derrite ["it melts"]).
The so-called I-go verbs add a medial -g- in the first-person singular present tense (making the Yo ["I"] form end in -go; e.g., tener ["to have"] becomes tengo ["I have"]; venir ["to come"] becomes vengo ["I come"]). These verbs are often irregular in other forms as well.
Use of verbs
Contrasting simple and continuous forms
There is no strict distinction between simple and continuous forms in Spanish as there is in English. In English, "I do" is one thing (a habit) and "I am doing" is another (current activity). In Spanish, hago can be either of the two, and estoy haciendo stresses the latter. Although not as strict as English, Spanish is stricter than French or German, which have no systematic distinction between the two concepts at all. This optionally continuous meaning can be underlined by using the continuous form, which is a feature of the present and the imperfect. The preterite never has this meaning, even in the continuous form, and the future has it only when it is in the continuous form.
- ¿Qué haces? could be either "What do you do?" or "What are you doing?"
- ¿Qué estás haciendo? is only "What are you doing?"
- ¿Qué hacías? could be either "What did you use to do?" or "What were you doing?"
- ¿Qué estabas haciendo? is only "What were you doing?"
- ¿Qué hiciste? is "What did you do?"
- ¿Qué estuviste haciendo? is "What were you doing (all of that time)?"
Note that since the preterite by nature refers to an event seen as having a beginning and an end, and not as a context, the use of the continuous form of the verb only adds a feeling for the length of time spent on the action. The future has two main forms in Spanish, the imperfect (compound) future and the simple one. The difference between them is one of aspect. The compound future is formed with the conjugated ir (which means "to go," but may also mean "will" in this case) plus the infinitive and, sometimes, with a present progressive verb added as well.
- ¿Qué vas a hacer? is "What are you going to do?" (implies that it will be done again, as in a routine)
- ¿Qué vas a estar haciendo? is "What are you going to be doing?" (does not necessarily imply that it will be done)
- ¿Qué harás? is "What will you do?" (will be completed immediately, or done just once)
- ¿Qué estarás haciendo? is "What will you be doing?"
Contrasting the present and the future
Both the present and the future can express future actions, the latter more explicitly so. There are also expressions that convey the future.
- Mi padre llega mañana = "My father arrives tomorrow" (out of context, llega could mean both "he is arriving now" or "he usually arrives")
- Mi padre estará llegando mañana = "My father will be arriving tomorrow"
- Mi padre va a llegar mañana = "My father is going to arrive tomorrow" (future with ir)
- Mi padre llegará mañana = "My father will arrive tomorrow" (future tense)
- Mi padre está a punto de llegar = "My father is about to arrive" (immediate future with estar a punto)
The future tense can also simply express guesses about the present and immediate future:
- ¿Qué hora es? Serán las tres = "What time is it?" "It is about three (but I have not checked)"
- ¿Quién llama a la puerta? Será José = "Who is at the door? It must be José"
The same is applied to imperfect and conditional:
- ¿Qué hora era? Serían las tres = "What time was it?" "It was about three (but I had not checked)"
- ¿Quién llamaba a la puerta? Sería José = "Who was at the door? It must have been José"
Studies have shown that Spanish-speaking children learn this use of the future tense before they learn to use it to express future events (the English future with "will" can also sometimes be used with this meaning). The other constructions detailed above are used instead. Indeed, in some areas, such as Argentina and Uruguay, speakers hardly use the future tense to refer to the future.
The future tense of the subjunctive mood is also obsolete in practice. As of today, it is only found in legal documents and the like. In other contexts, the present subjunctive form always replaces it.
Contrasting the preterite and the imperfect
Fundamental meanings of the preterite and the imperfect
Spanish has two fundamental past tenses, the preterite and the imperfect. Strictly speaking, the difference between them is not one of tense but of aspect, in a manner that is similar to that of the Slavic languages. However, within Spanish grammar, they are customarily called tenses.
The difference between the preterite and the imperfect (and in certain cases, the perfect) is often hard to grasp for English speakers. English has just one past-tense form, which can have aspect added to it by auxiliary verbs, but not in ways that reliably correspond to what occurs in Spanish. The distinction between them does, however, correspond rather well to the distinctions in other Romance languages, such as between the French imparfait and passé simple / passé composé or between the Italian imperfetto and passato remoto / passato prossimo.
The imperfect fundamentally presents an action or state as being a context and is thus essentially descriptive. It does not present actions or states as having ends and often does not present their beginnings either. Like the Slavic imperfective past, it tends to show actions that used to be done at some point, as in a routine. In this case, one would say Yo jugaba ("I used to play"), Yo leía ("I used to read"), or Yo escribía ("I used to write").
The preterite (as well as the perfect, when applicable) fundamentally presents an action or state as being an event, and is thus essentially narrative. It presents actions or states as having beginnings and ends. This also bears resemblance to the Slavic perfective past, as these actions are usually viewed as done in one stroke. The corresponding preterite forms would be Yo jugué ("I played"), Yo leí ("I read") or Yo escribí ("I wrote").
As stated above, deciding whether to use the preterite or the imperfect can present some difficulty for English speakers. But there are certain topics, words, and key phrases that can help one decide if the verb should be conjugated in the preterite or the imperfect. These expressions co-occur significantly more often with one or the other of the two tenses, corresponding to a completed action (preterite) or a repetitive action or a continuous action or state (imperfect) in the past.
Key words and phrases that tend to co-occur with the preterite tense include ayer ("yesterday"), anoche ("last night"), la semana pasada ("last week"), el año pasado ("last year"), de repente ("suddenly"), and una vez ("once").
E.g.: Esta mañana comí huevos y pan tostado. ("This morning I ate eggs and toast.")
Key words and phrases that tend to co-occur with the imperfect tense include siempre ("always"), todos los días ("every day"), cada año ("each year"), a menudo ("often"), mientras ("while"), and de niño ("as a child").
E.g.: Cada año mi familia iba a Puerto Rico. ("Each year my family went to Puerto Rico.")
Comparison with English usage
The English simple past can express either of these concepts. However, there are devices that allow us to be more specific. Consider, for example, the phrase "the sun shone" in the following contexts:
- "The sun shone through his window; John knew that it was going to be a fine day."
- "The sun was shining through his window; John knew that it was going to be a fine day."
- "The sun shone through his window back in those days."
- "The sun used to shine through his window back in those days."
- "The sun shone through his window the moment that John pulled back the curtain."
In the first two, it is clear that the shining refers to the background to the events that are about to unfold in the story. It is talking about what was happening. We have a choice between making this explicit with the past continuous, as in (2), or using the simple past and allowing the context to make it clear what we mean, as in (1). In Spanish, these would be in the imperfect, optionally in the imperfect continuous.
In (3) and (4), it is clear that the shining refers to a regular, general, habitual event. It is talking about what used to happen. We have a choice between making this explicit with the expression "used to," as in (4), or using the simple past and allowing the context to make it clear what we mean, as in (3). In Spanish, these would be in the imperfect, optionally with the auxiliary verb soler.
In (5), only the simple past is possible. It is talking about a single event presented as occurring at a specific point in time (the moment John pulled back the curtain). The action starts and ends with this sentence. In Spanish, this would be in the preterite (or alternatively in the perfect, if the event has only just happened).
- Cuando tenía quince años, me atropelló un coche = "When I was fifteen years old, a car ran over me"
The imperfect is used for "was" in Spanish because it forms the background to the specific event expressed by "was run over", which is in the preterite.
- Mientras cruzaba / estaba cruzando la calle, me atropelló un coche = "While I crossed / was crossing the road, a car ran over me"
In both languages, the continuous form for action in progress is optional, but Spanish requires the verb in either case to be in the imperfect, because it is the background to the specific event expressed by "was run over", in the preterite.
- Siempre tenía cuidado cuando cruzaba la calle = "I was always / always used to be careful when I crossed / used to cross the road"
The imperfect is used for both verbs since they refer to habits in the past. Either verb could optionally use the expression "used to" in English.
- Me bañé = "I took a bath"
The preterite is used if this refers to a single action or event—that is, the person took a bath last night.
- Me bañaba = "I took baths"
The imperfect is used if this refers to any sort of habitual action—that is, the person took a bath every morning. Optionally, solía bañarme can specifically express "I used to take baths".
- Tuvo una hija = "She had a daughter"
The preterite is used if this refers to an event—here, a birth.
- Tenía una hija = "She had a/one daughter"
The imperfect is used if this refers to the number of children by a certain point, as in "She had one daughter when I met her ten years ago; she may have more now". A description.
Note that when describing the life of someone who is now dead, the distinction between the two tenses blurs. One might describe the person's life saying tenía una hija, but tuvo una hija is very common because the person's whole life is viewed as a whole, with a beginning and an end. The same goes for vivía/vivió en... "he lived in...".
Perhaps the verb that English speakers find most difficult to translate properly is "to be" in the past tense ("was"). Apart from the choice between the verbs ser and estar (see below), it is often very hard for English speakers to distinguish between contextual and narrative uses.
- Alguien cogió mis CD. ¿Quién fue? = "Someone took my CDs. Who was it?"
Here the preterite is used because it is an event. A good clue is the tense of cogió.
- Había una persona que miraba los CD. ¿Quién era? = "There was a person who was looking at the CDs. Who was it?"
Here the imperfect is used because it is a description (the start and end of the action is not presented; it is something that was in progress at a certain time). Again, a good clue is the tense of the other verbs.
Contrasting the preterite and the perfect
The preterite and the perfect are distinguished in much the same way as the equivalent English tenses. Generally, whenever the present perfect ("I have done") is used in English, the perfect is also used in Spanish. In addition, there are cases in which English uses a simple past ("I did") but Spanish requires a perfect. In the remaining cases, both languages use a simple past.
As in English, the perfect expresses past actions that have some link to the present. The preterite expresses past actions as being past, complete and done with. In both languages, there are dialectal variations.
Frame of reference includes the present: perfect
If it is implicitly or explicitly communicated that the frame of reference for the event includes the present and the event or events may therefore continue occurring, then both languages strongly prefer the perfect.
- With "this" references that include the present
- Este año me he ido de vacaciones dos veces = "This year I have gone on vacation twice"
- Esta semana ha sido muy interesante = "This week has been very interesting"
- With other references to recent periods including the present
- No he hecho mucho hoy = "I have not done much today"
- No ha pasado nada hasta la fecha = "Nothing has happened to date"
- Hasta ahora no se me ha ocurrido = "Until now it has not occurred to me"
- With reference to someone's life experience (his/her life not being over)
- ¿Alguna vez has estado en África? = "Have you ever been in Africa?"
- Mi vida no ha sido muy interesante = "My life has not been very interesting"
- Jamás he robado nada = "Never have I stolen anything"
Frame of reference superficially includes the present: perfect
Sometimes we say "today", "this year", and the like, but we mean to express these periods as finished. This requires the simple past in English. For example, in December we might speak of the year in the simple past because we are assuming that all of that year's important events have occurred and we can talk as though it were over. Other expressions—such as "this weekend," if today is Monday—refer to a period which is definitely over; the word "this" just distinguishes it from other weekends. There is a tendency in Spanish to use the perfect even for this type of time reference, even though the preterite is possible and seems more logical.
- Este fin de semana hemos ido al zoo = "This weekend we went to the zoo"
- Hoy he tenido una jornada muy aburrida = "Today I had a boring day's work"
Consequences continue into the present: perfect
As in English, the perfect is used when the consequences of an event are being referred to.
- Alguien ha roto esta ventana = "Someone has broken this window" (the window is currently in a broken state)
- Nadie me ha dicho qué pasó aquel día = "Nobody has told me what happened that day" (therefore, I still do not know)
These same sentences in the preterite would purely refer to the past actions, without any implication that they have repercussions now.
In English, this type of perfect is not possible if a precise time frame is added or even implied. One cannot say "I have been born in 1978," because the date requires "I was born," despite the fact that there is arguably a present consequence in the fact that the person is still alive. Spanish sporadically uses the perfect in these cases.
- He nacido en 1978 (usually Nací en 1978) = "I was born in 1978"
- Me he criado en Madrid (usually Me crié en Madrid) = "I grew up in Madrid"
The event itself continues into the present: perfect or present
If the event itself has been happening recently and is also happening right now or is expected to continue happening soon, then the preterite is impossible in both languages. English requires the perfect, or better yet the perfect continuous. Spanish requires the perfect, or better yet the present simple:
- Últimamente ha llovido mucho / Últimamente llueve mucho = "It has rained / It has been raining a lot recently"
This is the only use of the perfect that is common in colloquial speech across Latin America.
In the Canary Islands and across Latin America, there is a colloquial tendency to replace most uses of the perfect with the preterite. This use varies according to region, register, and education.
- ¿Y vos alguna vez estuviste allá? = ¿Y tú alguna vez has estado allí? = "And have you ever been there?"
The one use of the perfect that does seem to be normal in Latin America is the perfect for actions that continue into the present (not just the time frame, but the action itself). Therefore, "I have read a lot in my life" and "I read a lot this morning" would both be expressed with leí instead of he leído, but "I have been reading" is expressed by he leído.
A less standard use of the perfect is found in Ecuador and Colombia. It is used with present or occasionally even future meaning. For example, Shakira Mebarak in her song "Ciega, Sordomuda" sings,
- Bruta, ciega, sordomuda, / torpe, traste, testaruda; / es todo lo que he sido = "Clumsy, blind, dumb, / blundering, useless, pig-headed; / that is all that I have been" (used here with present force: "that is all I am")
Contrasting the subjunctive and the imperative
The subjunctive mood expresses wishes and hypothetical events. It is often employed together with a conditional verb:
- Desearía que estuvieses aquí. = "I wish that you were here."
- Me alegraría mucho si volvieras mañana. = "I would be very glad if you came back tomorrow."
The imperative mood shows commands given to the hearer (the second person). There is no imperative form in the third person, so the subjunctive is used. The expression takes the form of a command or wish directed at the hearer, but referring to the third person. The difference between a command and a wish is subtle, mostly conveyed by the absence of a wishing verb:
- Que venga el gerente. = "Let the manager come.", "Have the manager come."
- Que se cierren las puertas. = "Let the doors be closed.", "Have the doors closed."
With a verb that expresses wishing, the above sentences become plain subjunctive instead of direct commands:
- Deseo que venga el gerente. = "I wish for the manager to come."
- Quiero que se cierren las puertas. = "I want the doors (to be) closed."
Contrasting the present and the future subjunctive
The future tense of the subjunctive is found mostly in old literature or legalese, and in conversation it is sometimes confused with the past subjunctive (often owing to the similarity of its characteristic suffix, -ere, to the past-subjunctive suffixes -era and -ese). Many Spanish speakers live their lives without ever knowing about or realizing the existence of the future subjunctive.
It survives in the common expression sea lo que fuere and the proverb allá donde fueres, haz lo que vieres (allá donde can be replaced by a la tierra donde or si a Roma).
The proverb illustrates how it used to be used:
- With si referring to the future, as in si a Roma fueres.... This is now expressed with the present indicative (si vas a Roma...) or, for a more hypothetical condition, with the imperfect subjunctive (si fueras a Roma...).
- With cuando, donde, and the like, referring to the future, as in allá donde fueres.... This is now expressed with the present subjunctive: vayas adonde vayas...
Contrasting the preterite and the past anterior
The past anterior is rare nowadays and restricted to formal use. It expresses a very fine nuance: the fact that an action occurs just after another (had) occurred, with words such as cuando, nada más, and en cuanto ("when", "no sooner", "as soon as"). In English, we are forced to use either the simple past or the past perfect; Spanish has something specific between the two.
- En cuanto el delincuente hubo salido del cuarto, la víctima se echó a llorar = "As soon as the criminal (had) left the room, the victim burst into tears"
The use of hubo salido shows that the second action happened immediately after the first. Salió might imply that it happened at the same time, and había salido might imply it happened some time after.
However, colloquial Spanish has lost this tense and this nuance, and the preterite must be used instead in all but the most formal of writing.
Contrasting ser and estar
The differences between ser and estar are considered one of the most difficult concepts for non-native speakers. Both ser and estar translate into English as "to be", but they have different uses, depending on whether they are used with nouns, with adjectives, with past participles (more precisely, passive participles), or to express location.
Only ser is used to equate one noun phrase with another, and thus it is the verb for expressing a person's occupation ("Mi hermano es estudiante"/"My brother is a student"). For the same reason, ser is used for telling the date or the time, regardless of whether the subject is explicit ("Hoy es miércoles"/"Today is Wednesday") or merely implied ("Son las ocho"/"It's eight o'clock").
When these verbs are used with adjectives, the difference between them may be generalized by saying that ser expresses nature and estar expresses state. Frequently—although not always—adjectives used with ser express a permanent quality, while their use with estar expresses a temporary situation. There are exceptions to the generalization; for example, the sentence "Tu mamá está loca" ("Your mother is crazy") can express either a temporary or a permanent state of craziness.
Ser generally focuses on the essence of the subject, and specifically on qualities that include:
- Physical and personality traits
- Origin and nationality
- Occupation
- The material something is made of
Estar generally focuses on the condition of the subject, and specifically on qualities that include:
- Physical condition
- Feelings, emotions, and states of mind
In English, the sentence "The boy is boring" uses a different adjective than "The boy is bored". In Spanish, the difference is made by the choice of ser or estar.
- El chico es aburrido uses ser to express a permanent trait ("The boy is boring").
- El chico está aburrido uses estar to express a temporary state of mind ("The boy is bored").
The same strategy is used with many adjectives to express either an inherent trait (ser) or a transitory state or condition (estar). For example:
- "María es guapa" uses ser to express an essential trait, meaning "María is a good-looking person."
- "María está guapa" uses estar to express a momentary impression: "María looks beautiful" (a comment on her present appearance, without any implication about her inherent characteristics).
When ser is used with the past participle of a verb, it forms the "true" passive voice, expressing an event ("El libro fue escrito en 2005"/"The book was written in 2005"). When the past participle appears with estar, it forms a "passive of result" or "stative passive" ("El libro ya está escrito"/"The book is already written"—see Spanish conjugation).
Location of a person or thing is expressed with estar—regardless of whether temporary or permanent ("El hotel está en la esquina"/"The hotel is on the corner"). Location of an event is expressed with ser ("La reunión es en el hotel"/"The meeting is [takes place] in the hotel").
Contrasting haber and tener
The verbs haber and tener are easily distinguished, but they may pose a problem for learners of Spanish who speak other Romance languages (where the cognates of haber and tener are used differently), for English speakers (where "have" is used as a verb and as an auxiliary), and others.
Haber derives from the Latin habeō, habēre, habuī, habitum; with the basic meaning of "to have".
Tener derives from the Latin teneō, tenēre, tenuī, tentum; with the basic meaning of "to hold", "to keep".
As habeo began to degrade and become reduced to ambiguous monosyllables in the present tense, the Iberian Romance languages (Spanish, Galician-Portuguese, and Catalan) restricted its use and started to use teneo as the ordinary verb expressing having and possession. French instead reinforced habeo with obligatory subject pronouns.
Haber: expressing existence
Haber is used as an impersonal verb to show existence of an object or objects, which is generally expressed as an indefinite noun phrase. In English, this corresponds to the use of "there" + the corresponding inflected form of "to be". When used in this sense, haber has a special present-tense form: hay instead of ha. The y is a fossilized form of the mediaeval Castilian pronoun y or i, meaning "there", which is cognate with French y and Catalan hi, and comes from the Latin ibi.
Unlike in English, the thing that "is there" is not the subject of the sentence, and therefore there is no agreement between it and the verb. This echoes the constructions seen in languages such as French (il y a = "it there has"), Catalan (hi ha = "[it] there has"), and even Chinese (有 yǒu = "[it] has").
- Hay un gato en el jardín. = "There is a cat in the garden."
- En el baúl hay fotografías viejas. = "In the trunk there are old photographs."
It is possible, in cases of certain emphasis, to put the verb after the object:
- ¿Revistas hay? = "Are there any magazines?"
There is a tendency to make haber agree with what follows, as though it were the subject, particularly in tenses other than the present indicative. There is heavier stigma on inventing plural forms for hay, but hain, han, and suchlike are sometimes encountered in non-standard speech. The form habemos is common (meaning "there are, including me"); it very rarely replaces hemos to form the present perfect tense in modern language, and in certain contexts it is even acceptable in formal or literary language.
- Había un hombre en la casa. = "There was a man in the house."
- Había unos hombres en la casa. = "There were some men in the house." (standard)
- Habían unos hombres en la casa. = "There were some men in the house." (non-standard)
- En esta casa habemos cinco personas. = "In this house there are five of us." (non-standard).
- Nos las habemos con un gran jugador. = "We are confronting a great player." (standard)
Haber as an existence verb is never used in any person other than the third. To express the existence of a first or second person, the verb estar ("to be [located/present]") or existir ("to exist") is used, and there is subject–verb agreement.
Haber: impersonal obligation
The phrase haber que (in the third person singular and followed by a subordinated construction with the verb in the infinitive) carries the meaning of necessity or obligation without specifying an agent. It is translatable as "it is necessary", but a paraphrase is generally preferable in translation. Note that the present-tense form is hay.
- Hay que abrir esa puerta. = "That door needs opening", "We have to open that door".
- Habrá que abrir esa puerta. = "That door will need opening", "We are going to have to open that door".
- Aunque haya que abrir esa puerta. = "Even if that door needs to be opened".
This construction is comparable to French il faut and Catalan cal, although, unlike il faut que, it cannot be followed by a personal clause in the subjunctive: hay que always goes with the infinitive.
Haber: personal obligation
A separate construction is haber de + infinitive. It is not impersonal. It tends to express a certain nuance of obligation and a certain nuance of future tense, much like the expression "to be to". It is also often used similarly to tener que and deber ("must", "ought to"). Note that the third-person singular of the present tense is ha.
- Mañana he de dar una charla ante la Universidad = "Tomorrow I am to give a speech before the University".
- Ha de comer más verduras = "She/he ought to eat more vegetables".
Haber: forming the perfect
Haber is also used as an auxiliary to form the perfect, as shown elsewhere. Spanish uses only haber for this, unlike French and Italian, which use the corresponding cognates of haber for most verbs, but cognates of ser ("to be") for certain others.
- Ella se ha ido al mercado. = "She has gone to the market."
- Ellas se han ido de paseo. = "They have gone on a walk."
- ¿Habéis fregado los platos? = "Have you (all) done the washing-up?"
Tener is a verb with the basic meaning of "to have", in its essential sense of "to possess", "to hold", "to own". As in English, it can also express obligation (tener que + infinitive). It also appears in a number of phrases that show emotion or physical states, expressed by nouns, which in English tend to be expressed by "to be" and an adjective.
- Mi hijo tiene una casa nueva. = "My son has a new house."
- Tenemos que hablar. = "We have to talk."
- Tengo hambre. = "I am hungry", literally "I have hunger."
There are numerous phrases like tener hambre that are not literally translated in English, such as:
- tener hambre = "to be hungry"; "to have hunger"
- tener sed = "to be thirsty"; "to have thirst"
- tener cuidado = "to be careful"; "to have caution"
- tener __ años = "to be __ years old"; "to have __ years"
- tener celos = "to be jealous"; "to have jealousy"
- tener éxito = "to be successful"; "to have success"
- tener vergüenza = "to be ashamed"; "to have shame"
Note: Estar hambriento is a literal translation of "To be hungry", but it is rarely used in Spanish nowadays.
Negation
Verbs are negated by putting no before the verb. Other negative words (such as nunca) can either replace this no, appearing before the verb, or occur after the verb in combination with no:
- Hablo español = "I speak Spanish"
- No hablo español = "I do not speak Spanish"
- Nunca hablo español = "I never speak Spanish"
- No hablo nunca español = "I do not ever speak Spanish"
Expressing motion
Spanish verbs describing motion tend to emphasize direction instead of manner of motion. In the standard classification of how languages encode motion events, this makes Spanish a verb-framed language. This contrasts with English, where verbs tend to emphasize manner, and the direction of motion is left to helper particles, prepositions, or adverbs.
- "We drove away" = Nos fuimos en coche (literally, "We went (away) by car").
- "He swam to Ibiza" = Fue a Ibiza nadando (literally, "He went to Ibiza swimming").
- "They ran off" = Huyeron corriendo (literally, "They fled running").
- "She crawled in" = Entró a gatas (literally, "She entered on all fours").
Quite often, the important thing is the direction, not the manner. Therefore, although "we drove away" translates into Spanish as nos fuimos en coche, it is often better to translate it as just nos fuimos. For example:
- "I drove her to the airport, but she had forgotten her ticket, so we drove home to get it, then drove back towards the airport, but then had to drive back home for her passport, by which time there was zero chance of checking in..."
- La llevé al aeropuerto en coche, pero se le había olvidado el tiquete, así que fuimos a casa [en coche] por él, luego volvimos [en coche] hacia el aeropuerto, pero luego tuvimos que volver [en coche] por el pasaporte, y ya era imposible que consiguiésemos facturar el equipaje...
Modern humans reached Arabia earlier than thought, new artifacts suggest
The timing and dispersal of modern humans out of Africa have been the source of long-standing debate, though most evidence has pointed to an exodus along the Mediterranean Sea or along the Arabian coast approximately 60,000 years ago. This new research, placing early humans on the Arabian Peninsula much earlier, will appear in the 28 January issue of Science, which is published by AAAS, the nonprofit science society.
The team of researchers, including lead author Simon Armitage from Royal Holloway, University of London, discovered an ancient human toolkit at the Jebel Faya archaeological site in the United Arab Emirates. It resembles technology used by early humans in east Africa but not the craftsmanship that emerged from the Middle East, they say. This toolkit includes relatively primitive hand-axes along with a variety of scrapers and perforators, and its contents imply that technological innovation was not necessary for early humans to migrate onto the Arabian Peninsula. Armitage calculated the age of the stone tools using a technique known as luminescence dating and determined that the artifacts were about 100,000 to 125,000 years old.
"These 'anatomically modern' humans — like you and me — had evolved in Africa about 200,000 years ago and subsequently populated the rest of the world," said Armitage. "Our findings should stimulate a re-evaluation of the means by which we modern humans became a global species."
Co-author Hans-Peter Uerpmann of Eberhard Karls University of Tübingen and his team also analyzed sea-level and climate-change records for the region during the last interglacial period, approximately 130,000 years ago. They determined that the Bab al-Mandab Strait, which separates Arabia from the Horn of Africa, would have narrowed due to lower sea levels, allowing safe passage prior to and at the beginning of that last interglacial period. At that time, the Arabian Peninsula was much wetter than today, with greater vegetation cover and a network of lakes and rivers. Such a landscape would have allowed early humans access into Arabia and then into the Fertile Crescent and India, according to the researchers.
"Archaeology without ages is like a jigsaw with the interlocking edges removed — you have lots of individual pieces of information but you can't fit them together to produce the big picture," said Armitage. "At Jebel Faya, the ages reveal a fascinating picture in which modern humans migrated out of Africa much earlier than previously thought, helped by global fluctuations in sea-level and climate change in the Arabian Peninsula."
Unit 22: Music Performance Session Styles
You are to recognise three genres of music and evaluate each style with the aim of performing that style accurately. P1: Explain the fundamental stylistic elements of a wide range of musical genres. You will understand the stylistic elements across a wide range of musical genres.
Reggae
Reggae is most easily recognized by the rhythmic accents on the off-beat, usually played by guitar or piano (or both), known as the skank. This pattern accents the second and fourth beat in each bar (or the "ands" of each beat, depending on how the music is counted) and combines with the drums' emphasis on beat three to create a unique feel and sense of phrasing, in contrast to most other popular genres' focus on beat one, the "downbeat".
Stylistic elements:
The tempo of reggae is usually felt as slower than the popular Jamaican forms, ska and rocksteady, which preceded it. It is this slower tempo, the guitar/piano off-beats, the emphasis on the third beat, and the use of syncopated, melodic bass lines that differentiate reggae from other music, although other musical styles have incorporated some of these innovations separately.

A standard drum kit is generally used in reggae, but the snare drum is often tuned very high to give it a timbales-type sound. Some reggae drummers use an additional timbale or high-tuned snare to get this sound. Cross-stick technique on the snare drum is commonly used, and tom-tom drums are often incorporated into the drumbeat itself. An unusual characteristic of reggae drumming is that the drum fills often do not end with a climactic cymbal. A wide range of other percussion instruments is used in reggae. Bongos are often used to play free, improvised patterns, with heavy use of African-style cross-rhythms. Cowbells, claves and shakers tend to have more defined roles and a set pattern.

The bass guitar often plays the dominant role in reggae, and the drum and bass is often the most important part of what is called, in Jamaican music, a riddim (rhythm): a (usually simple) piece of music that is used repeatedly by different artists to write and record songs with. Literally hundreds of reggae singers have released different songs recorded over the same rhythm. The central role of the bass can be heard particularly in dub music, which gives an even bigger role to the drum and bass line, reducing the vocals and other instruments to peripheral roles. The bass sound in reggae is thick and heavy, and equalized so that the upper frequencies are removed and the lower frequencies emphasized. The bass line is often a repeated two- or four-bar riff when simple chord progressions are used.

From the late 1960s through to the early 1980s, a piano was often used in reggae to double the rhythm guitar's skank, playing the chords in a staccato style to add body, and playing occasional extra beats, runs and riffs. The piano part was widely taken over by synthesizers during the 1980s, although synthesizers have been used in a peripheral role since the 1970s to play incidental melodies and countermelodies. Larger bands may include either an additional keyboardist, to cover or replace horn and melody lines, or the main keyboardist filling these roles on two or more keyboards.

The reggae organ-shuffle is unique to reggae. Typically, a Hammond organ-style sound is used to play chords with a choppy feel; this is known as the bubble, and it may be the most difficult reggae keyboard rhythm. The organ bubble can be broken down into two basic patterns. In the first, the eighth-note beats are played with a space-left-right-left-space-left-right-left pattern, where the spaces represent downbeats that are not played; that is, the left-right-left falls on the "ee-and-a" (or the "and-2-and" if counted at double time). In the second basic pattern, the left hand plays a double chop as described in the guitar section while the right hand plays longer notes on beat 2 (or beat 3 if counted at double time) or a syncopated...
The Report of the Expert Panel on Early Reading in Ontario, 2003
Effective classroom instruction in the early grades is key to creating strong, competent readers and to preventing reading difficulties. When a child enters school, it is the teacher's role to provide effective reading instruction. Although many others share responsibility for creating a supportive learning environment, it is the teacher who has the greatest opportunity and most direct responsibility for providing the instruction that inspires and enables the child to become a lifelong reader.
In the past 30 years, much research has been conducted on how children learn to read and on the most effective strategies for supporting reading achievement. Recently there has been a convergence of evidence about the knowledge, skills, and supports that children need to become proficient readers and about how to deliver these in the classroom. With this evidence to inform their practices, teachers can now be better equipped than ever to plan and deliver effective reading instruction, and to involve the whole school, the home, and the community in helping every child become a successful reader by the end of Grade 3.
The foundations of good reading are the same for all children, regardless of their gender, background, or special learning needs. All children use the same processes in learning to read. Some will need more help than others and may need more instruction in one reading skill than another, but all children must ultimately master the same basic skills for fluency and comprehension.
The focus of this report is on reading instruction in primary classrooms, but reading does not happen in isolation. The three strands of the language curriculum – oral and visual communication, reading, and writing – are interwoven. Oral language is the basis for literacy development, particularly in the early primary years. Children need oral language and writing skills in order to be proficient in reading; conversely, they need to be proficient readers in order to further develop their oral language and writing skills. Although instructional strategies for oral language and writing are not discussed in detail here, they are essential for teaching children to read. They need to be integrated in all subject areas and encouraged at every opportunity.
This section of the report outlines the essential, interactive components of effective reading instruction. It addresses the following: the goals of reading instruction; knowledge and skills that children need to become effective readers; instruction; and assessment, evaluation, and reporting.
The Framework for Effective Early Reading Instruction (figure 1) reminds teachers to include all of these components in their classroom reading programs to ensure that their students become successful readers and achieve the expectations of the Ontario language curriculum. All of the components are important, but the degree of emphasis on specific knowledge and skills will depend on the child's age, grade, and stage of reading development.
Figure 1. A Framework for Effective Early Reading Instruction
Reading is the process of constructing meaning from a written text. Effective early reading instruction enables all children to become fluent readers who comprehend what they are reading, can apply and communicate their knowledge and skills in new contexts, and have a strong motivation to read.
The framework in figure 1 identifies three main goals for reading instruction:
Fluency is the ability to identify words accurately and read text quickly with good expression. Fluency comes from practice in reading easy books about familiar subjects. These texts primarily contain familiar, high-frequency words so that the child will encounter few unfamiliar words. As children develop fluency, they improve in their ability to read more expressively, with proper phrasing, thus gaining more of the text's meaning.
Comprehension is the ability to understand, reflect on, and learn from text. To ensure that children develop comprehension skills, effective reading instruction builds on their prior knowledge and experience, language skills, and higher-level thinking.
Motivation to read is the essential element for actively engaging children in the reading process. It is the fuel that lights the fire and keeps it burning. Children need to be immersed in a literacy-rich environment, filled with books, poems, pictures, charts, and other resources that capture their interest and make them want to read for information and pleasure.
These three goals are interconnected, and the strategies for achieving them work together synergistically.
Children need to learn a variety of skills and strategies in order to become proficient readers. In the earliest stages, they need to understand what reading is about and how it works – that what can be spoken can also be written down and read by someone else. Some children will have already grasped the basic concepts before entering school, but many will need explicit instruction to set the context for reading. When children first experience formal reading instruction in school, they need to learn specific things about oral language, letters, and words. They need to understand how print works, and be able to connect print with the sounds and words in oral language. Once they can demonstrate these skills, the emphasis shifts to developing fluency. Fluency at this level involves recognizing words in text quickly and without effort. This will allow the children to read with increasing enjoyment and understanding. Fluency is critical if they are to move from learning to read to reading to learn. The role of primary teachers, working as a team, is to move children from the earliest awareness of print to the reading-to-learn stage, where they will become independent, successful, and motivated readers.
According to research, the knowledge and skills that children need in order to read with fluency and comprehension include: oral language; prior knowledge and experience; concepts about print; phonemic awareness; letter-sound relationships; vocabulary; semantics and syntax; metacognition; and higher-order thinking skills. These are not isolated concepts taught in a lock-step sequence; they are interrelated components that support and build on each other.
Children come to reading with considerable oral language experience. They acquire most of what they know about oral language by listening and speaking with others, including their families, peers, and teachers. Through experience with oral language, children build the vocabulary, semantic knowledge (awareness of meaning), and syntactic knowledge (awareness of structure) that form a foundation for reading and writing. Children who are proficient in oral language have a solid beginning for reading. This knowledge allows them to identify words accurately and to predict and interpret what the written language says and means.
Not all children begin school with a solid foundation in oral language. Some children come from language-impoverished backgrounds where they have little opportunity to develop a rich vocabulary and complex language structures. These children may or may not be native speakers of English or French. Other children have a history of speech and language difficulties and may have smaller vocabularies and less mature grammar than their peers. Children with mild hearing impairment may find it difficult to make fine distinctions between similar speech sounds. These children require instruction that increases their oral language abilities (including phonemic awareness, vocabulary, listening comprehension, and the oral expression of ideas) in conjunction with reading skills.
It is important to remember that, although some children who speak a first language or dialect that is different from the language of instruction may begin school with a limited vocabulary in the language of instruction, they may have strong conceptual knowledge and a rich language foundation on which to build fluency and comprehension in their new language. The key for these children is to provide support for building strong bridges from the known to the new.
For the benefit of all children, teachers should constantly model language structures that are more elaborate and varied than the ones children use outside of school, and should engage the children in using these structures and variations for themselves. Children need frequent opportunities to ask and answer questions, participate in discussions, and classify information in order to develop their capacity for higher-order, critical thinking.
The importance of oral language as a foundation for reading has significant implications in the French-language school system. Because French is used by a minority of Ontarians, children have limited opportunities to hear and speak it outside of school. For some children, school is the only place where French is used systematically. It is therefore imperative that the school provide an environment where children can experience the language in a living way. Children must have many opportunities to speak French, both in the classroom and during extracurricular activities. By allowing time for in-class discussions and by providing a rich vocabulary, teachers help children to develop their fluency in French.
In order that children may understand what they are reading, it is important that they come to the text with a variety of experiences that will allow them to appreciate the concepts embedded in the text. These experiences enable them to anticipate the content, and such anticipation leads to easier decoding of the text and deeper understanding of its meaning.
Prior knowledge and experience refers to the world of understanding that children bring to school. Research on the early stages of learning indicates that children begin to make sense of their world at a very young age. In many parts of Ontario, children enter school from a variety of countries and cultures. Thus their prior knowledge and experiences may differ considerably from those of their classmates and teachers, and they may find it difficult to relate to the context and content of the resources generally used in Ontario classrooms. On the other hand, they may have a wealth of knowledge and experiences that can enhance the learning of their classmates. Teachers need to be aware of children's backgrounds, cultures, and experiences in order to provide appropriate instruction. By creating rich opportunities for all children to share prior knowledge and related experiences, teachers will engage the interest of children from various backgrounds and ensure that they will better understand what they read.
When children first encounter print, they are not aware that the symbols on the page represent spoken language or that they convey meaning. The term concepts about print refers to awareness of how language is conveyed in print. These concepts include: directionality (knowing that English or French text is read from left to right and top to bottom); differences between letters and words (words are made of letters, and there are spaces between words); awareness of capitalization and punctuation; diacritic signs (e.g., accents in French); and common characteristics of books (such as the front/back, title, and author).
Young children can be taught these concepts by interacting with and observing experienced readers (including teachers and family members) who draw their attention to print and give them opportunities to demonstrate their understanding of the concepts. Teachers need to provide children with a variety of printed materials for practice, including books, big books, charts, and environmental print (such as signs and labels).
Children need to learn that the words we say are made up of sounds. This understanding is called phonemic awareness. Research has confirmed that phonemic awareness is a crucial foundation for word identification. Phonemic awareness helps children learn to read; without it, children struggle and continue to have reading difficulties. The evidence also shows that phonemic awareness can be taught and that the teacher's role in the development of phonemic awareness is essential for most children.
Phonemic awareness and letter-sound knowledge account for more of the variation in early reading and spelling success than general intelligence, overall maturity level, or listening comprehension (National Reading Panel, 2000). They are the basis for learning an alphabetic writing system. (Learning First Alliance, 2000, p. 14)
Children who have phonemic awareness are able to identify and manipulate the individual sounds in oral language. They demonstrate this, for example, in recognizing that the spoken word "ship" consists of three distinct sounds: sh + i + p. In English there are about 44 speech sounds and in French 36. The number of individual speech sounds in other languages varies. In learning a second language, children may encounter speech sounds that do not exist in their home language, and so they may need more time to develop phonemic awareness in the language of instruction.
In order for children to develop phonemic awareness, teachers need to engage them in playing with and manipulating the sounds of language. This can be accomplished through songs, rhymes, and activities that require children to blend individual sounds together to form words in their heads, and by breaking words they hear into their constituent sounds. Blending and segmentation of speech sounds in oral language provide an essential foundation for reading and writing. Phonemic awareness prepares children for decoding and encoding the sounds of the language in print.
Building on the foundation of phonemic awareness and concepts about print, children are ready to understand that there is a way to connect the sounds they hear with the print on the page in order to make meaning. In both the English and French writing systems, one letter may not necessarily represent one single sound, and so it is important that children receive systematic and explicit instruction about correspondences between the speech sounds and individual letters and groups of letters.
Phonics instruction teaches children the relationships between the letters (graphemes) of written language and the individual sounds (phonemes) of spoken language. Research has shown that systematic and explicit phonics instruction is the most effective way to develop childrens' ability to identify words in print.
Children need a broad vocabulary of words that they understand and can use correctly to label their knowledge and experiences. The breadth and depth of a child's vocabulary provide the foundation for successful comprehension. Oral vocabulary refers to words that are used in speaking or recognized in listening. Reading vocabulary refers to words that are recognized or used in print.
Vocabulary development involves coming to understand unfamiliar words and being able to use them appropriately. It is a huge challenge for children to read words that are not already part of their oral vocabulary. To develop their students' vocabulary, teachers need to model how to use a variety of strategies in order to understand what words mean (e.g., using the surrounding context, or using smaller, meaningful parts of words, such as prefixes or suffixes). Good teaching includes selecting material for reading aloud that will expand children's oral vocabulary, and providing opportunities for children to see and use new reading vocabulary in different contexts. Recent research on vocabulary instruction indicates that children learn most of their vocabulary indirectly by engaging daily in oral language, listening to adults read to them, and reading extensively on their own. Research also shows that some vocabulary must be taught directly. This can be done by introducing specific words before reading, providing opportunities for active engagement with new words, and repeating exposure to the vocabulary in many contexts.
Even children who have a very extensive oral vocabulary may have great difficulty reading words in print because they have a small reading vocabulary. The reading vocabulary – often referred to as sight vocabulary – is determined mainly by how many times a child has seen those words in print. Children who read a lot have a large pool of words they recognize immediately on sight; children who do little reading have a limited sight vocabulary. To increase their students' sight vocabularies so they can recognize a large proportion of the words in print, teachers need to focus their instruction and practice on the most commonly used words in the language.
Although words alone carry meaning, reading for the most part involves the deciphering of phrases and sentences, which depends on both the words and how those words are organized. Therefore, it is important to spend instructional time not only on the meanings of individual words but also on the meanings of phrases and complete sentences.
Semantics refers to meaning in language, including the meaning of words, phrases, and sentences. Syntax refers to the predictable structure of a language and the ways that words are combined to form phrases, clauses, and sentences. Syntax includes classes of words (such as noun, verb, and adjective) and their functions (such as subject and object). Semantic and syntactic knowledge are important because they help children to identify words in context and lead to deeper levels of comprehension. Beginning readers may not need to be able to define noun or verb, but they need to understand that a word (like "snow") can represent a thing or an action, depending on the context. Providing this explicit understanding can be especially important for children whose first language is not the language of instruction.
Teachers need to model correct sentence structures so that children can learn to anticipate these structures when reading print. Opportunities should be provided for children to become familiar with and use the specific terminology for basic parts of speech (e.g., noun, verb, adjective, adverb) to facilitate instruction. Teachers also need to familiarize children with a variety of language structures and encourage their use of longer, more complex sentences.
Pragmatics, which is introduced in the later primary years, is the study of how people choose what they say or write from the range of possibilities available in the language, and how listeners or readers are affected by those choices. Pragmatics involves understanding how the context influences the way sentences convey information. A sentence can have different purposes depending on the situation or context in which it is used. It can be a mere statement or affirmation, but it can also be a warning, a promise, a threat, or something else. Readers with pragmatic knowledge and skills are able to decipher these different intents from the context.
Teachers in the later primary years need to show children how to use context clues that surround an unfamiliar word to help figure out the word's meaning. Because children learn most word meanings indirectly, or from context, it is important that they learn to use context clues effectively. However, context clues alone are not enough; the teacher will need to teach other word-meaning strategies to develop the child's ability to learn new words.
Comprehension is the reason for reading. If readers can identify the words but do not understand what they are reading, they have not achieved the goal of reading comprehension. To gain a good understanding of the text, children must bring to it the foundational knowledge and skills of oral language, prior knowledge and experience, concepts about print, phonemic awareness, letter-sound relationships, vocabulary, semantics, and syntax. They must integrate what they bring to the text with the text itself. In order to read to learn, children need to use problem-solving, thinking processes. They must reflect on what they know and need to know (metacognition) and draw on a variety of comprehension strategies to make sense of what they read.
Good readers plan and monitor their reading at a metacognitive level. What they are doing is thinking about the strategies they need to make sense of the text. When they run into difficulty, they evaluate their reading to determine the best strategy for improving their understanding of the text. Children who read at a metacognitive level know the strategies that affect their own reading (e.g., decoding hard words, connecting text with prior experiences, understanding word meanings, identifying main ideas, drawing inferences from the text, and synthesizing information). These children use a variety of strategies to decode and understand text and to know when and why to apply particular strategies (e.g., knowing they do not need to use a phonics strategy to identify a word they already know by sight). Their understanding of the text extends beyond the literal.
Teachers play an important role in modelling how to think metacognitively to help children figure out what they know and what they need to know. Comprehension strategies are conscious plans that readers use to make sense of the text. Research has pointed to some effective comprehension strategies that teachers can use to help children gain meaning from the text. These include teaching children to ask questions such as those found in table 1.
| Question | Purpose of the question |
| How does this connect with what I already know? | activating relevant prior knowledge before, during, and after reading |
| What pictures does this text create in my mind? | creating visual and other sensory images from text during and after reading |
| How can I use the pictures and the text to help me understand? | drawing inferences from the text to form conclusions, make critical judgements, and create unique interpretations |
| What are the most important ideas and themes in the text? | using the main ideas to provide clues about meaning |
| How can I say this in my own words? | synthesizing what they read |
| Does this make sense? | monitoring comprehension |
| Why did the author write this? | exploring the author's intent |
| How is this text like other texts that I have read? | finding clues in the text's structure |
The development of higher-order thinking skills is essential throughout the primary grades. In the early stages of reading development, higher-order thinking can be developed at the oral level through teacher read-alouds and shared reading. In the reading-to-learn stage, classroom teachers need to ask children questions that challenge them to move beyond what they recall of the text and on to what they understand through application, analysis, synthesis, and evaluation. Children need to have opportunities to manipulate and criticize the concepts and understandings of what they have read. Children will formulate opinions and substantiate their thinking. They are no longer simply passive readers.
Bloom's taxonomy is a useful tool for helping teachers engage children in higher-order thinking when they read. 2 Table 2 shows that, as children apply higher-order thinking, they are able to draw more meaning from what they learn and apply the learning in more sophisticated ways. Although thinking skills alone do not make a child an effective reader, they are essential for reading. Higher-order thinking is what enables children to achieve the provincial standard for reading, which is level 3 in the Ontario curriculum.
| Level | Definition | What the Student Will Do |
| Evaluation | Judging the value of ideas, materials, or products | Give value. Make choices. Arrange ideas. Judge ideas. Present choices. |
| Synthesis | Putting together constituent parts or elements to form a new whole | Use prior knowledge to activate new knowledge. Change existing ideas. Create new ideas. |
| Analysis | Breaking down an idea into its constituent parts | Look at parts. See relationships. Organize parts. |
| Application | Using information in new situations or to solve a new problem. Uses knowledge. | Apply previously learned information to another situation. |
| Comprehension | Understanding the information being communicated but not relating it to other material or ideas | Organize previously learned material in order to rephrase it, describe it in own words, explain it, or predict implications or effects on the basis of the known facts. |
| Knowledge (memory) | Learning the information | Recall or recognize bits of information. |
Read-aloud, shared reading, guided reading, guided comprehension, independent reading, phonics, and word study provide instruction that gives children the opportunity to experience and enjoy authentic texts and to practise the skills and strategies necessary for fluency and comprehension.
Reading is a meaning-making process that involves a great deal of thinking, problem solving, and decision making by both the teacher and the child. Comprehensive reading instruction teaches the child to use a variety of skills to decode, read fluently, and understand the text. No single skill in this complex interaction is sufficient on its own, and the teacher must be careful not to overemphasize one skill at the expense of others. It is important that teachers understand the interdependent nature of the skills being taught, and that competent readers integrate all sources of information as they engage in reading meaningful texts.
The teacher should provide children with planned activities for before, during, and after reading. For example:
Before beginning to read, the teacher and students establish the purpose for reading. Together they consider what they already know about the topic or genre and use the title, headings, table of contents or index, and new, unfamiliar vocabulary to enhance their predictions.
During reading, the students respond to the text by searching for meaning, identifying the main ideas, predicting and verifying predictions, and building a coherent interpretation of the text. Students bring their experiences of the world and literature into the reading activity. The teacher directs the attention of students to subtleties in the text, points out challenging words and ideas, and identifies problems and encourages the students to predict solutions.
After reading, the students reflect on their learning as they apply the knowledge acquired during reading, or transfer that knowledge to other contexts (e.g., by retelling, summarizing, creating graphic organizers, or putting pictures in sequential order).
With all of this instruction, the teacher provides continuous role modelling, coaching, guiding, and feedback, and is always building on the children's prior knowledge and experiences. The teacher also ensures that children are focused and engaged in the reading process, and monitors their time on task.
Research has shown that phonics and word study are valuable strategies for improving children's ability to recognize words and decode text. Although these skills alone are not enough, they are essential building blocks for becoming an effective reader. They may be taught out of context but must be practised in authentic contexts, and reading material that is engaging and meaningful for the children should be used.
Phonics is a systematic instructional approach that links the foundation of phonemic awareness with children's growing knowledge of letter-sound relationships to enable children to decode words and read. Instruction begins with the most common and more easily discerned letter-sound relationships and progresses to more complex spelling patterns, which include larger chunks of words, such as syllables. Teachers need to introduce the letter-sound correspondences in a planned, sequential manner so that children have time to learn, practise, and master them. Letter formation is a part of phonics instruction that reinforces children's memory for letter-sound correspondences. To understand the usefulness of letter-sound correspondences and letter formation, children need to apply their knowledge by seeing, saying, and printing words in interesting and authentic contexts.
Word study gives children the opportunity to practise high-frequency words so that they can read them automatically (word identification), and to learn word-solving strategies so that they will be able to read partially familiar or unfamiliar words (word knowledge). Word study improves the child's ability to decode words independently, which is important for both fluency and comprehension. The teacher provides the children with an organized environment that includes charts, lists, word walls, and other resources. Activities can involve the whole class, small groups, or children working independently, and may include: searching for big words or mystery words; recognizing whole words, word parts, root words, and compound words; adding prefixes and suffixes; using known words to get to unknown words; and recognizing letter patterns.
To become fluent readers, children need to be able to read high-frequency words automatically. The most common words in texts include articles, pronouns, prepositions, conjunctions, and everyday verbs such as to be and to have. The strategies for teaching these words are different from the strategies for teaching more engaging but less frequent words, such as the names of people and the words for colours and interesting concepts. A word like dinosaur, for example, represents an interesting idea, and so children are more likely to remember it and recognize it when they see it in print.
Lists of grade-appropriate sight words should be used to guide instruction. Sight words need to be selected for their frequency of occurrence in print. Teachers need to expose children regularly to these most common words and give children plenty of meaningful practice in reading them in well-written books on engaging topics, so that children are able to recognize the words instantly by sight. If teachers provide enough opportunities for practice, children will develop the ability to read many sight words that are phonetically irregular, and will have mastered a large proportion of the words they will encounter in books.
In read-aloud(s) the teacher reads to the whole class or to a small group, using material that is at the listening comprehension level of the children. The content may focus on a topic related to a curriculum expectation in another subject area, such as mathematics, science, or social studies.
Reading aloud to children helps them to develop a love of good literature, motivation to pursue reading on their own, and familiarity with a variety of genres, including non-fiction. It provides them with new vocabulary, exposes them to a variety of literature, and contributes to their oral and written language development. Reading aloud should occur every day in the early stage of reading instruction to stimulate the children's interest in books and reading.
In shared reading the teacher guides the whole class or a small group in reading enlarged text that all the children can see – for example, a big book, an overhead, a chart, a poster, or a book. The text can be read several times, first for the children and then with the children joining in. Shared reading involves active participation and considerable interaction on the part of students and teachers. It is both enjoyable and motivating for children. The teacher takes into account the difficulty of the text and the skills, knowledge, and experiences of the children in structuring this activity.
Shared reading provides the teacher with the opportunity to model effective reading; promote listening comprehension; teach vocabulary; reinforce concepts about books and print and letter-sound relationships; and build background knowledge on a range of subjects.
Shared reading provides a bridge to guided reading. It should occur daily in the early stages of reading instruction and less frequently in later stages.
Guided reading is a small-group, teacher-directed activity. It involves using carefully selected books at the children's instructional level. The teacher supports a small group of children as they talk, read, and think their way through a text. Children can be grouped for guided reading by reading ability or specific instructional goals. The group composition is fluid and changes according to the teacher's observations and assessments.
Guided reading provides opportunities to integrate children's growing knowledge of the conventions of print, of letter-sound relationships, and of other foundational skills in context. Through modelling and instruction, guided reading enables teachers to extend children's vocabulary development and their knowledge and use of appropriate comprehension strategies. It gives the teacher the opportunity to observe reading behaviours, identify areas of need, and allow children to develop more independence and confidence as they practise and consolidate reading behaviours and skills.
Guided reading provides a bridge to independent reading and can help children develop the necessary higher-order thinking skills.
Children learn comprehension skills in a variety of situations, using many levels of texts and different text types. The focus of guided comprehension is on direction, instruction, application, and reflection.
Focused instruction in comprehension skills – such as previewing; self-questioning; making links to self, text, and others; visualizing; using graphophonic, syntactic, and semantic cueing systems; monitoring, summarizing, and evaluating – is provided first. The children then apply the comprehension strategies in teacher-guided small groups and student-facilitated comprehension activities, such as literature circles, questioning the author, or reciprocal teaching.
Children work with varying degrees of support and use texts at their instructional level and independent level of reading. The teacher and the children reflect on performance, share experiences, and set new goals for learning. The levelled texts and the organization of the small group will change as the children's knowledge and reading skills increase.
During purposeful and planned independent reading, the children choose their own books according to their interest and ability. The text should be chosen carefully so that each child can read with a high degree of success. Children can be taught to select appropriate independent reading material and can share this task with the teacher. Emergent readers can use this independent reading time to practise reading small, predictable stories, as well as books that have been used in shared and guided reading.
When teachers plan independent reading for children, they need to provide children with time to engage in discussion and reflection. Independent reading is preceded and followed by discussion and dialogue with the teacher and/or peers. The teacher is always observing, listening, and gathering information about the children's reading behaviour.
Purposeful and planned independent reading provides opportunities for children to build self-confidence, reinforce skill development, enhance fluency, build memory for language structures and vocabulary, and promote comprehension and the motivation to read. In addition, independent reading gives children time to get more information about a specific subject of interest.
It is important to note that the American National Reading Panel, in Put Reading First, their comprehensive meta-analysis of reading research, found considerable evidence to support having children read aloud with guidance and feedback, but no evidence to confirm that instructional time spent on silent independent reading with minimal guidance and feedback improves reading fluency and overall reading achievement (Center for the Improvement of Early Reading Achievement [CIERA], 2001, p. 25). This does not mean that teachers should abandon independent reading in the classroom, but they should use texts that match the child's independent reading level and ensure that each child receives feedback (from the teacher, a peer, or a volunteer) to enhance fluency, comprehension, and the motivation to read. These practices help children to decode with increasing fluency and comprehension.
Assessment begins with what children know; the evidence for what they know is in what they can do. (Fountas and Pinnell, 1996, p. 73)
There is a direct and continuous link between teaching and assessment. Ongoing assessment must be frequent, well-planned, and organized, so that teachers are able to help each child move towards his or her full potential in reading. Assessment often involves techniques that teachers already use, such as observations and checklists. Knowing the developmental stages of reading, the associated reading skills, and the components and strategies of effective reading instruction helps the teacher to administer the right assessment and evaluation tools and interpret the results correctly. This knowledge, together with the assessment data, enables teachers to provide differentiated instruction in order to ensure the best learning opportunities for all children, through direct, explicit instruction – either in large groups, in small groups, or at the individual level, depending on the children's needs. Timely assessment is also important for identifying the small percentage of children who cannot be adequately served by good classroom instruction and who will need interventions and extra support to help them acquire the knowledge and skills for reading.
Instead of teaching in a whole-class fashion to a hypothetical average student, we need to take into account the range of development within our classrooms, designing a curriculum that meets all our children where they are and takes each child further. Our classroom-based system of assessment should wreak havoc with any instructional plan that doesn't allow us the elasticity and breadth necessary to teach the full range of readers. Our assessments should nudge us, as teachers, to look at all our children and their work, and to look at ourselves and our work. (Calkins, 2001, p. 157)
Assessment includes gathering, recording, and analysing information about a child's knowledge and skills and, where appropriate, providing descriptive feedback to help the child improve. (Assessment is different from evaluation, which involves making an informed judgement about a child's achievement at a point in time.)
Diagnostic assessment occurs before reading instruction begins so that the child's prior learning and current reading level can be identified and instructional priorities for the child can be determined. Diagnostic assessment can inform the teacher about detailed strategies that the child uses in the reading process. On the basis of this structured observation of the child's progress, the teacher plans the next steps in learning. Diagnostic reading tools include running records, observation surveys, cloze texts, miscue analysis, and retells.
Formative assessment occurs on an ongoing basis to track the child's progress towards achievement targets. It is formative in the sense that it provides information about learning that is still forming or in progress. The child may receive the feedback immediately or at a specific stage in the learning process. Formative assessment helps the teacher to make programming decisions, such as whether and how to adapt instruction to meet the needs of specific children. The majority of assessment time is spent on formative assessment. Resources include teacher observations, student portfolios, student logs, and self-reflection activities.
Summative assessment occurs at the end of a learning module or specific time period. Its purpose is to provide information needed to make judgements (evaluations) about student understandings. The tools for summative assessment include tests and performance-based tasks.
Young children show their understanding by doing, showing, and telling. Assessment strategies need to capture this doing, showing, and telling by watching, listening, and probing. Hence, observation is an integral part of all other assessment strategies. Reading assessments should not generally require the child to use writing strategies.
Table 3 gives examples of assessment strategies that can help a teacher to assess specific reading skills. Some of these strategies, such as running records, miscue analysis, and cloze procedure, are described in the ministry's curriculum documents.
[Table 3 pairs each instructional context (Phonics and Word Study, Read-Aloud, Shared Reading, Guided Reading, Guided Comprehension, and Independent Reading) with what can be assessed in that context and the assessment tools and strategies that can be used.]
The Kindergarten curriculum identifies ten expectations for reading, but does not distinguish categories or levels of achievement. (See The Kindergarten Program [Ontario Ministry of Education, 1998, pp. 14–15].) For Grades 1 to 3, the expectations become more specific. Teachers assess children not only for individual reading skills, such as phonemic awareness, concepts about print, and vocabulary, but also according to the four categories of achievement from the Ontario language curriculum, which are reasoning, communication, organization of ideas, and application of language conventions.
Evaluation is an informed judgement about the quality of a child's work at a point in time. For children in Kindergarten, the evaluation is largely a description of what the teacher has observed in the classroom. The teacher assigns a value (level, mark, comment) that represents the child's achievement of the curriculum expectations, using the reading exemplars and rubrics produced by the Ministry of Education as a guide to ensure consistency.
Reporting relates to the communication of accurate, comprehensive, and timely information about student achievement to parents, students, and/or other educators. One tool for this is the provincial report card, which students and their families receive three times per year, starting in Grade 1. However, the report card is only one of many ways that teachers can communicate results to children and parents. For Kindergarten children, as with all primary children, reporting should be ongoing and should include a variety of formal and informal methods, ranging from formal written reports and discussions with parents and the child to informal notes to parents and conversations with them. (See the Guide to the Provincial Report Card [Ontario Ministry of Education, 1998].)
Reporting provides an opportunity to involve the parents in helping their child to progress as a reader. For reporting to be effective, the teacher must be able to clearly explain the results and next steps. Teachers should discuss specific recommendations for helping the child to reach the provincial standard of level 3. Suggestions might include strategies for individual, classroom, or home-school support.
The Framework for Effective Early Reading Instruction (on page 12) lists several practices that support reading achievement in young children. They create the conditions for teachers to provide focused, explicit instruction that addresses the specific needs of individual children and groups of children. These practices are woven throughout the report and include:
a balance of direct instruction, guided instruction, independent learning, and practice
large group, small group, and individual instruction, discussion, and collaboration
a variety of assessment and evaluation techniques to inform program planning and instruction
the integration of phonics and word study in reading, writing, and oral language instruction
an uninterrupted literacy block each day
parental and community involvement
high-quality literature and levelled texts
a variety of genres, narratives, informational texts, and electronic media
authentic and motivating literacy experiences and learning activities
interventions for children who are at risk of not learning to read
a supportive classroom culture and environment that promotes higher-order thinking skills
guidance, coaching, and feedback for children
effective classroom organization and management
1. In French, the written language differs from the oral language, and this difference can have an impact on reading. Certain alphabetic symbols may be present in writing but not be pronounced (e.g., in ils marchent).
2. Bloom's taxonomy is a widely used way of classifying educational objectives, developed in the 1950s by a group of researchers headed by Benjamin Bloom of the University of Chicago. | http://www.edu.gov.on.ca/eng/document/reports/reading/effective.html |
4.375 | Rearrange the groups from the previous day.
Say to students, "In the previous lesson, you determined the
placement of frets on a stringed instrument. Today, you will decide
where to put frets on a fretless instrument." Many stringed instruments, such
as violins, do not contain frets. Musicians play these instruments by
sliding their fingers up and down the strings to the appropriate spots,
but there are no frets to guide them. For this activity, explain to
students that they will be determining where the frets would be (i.e.,
where a musician should place his or her fingers).
If available, have each group of students work with a fretless
instrument. (You might wish to borrow instruments from the music
department for this activity.) Alternatively, you can hand out the Not To Fret Activity Sheet, which shows the neck of a fretless stringed instrument.
The sheet shows the location of the nut and the 12th fret, and students
are to determine the placement of the 1st through 11th frets for this instrument.
Students should remember that the distance from the nut to the
12th fret is half the distance from the nut to the bridge. On the
activity sheet, the distance from the nut to 12th fret is about 19 cm,
so the distance from the nut to the bridge is 38 cm. That means that
the 1st fret will occur at 38 × 2^(-1/12) ≈ 38 × 0.9439 ≈ 35.87 cm
from the bridge, or 38 − 35.87 = 2.13 cm from the nut. The same process
can be used to find the 2nd fret (that is, multiply 35.87 × 0.9439, and
then subtract the result from 35.87), but a more insightful solution is
this: students should realize that the distances between frets form a
geometric sequence, too. Therefore, if the distance from the nut to the
1st fret is 2.13 cm, then the distance from the 1st fret to the
2nd fret is approximately 2.13 × 0.9439 = 2.01 cm. By continually
multiplying by 0.9439, the successive distances between frets can be
found. On a calculator, this can be accomplished easily by multiplying
the previous answer by 0.9439 and then repeatedly hitting the Enter key.
If students are using the activity sheet, they should indicate
the location of the frets by drawing them. If using an actual
instrument, students should use chalk or masking tape to indicate the
location of the frets, to prevent damage to the instrument.
The following form can be used to determine the fret placement
on various instruments. It is currently set with a scale length of 38
to match the instrument depicted on the Not To Fret Activity Sheet, but its value may be changed. Additionally, the number of frets can be changed from 12, but the maximum is 30.
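The interactive form itself cannot be shown here. As a stand-in, the short sketch below (again an addition, with names chosen for illustration) performs the same calculation: it returns each fret's distance from the nut for any scale length and any number of frets up to the form's maximum of 30.

```python
def fret_positions_from_nut(scale_length, num_frets=12):
    """Distance of each fret from the nut, using d_n = L * (1 - 2^(-n/12))."""
    num_frets = min(num_frets, 30)  # the form allows at most 30 frets
    return [scale_length * (1 - 2 ** (-n / 12)) for n in range(1, num_frets + 1)]

for n, d in enumerate(fret_positions_from_nut(38), start=1):
    print(f"fret {n:2d}: {d:5.2f} cm from the nut")
# Fret 12 comes out at 19.00 cm, half of the 38 cm scale length, as expected.
```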
Distribute the Placing Frets Activity Sheet, which allows students to consider the two types of
curves—discrete and continuous—that surfaced during this lesson.
Placing Frets Activity Sheet
As a class, discuss which letters in the equations y = ar^n and y = ar^x
are variables and which are constants. This is an opportunity to deepen
student understanding of variables beyond being just a letter that
represents a number. In this case, y is the length of string
from a fret or finger position to the bridge; as such, it varies as the
finger position changes. Students should clearly see that r is a
constant since that ratio is the same (approximately) regardless of the
instrument or from which adjacent lengths it is based. Students may
have a hard time recognizing that a is a constant because a changes from instrument to instrument; however, for a particular instrument, the value of a does not change as the finger position changes, and it represents the length from nut to bridge. The value of n or x relates to a particular fret or finger position, so they are variables.
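One optional way to make this distinction concrete (an illustration added here, not part of the activity sheet) is to evaluate y = ar^x both at whole-number fret positions and at an in-between value, as a violinist's sliding finger would allow:

```python
a = 38              # constant for this instrument: nut-to-bridge length in cm
r = 2 ** (-1 / 12)  # constant for every instrument: ratio between semitones

def string_length(x):
    """Length from finger position x to the bridge; x need not be an integer."""
    return a * r ** x

# Discrete case: on a fretted instrument, x only takes fret numbers n = 0, 1, 2, ...
print([round(string_length(n), 2) for n in range(5)])   # 38.0, 35.87, 33.85, ...

# Continuous case: a fretless player can stop the string anywhere, e.g. x = 2.5
print(round(string_length(2.5), 2))                     # about 32.89
```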
To conclude the lesson, display the last page of the Overheads,
which contains a set of summary questions. Allow students to answer
these questions as well as to ask any questions that they may have.
Observe student conversation about continuous versus discrete graphs.
Are they connecting the "slide effect" possible on a violin with
continuous changes in x in the equation y = ar^x and the numbered frets as related to the values for n (n = 0, 1, 2, 3, …) in the equation y = ar^n?
Question for Students
Which type of stringed instrument, fretted or fretless, gives a musician more flexibility in playing accurate pitches, if the instrument itself is out of tune?
[With a fretless instrument, a musician slides his or her fingers to move from note to note. Because the finger position is decided by distance from other notes and is not dictated by fret position, a musician may be more likely to play correct pitches on a fretless instrument.]
- Did students understand the differences between continuous and discrete
functions? If not, what could be done to make the differences more apparent? | http://illuminations.nctm.org/Lesson.aspx?id=1947 |
4.0625 | A volcano erupting.
- Plate Volcanoes - The majority of volcanoes are formed when two of the Earth’s plates meet and collide. These volcanoes actually occur on the ocean floor.
- Shield Volcanoes - Shield volcanoes are extremely broad and flat when compared to other volcanoes.
- Composite Volcanoes - Composite volcanoes, also known as strato-volcanoes, are formed by alternate layers of rock fragments and lava. The shape of a composite volcano is large and cone-like.
- Caldera Volcanoes - Caldera volcanoes are formed from considerable amounts of magma erupting from sub-surface magma chambers. When the magma erupts, it leaves an empty space below the surface. The eruption of a caldera volcano generally has the coolest lava; but, they are the most dangerous because their eruption might also cause tsunamis, large pyroclastic surges, and widespread falling of ash.
- Decade Volcanoes - These are sixteen volcanoes that have been identified by scientists as noteworthy due to their history of large eruptions and their closeness to populated areas. They include: Avachinsky-Koryaksky in Russia, Nevado de Colima in Mexico, Mount Etna in Italy, Galeras in Colombia, Mauna Loa in the United States, Mount Merapi in Indonesia, Mount Nyiragongo in Africa, Mount Rainier in the United States, Sakurajima in Japan, Santa Maria in Guatemala, Santorini in Greece, Taal Volcano in the Philippines, Teide in Spain, Ulawun in New Britain, Mount Unzen in Japan, and Mount Vesuvius in Italy.
- Pressure builds in a volcano until it must be released. The liquid and heat build up and force the lighter, melted rock buried deep below toward the surface of the Earth, causing an eruption.
- Natural radioactive decay that occurs within the Earth causes a large amount of heat to be produced, which causes more rocks to melt into magma which travels towards the surface.
- A high and low pressure disturbance causes the magma to rise to the surface and spill over the top.
- Most eruptions occur when gas expands inside the Earth, reducing pressure and causing aggressive volcanic behavior.
- Expelled magma on the surface of the Earth can take up to several hundred years to cool depending on its composition and location.
- The molten lava that flows down the side of a volcano is composed of a mixture of gases, liquid rock, silica and crystals.
- The rock element of the magma is categorized as either Rhyolite, Andesite, or Basalt.
- A major area of volcanic activity is called the "Ring of Fire" which extends around the Pacific Plate from Alaska down both sides of the Pacific Ocean, around Australia and down to the Antarctic continent.
- An example of a volcano is Mount St. Helens in Washington state in the U.S.
- Other examples are the eruptions of Krakatau in 1883 and of Mount Tambora in 1815, the two largest and most destructive explosive eruptions since the 1800s.
The definition of a volcano is a rupture in the Earth's crust where molten lava, hot ash, and gases from below the Earth’s crust escape into the air.
Types of Volcanoes
If the amount of magma is significant enough, then the magma rises above the surface of the ocean. This is known as an island. When the two plates collide and one plate forces the other plate beneath it, a different reaction occurs.
If this happens, the friction caused during this reaction melts the plate that is beneath the other plate. This then causes magma to rise up, creating a volcano. The volcanoes that form by this method are usually the most dangerous and the most volatile ones.
Their shape is created by a significant amount of lava running down the surface of the volcano, and then cooling. The eruptions of shield volcanoes aren't as severe as those of other volcanoes. When a shield volcano erupts, gases escape and the lava rises to the surface to gently flow down the sides of the volcano.
Facts About Volcanoes
- a vent in the earth's crust through which molten rock (lava), rock fragments, gases, ashes, etc. are ejected from the earth's interior: a volcano is active while erupting, dormant during a long period of inactivity, or extinct when all activity has finally ceased
- a cone-shaped hill or mountain, wholly or chiefly of volcanic materials, built up around the vent, usually so as to form a crater
Origin of volcano: Italian; from Classical Latin Volcanus, Vulcan
noun pl. vol·ca·noes or vol·ca·nos
- a. An opening in the earth's crust from which lava, ash, and hot gases flow or are ejected during an eruption. b. A similar opening on the surface of another planet.
- A usually cone-shaped mountain formed from the materials issuing from such an opening.
Origin of volcano: Italian, from Spanish volcán or Portuguese volcão, both probably from Latin volcānus, vulcānus, fire, flames, from Volcānus, Vulcan.
cutaway of an erupting volcano
(plural volcanos or volcanoes) | http://www.yourdictionary.com/volcano |
4.1875 | Warbler finch (Certhidea olivacea)
Weight: 8 g (2)
Classified as Least Concern (LC) on the IUCN Red List (1).
With its remarkably warbler-like appearance and behaviour (3), it is not surprising that, during his famous visit to the Galapagos, Charles Darwin erroneously classified this species (2) (4). It was only following investigations of Darwin’s specimen collection by John Gould, that the warbler finch was discovered to be one of thirteen species of finch endemic to the Galapagos, which would later become known as Darwin’s finches. Each of Darwin’s finches has evolved a distinct beak shape in order to exploit different food sources (2). The warbler finch possesses a thin, probing bill, finer than that of the other species, which is ideal for feeding on small insects (5). The plumage of the warbler finch is unremarkable, with uniform dull, olive-grey feathers found in both sexes (4).
Although recent studies indicate that there are in fact two separate species of warbler finch, the green warbler finch (Certhidea olivacea) and the grey warbler finch (Certhidea fusca), they are assessed together as a single species, Certhidea olivacea,on the 2008 IUCN Red List. Despite having little difference in appearance, they are genetically distinct and occupy different islands in the Galapagos (6).
The most widespread of all Darwin’s finches, the warbler finch is found on every major island in the Galapagos. The green warbler finch mainly occupies larger, inner islands of the archipelago, while the grey warbler finch inhabits the smaller, outer islands (6).
The green warbler finch is only found in the Scalesia Zone, a lush, humid evergreen forest dominated by the daisy tree (Scalesia pedunculata), which occurs between elevations of 300 and 700 metres (5) (6). In contrast, the grey warbler finch inhabits the arid zone, where the vegetation comprises scattered deciduous trees, shrubs and cacti (5).
Although many Darwin’s finch species are insectivorous, only the warbler finch appears to be capable of taking prey on the wing. In addition, this species will use its thin, pointed bill to probe amongst moss, bark and leaves for spiders and insects (5).
Darwin’s finches usually breed during the hot, wet season when food is most abundant. Monogamous, lifelong breeding pairs are common, although mate changes and breeding with more than one partner have also been observed. Breeding pairs maintain small territories, in which they construct a small dome-shaped nest with an entrance hole in the side. Generally a clutch of three eggs is laid, which are incubated by the female for about twelve days, and the young brooded for a further two weeks before leaving the nest. The short-eared owl (Asio flammeus), frequently preys on the nestlings and juvenile Darwin’s finches, while adults are occasionally taken by Galapagos hawks (Buteo galapagoensis) and Lava herons (Butorides sundevalli) (2).
Increasing levels of human activity on the Galapagos are causing significant threats to the islands’ native wildlife. Darwin’s finches, in particular, are vulnerable to habitat destruction, invasion by non-native competitors and predators, and the introduction of diseases such as Avian pox (7). Despite these threats, the warbler finch is not currently considered to be threatened, as its population is large, and not undergoing a major decline (8).
The majority of the Galapagos archipelago forms part of the Galapagos National Park, a World Heritage Site. A management plan is in place for the islands, and the Ecuadorian government and non-governmental organisations are working to conserve its unique biodiversity (9). More specifically, scientists at the Charles Darwin Research Station are working to improve our understanding of Darwin's finches to ensure their conservation. This includes monitoring of populations and investigating introduced diseases (7).
To learn more about the conservation of Darwin’s finches visit:
- Charles Darwin Foundation:
For more information on this and other bird species please see:
- BirdLife International:
- Endemic: a species or taxonomic group that is only found in one particular country or geographic area.
- Monogamous: having only one mate during a breeding season, or throughout the breeding life of a pair.
IUCN Red List (November, 2011) | http://www.arkive.org/warbler-finch/certhidea-olivacea/factsheet |
4.0625 | 1 Answer
When creating and running an experiment, it is important that scientific research methods are followed. The first step is to form a research question that you can test. The second step is to form a hypothesis. A hypothesis is what you propose the answer to the question will be. Next you will want to design a way to test your hypothesis. This typically includes choosing and identifying the variables that you will test to prove or disprove the hypothesis. These are called the dependent and independent variables. Next you will run the experiment. Finally you will report the results of the experiment. This will include running any statistical analysis, describing the subjects and the research methods, and noting any weaknesses that your research design may have. You will include the results of your experiment and state whether your hypothesis was supported or disproved here.
| http://www.enotes.com/homework-help/which-does-scientist-do-well-planned-controlled-436171 |
4.0625 | * The bloods of different people have different antigenic and immune
properties, so that antibodies in the plasma of one blood will react with
antigens on the surfaces of the red cells of another blood type
* Antigen (Agglutinogen) – Red cell membrane
* Antibody (Agglutinin) – Plasma
* Types :
ABO Blood group
Rhesus (Rh) Blood group
* Karl Landsteiner’s law :
* If an antigen is present in the RBC’s of an individual, the corresponding
antibody must be absent from the plasma
* If an antigen is absent in the RBC’s of an individual, the corresponding
antibody must be present in the plasma
* Group A – antigen A on the red cells, antibody anti-B (β) in the plasma
* Group B – antigen B, antibody anti-A (α)
* Group AB – antigens A and B, no antibodies
* Group O – no antigens, antibodies anti-A and anti-B
The ABO gene locus is located on chromosome 9
* Immediately after birth, the quantity of agglutinins in the plasma is
almost zero. Two to 8 months after birth, an infant begins to produce
* Anti-A agglutinins when type A agglutinogens are not present in the cells,
and anti-B agglutinins when type B agglutinogens are not in the cells.
* A maximum titer is usually reached at 8 to 10 years of age, and this
gradually declines throughout the remaining years of life.
* But why are these agglutinins produced in people who do not have the
respective agglutinogens in their red blood cells ?
* Small amounts of type A and B antigens enter the body in food, in
bacteria, and in other ways, and these substances initiate the
development of the anti-A and anti-B agglutinins.
* Agglutinogens A & B 1st appear in the 6th week of fetal life. (1/5 →
puberty → adolescence.
* When bloods are mismatched so that anti-A or anti-B plasma agglutinins
are mixed with red blood cells that contain A or B agglutinogens,
respectively, the red cells agglutinate as a result of the agglutinins’
attaching themselves to the red blood cells.
* Because the agglutinins have two binding sites (IgG type) or 10 binding
sites (IgM type), a single agglutinin can attach to two or more red blood
cells at the same time, thereby causing the cells to be bound together by
the agglutinin. This causes the cells to clump, which is the process of "agglutination."
* These clumps plug small blood vessels throughout the circulatory system.
* During ensuing hours to days, either physical distortion of the cells or
attack by phagocytic white blood cells destroys the membranes of the
agglutinated cells, releasing hemoglobin into the plasma, which is called
“hemolysis” of the red blood cells.
The red blood cells are first separated from the plasma and diluted with saline.
One portion is then mixed with anti-A agglutinin and another portion with anti-B agglutinin.
After several minutes, the mixtures are observed under a microscope.
If the red blood cells have become clumped—that is, “agglutinated”—one knows
that an antibody antigen reaction has resulted.
Anti – A Sera (Blue)
Anti – B Sera (Yellow)
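The typing logic can be summarized in a short sketch; this code is an illustration added to these notes rather than part of the original slides, and it simply encodes which antigens each ABO group carries.

```python
# Infers the ABO group from the agglutination pattern seen when the diluted
# red cells are mixed with anti-A serum and with anti-B serum.

ANTIGENS = {"A": {"A"}, "B": {"B"}, "AB": {"A", "B"}, "O": set()}

def abo_group(clumps_with_anti_a, clumps_with_anti_b):
    """Return the group whose antigens match the observed agglutination."""
    for group, antigens in ANTIGENS.items():
        if (("A" in antigens) == clumps_with_anti_a and
                ("B" in antigens) == clumps_with_anti_b):
            return group

print(abo_group(True, False))   # clumps only with anti-A serum  -> "A"
print(abo_group(True, True))    # clumps with both sera          -> "AB"
print(abo_group(False, False))  # clumps with neither serum      -> "O"
```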
* The Rh system is the second most significant blood-group system in
human-blood transfusion. The most significant Rh antigen is the D
antigen, because it is the most likely to provoke an immune system response.
* Anti-D antibodies are not usually produced by sensitization against environmental substances.
* D-negative individuals can produce IgG anti-D antibodies following a sensitizing event:
* a fetomaternal transfusion of blood from a fetus in pregnancy or, occasionally, a blood transfusion with D-positive RBCs.
* When red blood cells containing Rh factor are injected into a person
whose blood does not contain the Rh factor—that is, into an Rh-negative
person—anti-Rh agglutinins develop slowly, reaching maximum
concentration of agglutinins about 2 to 4 months later.
* If an Rh negative person has never before been exposed to Rh positive
blood, transfusion of Rh-positive blood into that person will likely cause
no immediate reaction.
* However, anti-Rh antibodies can develop in sufficient quantities during
the next 2 to 4 weeks to cause agglutination of those transfused cells that
are still circulating in the blood.
* These cells are then hemolyzed by the tissue macrophage system. Thus, a
delayed transfusion reaction occurs, although it is usually mild.
* On subsequent transfusion of Rh-positive blood into the same person,
who is now already immunized against the Rh factor, the transfusion
reaction is greatly enhanced and can be immediate and as severe as a
transfusion reaction caused by mismatched type A or B blood.
* Erythroblastosis fetalis is a disease of the fetus and newborn child
characterized by agglutination and phagocytosis of the fetus’s red blood
* In most instances of erythroblastosis fetalis, the mother is Rh negative
and the father Rh positive. The baby has inherited the Rh-positive
antigen from the father.
* The mother develops anti-Rh agglutinins from exposure to the fetus’s Rh
* Mother’s agglutinins diffuse through the placenta into the fetus and
cause red blood cell agglutination.
* 1st delivery – no harm
* The incidence rises progressively with subsequent pregnancies.
*Rapid production – early form of RBC – nucleated blastic forms
*Anemic, sometimes severe
*Agglutination – hemolysis – hemoglobin – bilirubin – jaundice
*Hepatomegaly, splenomegaly (Icterus gravis neonatorum)
*Hydrops fetalis – edema – cardiac failure – intrauterine death
* One treatment is to replace the neonate's blood with Rh-negative blood while the neonate's own Rh-positive blood is being removed – an exchange transfusion
* Rh immunoglobulin globin, an anti-D antibody is administered to the
expectant mother starting at 28 to 30 weeks of gestation.
* The anti-D antibody is also administered to Rh-negative women who
deliver Rh-positive babies to prevent sensitization of the mothers to the
* This greatly reduces the risk of developing large amounts of D antibodies
during the second pregnancy.
* MOA : to inhibit antigen-induced B lymphocyte antibody production in
the expectant mother. It also attaches to D antigen sites on Rh-positive
fetal red blood cells that may cross the placenta and enter the circulation
of the expectant mother.
* ABO antibodies (mainly IgM) can't cross the placenta; Rh antibodies (IgG) cross the placenta
* ABO: natural antibodies are present; Rh: no natural antibody
* ABO antigens are also found in tissues and body fluids; Rh antigens are integral membrane proteins found only on red cells
4.125 | A pit latrine or pit toilet is a type of toilet that collects human feces in a hole in the ground. They use either no water or one to three liters per flush with pour-flush pit latrines. When properly built and maintained they can decrease the spread of disease by reducing the amount of human feces in the environment from open defecation. This decreases the transfer of pathogens between feces and food by flies. These pathogens are major causes of infectious diarrhea and intestinal worm infections. Infectious diarrhea resulted in about 0.7 million deaths in children under five years old in 2011 and 250 million lost school days. Pit latrines are the lowest cost method of separating feces from people.
A pit latrine generally consists of three major parts: a hole in the ground, a slab or floor with a small hole, and a shelter. The shelter is often known as an outhouse. The pit is typically at least 3 meters (10 feet) deep and 1 m (3.2 feet) across. The World Health Organization recommends they be built a reasonable distance from the house balancing issues of easy access versus that of smell. The distance from groundwater and surface water should be as large as possible to decrease the risk of groundwater pollution. The hole in the slab should not be larger than 25 centimeters (9.8 inches) to prevent children falling in. Light should be prevented from entering the pit to reduce access by flies. This may require the use of a lid to cover the hole in the floor when not in use. When the pit fills to within 0.5 meters (1.6 feet) of the top, it should be either emptied or a new pit constructed and the shelter moved or re-built at the new location. The management of the fecal sludge removed from the pit is complicated. There are both environment and health risks if not done properly.
A basic pit latrine can be improved in a number of ways. One includes adding a ventilation pipe from the pit to above the structure. This improves airflow and decreases the smell of the toilet. It also can reduce flies when the top of the pipe is covered with mesh (usually made out of fiberglass). In these types of toilets a lid need not be used to cover the hole in the floor. Other possible improvements include a floor constructed so fluid drains into the hole and a reinforcement of the upper part of the pit with bricks, blocks, or cement rings to improve stability.
As of 2013 pit latrines are used by an estimated 1.77 billion people. This is mostly in the developing world as well as in rural and wilderness areas. In 2011 about 2.5 billion people did not have access to a proper toilet and one billion resort to open defecation in their surroundings. Southern Asia and Sub-Saharan Africa have the poorest access to toilets. In developing countries the cost of a simple pit toilet is typically between 25 and 60 USD. Ongoing maintenance costs are between 1.5 and 4 USD per person per year which are often not taken into consideration. In some parts of rural India the "No Toilet, No Bride" campaign has been used to promote toilets by encouraging women to refuse to marry a man who does not own a toilet.
Pit latrines are sometimes also referred to as "dry toilets" but this is not recommended because a "dry toilet" is an overarching term used for several types of toilets and strictly speaking only refers to the user interface. Depending on the region, the term "pit latrine" may be used to denote a toilet that has a squatting pan with a water seal or siphon (more accurately termed a pour-flush pit latrine - very common in South East Asia for example) or simply a hole in the ground without a water seal (also called a simple pit latrine) - the common type in most countries in sub-Saharan Africa. Whilst a dry toilet can be with or without urine diversion, a pit latrine is almost always without urine diversion. The key characteristic of a pit latrine is the use of a pit, which infiltrates liquids into the ground and acts as a device for storage and very limited treatment.
Improved or unimproved sanitation
A pit latrine may or may not count towards the Millennium Development Goals (MDG) target of increasing access to sanitation for the world's population, depending on the type of pit latrine: A pit latrine without a slab is regarded as unimproved sanitation and does not count towards the target. A pit latrine with a slab, a ventilated improved pit latrine and a pour flush pit latrine connected to a pit or septic tank are counted as being "improved sanitation" facilities as they are more likely to hygienically separate human excreta from human contact.
Size of the drop hole
The user positions themself over the small drop hole during use. The size of the feces drop hole in the floor or slab should not be larger than 25 centimeters (9.8 inches) to prevent children falling in. Light should be prevented from entering the pit to reduce access by flies. This requires the use of a lid to cover the hole in the floor when not in use. However, in practice, such a lid is not commonly used as it is easy to lose it or for the lid to get very filthy.
Squatting pan or toilet seat
On top of the drop hole there can either be nothing (this is the simplest form of a pit latrine) or there can be a squatting pan, seat (pedestal) or bench which can be made of concrete, ceramic, plastic or wood.
A shelter, shed, small building or "super-structure" houses the squatting pan or toilet seat and provides privacy and protection from the weather for the user. Ideally, the shelter or small building should have handwashing facilities available inside or on the outside (e.g. supplied with water from a rainwater harvesting tank on the roof of the shelter) although this is unfortunately rarely the case in practice. In the shelter, anal cleansing materials (e.g. toilet paper) and a solid waste bin should also be available. A more substantial structure may also be built, commonly known as an outhouse.
Locating the pit
Liquids leach from the pit and pass through the unsaturated soil zone (which is not completely filled with water). Subsequently, these liquids from the pit enter the groundwater where they may lead to groundwater pollution. This is a problem if a nearby water well is used to supply groundwater for drinking water purposes. During the passage in the soil, pathogens can die off or be adsorbed significantly, mostly depending on the travel time between the pit and the well. Most, but not all, pathogens die within 50 days of travel through the subsurface.
The degree of pathogen removal strongly varies with soil type, aquifer type, distance and other environmental factors. For this reason, it is difficult to estimate the safe distance between a pit and a water source - a problem that also applies to septic tanks. Detailed guidelines have been developed to estimate safe distances to protect groundwater sources from pollution from on-site sanitation. However, these are mostly ignored by those building pit latrines. In addition, household plots are of a limited size and therefore pit latrines are often built much closer to groundwater wells than can be regarded as safe. This results in groundwater pollution and household members falling sick when using this groundwater as a source of drinking water.
As a very general guideline it is recommended that the bottom of the pit should be at least 2 m above groundwater level, and a minimum horizontal distance of 30 m between a pit and a water source is normally recommended to limit exposure to microbial contamination. However, no general statement should be made regarding the minimum lateral separation distances required to prevent contamination of a well from a pit latrine. For example, even 50 m lateral separation distance might not be sufficient in a strongly karstified system with a downgradient supply well or spring, while 10 m lateral separation distance is completely sufficient if there is a well-developed clay cover layer and the annular space of the groundwater well is well sealed.
If the local hydrogeological conditions (which can vary within a space of a few square kilometres) are ignored, pit latrines can cause significant public health risks via contaminated groundwater. In addition to the issue of pathogens, there is also the issue of nitrate pollution in groundwater from pit latrines. Elevated nitrate levels in drinking water from private wells are thought to have caused cases of blue baby syndrome in children in rural areas of Romania and Bulgaria in Eastern Europe.
A "partially lined" pit latrine is one where the upper part of the hole in the ground is lined. Pit lining materials can include brick, rot-resistant timber, concrete, stones, or mortar plastered onto the soil. This partial lining is recommended for those pit latrine used by a great number of people — such as a public restroom in rural areas, or in a woodland park or busy lay-by, rest stop or other similarly busy location — or where the soils are unstable in order to increase permanence and allow emptying of the pit without it collapsing easily. The bottom of the pit should remain unlined to allow for the infiltration of liquids out of the pit.
In Dar es Salaam, Tanzania pit latrines costing up to $300 are 10 ft (3.0 m) deep and lined with concrete slabs, while cheaper “temporary toilets” consist of a pits lined with two stacked oil drums or a stack of tires. These latrines, which are often used by several households, may be emptied by vacuum truck, manual digging, or overflowing into streets during rains.
A fully lined pit latrine has concrete lining also at the base so that no liquids infiltrate into the ground. One could argue that this is no longer a "pit" latrine in the stricter sense. The advantage is that no groundwater contamination can occur. The major disadvantage is that a fully lined pit latrine fills up very fast (as the urine cannot escape the pit) which results in high costs to empty and maintain the latrine. Increased odour can also be an issue as the pit content is much wetter and emits more odour. This type of pit latrine is used only in special circumstances, e.g. in denser settlements where groundwater protection is paramount.
Pit latrines are often built in developing countries even in situations where they are not recommended. These include (adapted from ):
- Frequent flooding, resulting in inoperable toilet systems and the contamination of water resources;
- Unfavourable soil conditions, such as unstable or rocky soil and high water table, making pit-based sanitation difficult and expensive;
- When groundwater is the primary source of drinking water and is likely to be contaminated by pit-based sanitation (for example in denser settlements or with unfavourable hydrogeological conditions);
- Limited land space, which restricts the excavation of new pits when full pits are not emptied;
- When indoor installations are preferred, as they provide greater comfort and security at night, making them more accessible for all.
Pit latrines collect human feces in a hole in the ground. The principle of a pit latrine is that all liquids that enter the pit—in particular urine and water used for anal cleansing—seep into the ground (the only exception being fully lined pit latrines, described above).
Well maintained pit latrine at a rural household near Maseru, Lesotho.
School children in Zimbabwe digging a shallow pit for an Arborloo toilet (a variation of a pit latrine), Epworth in Harare, Zimbabwe.
Traditional pit latrine in North Kamenya, Kenya.
This display shows children what toilets in rural areas in Germany used to look like in the recent past.
Abandoned pit latrine in the peri-urban area of Durban, South Africa.
Interior of an outhouse, the structure usually built over the pit to provide privacy.
Ventilated improved pit
The ventilated improved pit latrine (VIP) is a pit latrine with a black pipe (vent pipe) fitted to the pit and a screen (flyscreen) at the top outlet of the pipe. VIP latrines are designed to overcome the disadvantages of simple pit latrines, i.e. fly and mosquito nuisance and unpleasant odors. The smell is carried upwards by the chimney effect, and flies are prevented from leaving the pit and spreading disease.
The principal mechanism of ventilation in VIP latrines is the action of wind blowing across the top of the vent pipe. The wind creates a strong circulation of air through the superstructure, down through the squat hole, across the pit and up and out of the vent pipe. Unpleasant fecal odors from the pit contents are thus sucked up and exhausted out of vent pipe, leaving the superstructure odor-free. In some cases solar-powered fans are added giving a constant outwards flow from the vent pipe.
Flies searching for an egg-laying site are attracted by fecal odors coming from the vent pipe, but they are prevented from entering by the flyscreen at its outlet. Some flies may enter the pit via the squat hole and lay their eggs there. When new adult flies emerge, they instinctively fly towards light. However, if the latrine is dark inside, the only light they can see is at the top of the vent pipe. Since the vent pipe has a flyscreen at the top, the flies cannot escape, and eventually they die and fall back into the pit.
To ensure that there is a flow of air through the latrine there must be adequate ventilation of the superstructure. This is usually achieved by leaving openings above and below the door, or by constructing a spiral wall without a door.
Covering the feces with an absorbent decreases smell and discourages flies. These may include soil, sawdust, ash or lime among others. In developing countries, the use of absorbents in pit toilets is not commonly practiced.
Twin pit designs
A further possible improvement is the use of a second pit which is used in alternation with the first pit. The first pit can then rest for the time it takes to fill up the second pit. When the second pit is also full, the first pit is emptied. The fecal sludge collected in the first pit has in the meantime undergone some degree of pathogen reduction, although this is unlikely to be complete. This is a common design for so-called twin-pit pour-flush toilets and increases the safety of those having to enter the pit. VIPs are also sometimes built with two pits, although one problem can be that the users do not stick to this alternation method and instead fill up both pits at the same time.
Pour-flush pit latrine
In a pour-flush pit latrine, a squatting toilet with a water seal (U-trap or siphon) is used over one or two offset pits instead of a plain hole or seat. These types of toilets therefore do require water for flushing, but otherwise have many of the same characteristics as simple pit latrines and are for this reason subsumed under the term "pit latrine". The fecal sludge removed from the full pits of twin-pit pour-flush latrines is somewhat safer to handle and reuse than the fecal sludge from single-pit pour-flush latrines, although significant health risks remain in either case and are a cause for concern.
A cat hole is a one-time use pit toilet often utilized by campers, hikers and other outdoor recreationalists. It is also called the "cat method" and simply means digging a little hole just large enough for the feces of one defecation event which is afterwards covered with soil.
The requirements for safe pit emptying and fecal sludge management are often forgotten by those building pit latrines, as the pit will "only" fill up in a few years' time. However, in many developing countries safe fecal sludge management practices are lacking, causing public health risks as well as environmental pollution. Fecal sludge that has been removed from pits manually or with vacuum tankers is often dumped into the environment indiscriminately, leading to what has been called "institutionalized open defecation".
When the pit of a pit latrine is full, the latrine stops being usable. The time it takes to fill the pit depends on the volume of the pit and the number of users, but also on the soil permeability and the groundwater level. It can typically take between one and ten years, or even longer in exceptional cases, to fill up the pit. At that point, the pit latrine is either covered and abandoned (with a new pit latrine built if space on the property permits) or the pit is emptied. The new pit latrine may in some cases reuse the shelter (superstructure) of the previous one. For pit latrines in more densely populated areas or at schools, the full pits are more likely to be emptied so that the toilets can continue to be used at the same location. The emptying can be done manually with shovels and buckets, with manually powered pumps, or with motorized pumps mounted on a vacuum truck which carries a tank for storage. For the fecal sludge to be pumpable, water usually needs to be added to the pit and the content stirred up, which is messy and smelly.
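As a rough illustration of how pit volume and the number of users interact to determine fill time, the sketch below uses an assumed sludge accumulation rate per person per year; the rate used here is purely illustrative, and real values vary widely with diet, anal cleansing materials, soil permeability and groundwater level.

```python
# Rough illustration only: estimates how long a pit takes to fill.
# The accumulation rate is an assumed, illustrative figure, not a design value.

def estimate_fill_time_years(pit_volume_m3, num_users,
                             accumulation_rate_m3_per_person_year=0.05):
    """Return a rough estimate of the years needed to fill a pit."""
    sludge_added_per_year = num_users * accumulation_rate_m3_per_person_year
    return pit_volume_m3 / sludge_added_per_year

# Example: a 2 cubic-metre pit shared by a household of six people
print(round(estimate_fill_time_years(2.0, 6), 1), "years")  # roughly 6.7 years
```

Doubling the number of users halves the estimate, which is why shared or public latrines fill much faster than household ones.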
The fecal sludge may be transported by road to a sewage treatment facility, or composted elsewhere. There are numerous licensed waste hauling companies providing such services where they are needed in developed countries, although in developing countries such services are not well regulated and are often carried out by untrained, unskilled and unprotected informal workers.
When managed and treated correctly to achieve a high degree of pathogen kill, fecal sludge from pit latrines could be used as a fertilizer due to its high nitrogen, phosphorus and organic matter content. However, it is hard to ensure that this is done in a safe manner. The number of viable helminth eggs is commonly used as an indicator organism to assess the pathogen load in a fecal sludge sample. Helminth eggs are very resistant to most treatment methods and are therefore a good indicator.
A range of commercial products are available which claim to help reduce the volume of feces in a latrine, and reduce odor and fly problems. They are collectively described as a pit additive and many of them are based on the concept of effective microorganisms. The intention is to add specific strains of microbes to aid the decomposition process - but their effectiveness is disputed and recent research found no effect in scientific test conditions.
Wood ash or sawdust can also be added on top of the feces to decrease the smell. However, this is rarely done for pit latrines (it is more commonly done for dry toilets), as users find it too much of a hassle and generally do not expect a pit latrine to be odour-free, preferring to put up with some smell. In the case of Arborloos it is recommended to add some leaves, soil or compost into the pit after defecation.
Advantages of pit latrines may include:
- Can be built and repaired with locally available materials
- Low (but variable) capital costs depending on materials and pit depth
- Small land area required
Measures to improve access to safe water, sanitation and better hygiene, which include the use of pit latrines instead of open defecation, are believed to be able to prevent nearly 90% of deaths due to infectious diarrhea.
Disadvantages of pit latrines may include:
- Flies and odours are normally noticeable to the users
- The toilet has to be outdoors with the associated security risks if the person is living in an insecure situation
- Low reduction in organic matter content and pathogens
- Possible contamination of groundwater with pathogens and nitrate
- Costs to empty the pits may be significant compared to capital costs
- Pit emptying is often done in a very unsafe manner
- Sludge (called fecal sludge) requires further treatment and/or appropriate discharge
- Pit latrines are often relocated or re-built after some years (when the pit is full and is not emptied) and thus need more space than, for example, urine-diverting dry toilets. People are also less willing to invest in a high-quality superstructure, as it will have to be dismantled at some point.
In developing countries the construction cost for a simple pit toilet is between about US$25 and 60. This cost figure has a wide range because the costs vary a lot depending on the type of soil, the depth and reinforcement of the pit, the superstructure that the user is willing to pay for, the type of toilet squatting pan or toilet seat chosen, the cost of labour, construction materials (in particular the cost of cement can differ a lot from one country to the next), the ventilation system and so forth.
Rather than looking only at the construction cost, the whole of life cost (or life-cycle cost) should be considered, as the regular emptying or re-building of pit latrines may add a significant expense to the households in the longer term.
Society and culture
Pit latrines may or may not be an enjoyable experience to use. Problems occur when a pit latrine is shared by too many people, is not cleaned daily and is not emptied when the pit is full. In such cases, flies and odour can be a massive nuisance. Pit latrines are also usually dark places which are difficult to keep clean, and handwashing facilities are often missing. For these reasons, shared pit latrines can be quite uncomfortable to use in developing countries. There might also be cultural preferences for open defecation, which can be difficult to overcome with unattractive toilet designs. This is currently being discussed amongst experts, for example in the case of rural India, where behaviour change campaigns are needed to reduce open defecation.
In 2011 about 2.5 billion people did not have access to a proper toilet and one billion defecated in the open. Southern Asia and Sub-Saharan Africa have the poorest access to toilets. Pit latrines are often promoted by government agencies and NGOs in rural areas as a low-cost quick-fix solution (even in areas where other types of toilets, such as dry toilets, might be the better solution, for example due to a high groundwater table). In the rural part of Haryana state in India, the "No Toilet, No Bride" or "No loo, no 'I do'" slogans have been used to promote toilets (usually pour-flush pit latrine toilets) by encouraging women to refuse to marry a man who does not own a toilet.
The community-led total sanitation campaigns which have been successful in many developing countries usually also result in the construction of pit latrines (typically with pour flush in Asia, without pour flush in sub-Saharan Africa) as a first step to get away from open defecation.
- WEDC. Latrine slabs: an engineer’s guide, WEDC Guide 005 (PDF). Water, Engineering and Development Centre The John Pickford Building School of Civil and Building Engineering Loughborough University. p. 22. ISBN 978 1 84380 143 6.
- Tilley, E., Ulrich, L., Lüthi, C., Reymond, Ph. and Zurbrügg, C. (2014). Compendium of Sanitation Systems and Technologies (2 ed.). Dübendorf, Switzerland: Swiss Federal Institute of Aquatic Science and Technology (Eawag). ISBN 9783906484570.
- "Simple pit latrine (fact sheet 3.4)". who.int. 1996. Retrieved 15 August 2014.
- "Call to action on sanitation" (PDF). United Nations. Retrieved 15 August 2014.
- Walker, CL; Rudan, I; Liu, L; Nair, H; Theodoratou, E; Bhutta, ZA; O'Brien, KL; Campbell, H; Black, RE (Apr 20, 2013). "Global burden of childhood pneumonia and diarrhoea.". Lancet 381 (9875): 1405–16. doi:10.1016/s0140-6736(13)60222-6. PMID 23582727.
- François Brikké (2003). Linking technology choice with operation and maintenance in the context of community water supply and sanitation (PDF). World Health Organization. p. 108. ISBN 9241562153.
- Graham, JP; Polizzotto, ML (May 2013). "Pit latrines and their impacts on groundwater quality: a systematic review.". Environmental health perspectives 121 (5): 521–30. doi:10.1289/ehp.1206028. PMID 23518813.
- Progress on sanitation and drinking-water - 2014 update. (PDF). WHO. 2014. pp. 16–20. ISBN 9789241507240.
- Selendy, Janine M. H. (2011). Water and sanitation-related diseases and the environment challenges, interventions, and preventive measures. Hoboken, N.J.: Wiley-Blackwell. p. 25. ISBN 9781118148600.
- Sanitation and Hygiene in Africa Where Do We Stand?. Intl Water Assn. 2013. p. 161. ISBN 9781780405414.
- Global Problems, Smart Solutions: Costs and Benefits. Cambridge University Press. 2013. p. 623. ISBN 9781107435247.
- Stopnitzky, Yaniv (12 December 2011). "Haryana's scarce women tell potential suitors: "No loo, no I do"". Development Impact. Blog of World Bank. Retrieved 17 November 2014.
- WHO and UNICEF definitions of improved drinking-water source on the JMP website, WHO, Geneva and UNICEF, New York, accessed on December 16, 2015
- DVGW (2006) Guidelines on drinking water protection areas - Part 1: Groundwater protection areas. Bonn, Deutsche Vereinigung des Gas- und Wasserfaches e.V. Technical rule number W101:2006-06
- Nick, A., Foppen, J. W., Kulabako, R., Lo, D., Samwel, M., Wagner, F., Wolf, L. (2012). Sustainable sanitation and groundwater protection - Factsheet of Working Group 11. Sustainable Sanitation Alliance (SuSanA)
- ARGOSS (2001). Guidelines for assessing the risk to groundwater from on-site sanitation. NERC, British Geological Survey Commissioned Report, CR/01/142, UK
- Moore, C., Nokes, C., Loe, B., Close, M., Pang, L., Smith, V., Osbaldiston, S. (2010) Guidelines for separation distances based on virus transport between on-site domestic wastewater systems and wells, Porirua, New Zealand, p. 296
- Wolf, L., Nick, A., Cronin, A. (2015). How to keep your groundwater drinkable: Safer siting of sanitation systems - Working Group 11 Publication. Sustainable Sanitation Alliance
- Buitenkamp, M., Richert Stintzing, A. (2008). Europe's sanitation problem - 20 million Europeans need access to safe and affordable sanitation. Women in Europe for a Common Future (WECF), The Netherlands
- George, Rose (7 July 2009). The Big Necessity: The Unmentionable World of Human Waste and Why It Matters. Henry Holt and Company. pp. 83–85. ISBN 978-1-4299-2548-8.
- Rieck, C., von Münch, E., Hoffmann, H. (2012). Technology review of urine-diverting dry toilets (UDDTs) - Overview on design, management, maintenance and costs. Deutsche Gesellschaft fuer Internationale Zusammenarbeit (GIZ) GmbH, Eschborn, Germany
- Ahmed,M.F. & Rahman,M.M. (2003). Water Supply & Sanitation: Rural and Low Income Urban Communities, 2nd Edition, ITN-Bangladesh. ISBN 984-31-0936-8.
- Still, David; Foxon, Kitty (2012). Tackling the challenges of full pit latrines : report to the Water Research Commission. Gezina [South Africa]: Water Research Commission. ISBN 9781431202935.
- Bakare, BF; Brouckaert, CJ; Foxon, KM; Buckley, CA (2015). "An investigation of the effect of pit latrine additives on VIP latrine sludge content under laboratory and field trials". Water SA 41 (4): 509. doi:10.4314/wsa.v41i4.10. ISSN 0378-4738.
- Foxon, K., Still, D. (2012). Do pit additives work? Water Research Commission (WRC), University of KwaZulu-Natal, Partners in Development (PiD), South Africa
- WHO, UNICEF (2009). Diarrhoea : why children are still dying and what can be done (PDF). New York: United Nations Children's Fund. p. 2. ISBN 978-92-806-4462-3.
- McIntyre, P., Casella D., Fonseca, C. and Burr, P. Priceless! Uncovering the real costs of water and sanitation (PDF). The Hague: IRC. ISBN 978-90-6687-082-6.
- Clasen, Thomas; Boisson, Sophie; Routray, Parimita; Torondel, Belen; Bell, Melissa; Cumming, Oliver; Ensink, Jeroen; Freeman, Matthew; Jenkins, Marion; Odagiri, Mitsunori; Ray, Subhajyoti; Sinha, Antara; Suar, Mrutyunjay; Schmidt, Wolf-Peter (2014). "Effectiveness of a rural sanitation programme on diarrhoea, soil-transmitted helminth infection, and child malnutrition in Odisha, India: a cluster-randomised trial". The Lancet Global Health 2 (11): e645. doi:10.1016/S2214-109X(14)70307-9. PMID 25442689.
- "Sanitation" (PDF). United Nations. 2013. Retrieved 15 August 2014.
- Single pit latrine on eCompendium website, the online version of the Eawag-Sandec Compendium
- WEDC knowledge database filtered for WEDC guide and latrine (WEDC, Loughborough University, UK)
- Photos of pit latrines: Search for "pit latrine" in the Sustainable Sanitation Alliance photo database on flickr
- Documents on groundwater pollution from on-site sanitation in SuSanA library
- Storage and Treatments On-site storage and treatment technologies in Sustainable Sanitation and Water Management (SSWM) toolbox | https://en.wikipedia.org/wiki/Pit_toilet |
4.03125 | How Your Retina Works
The eye is one of the most amazing organs in the body. To understand how artificial vision is created, it's important to know about the role that the retina plays in how you see. Here is a simple explanation of what happens when you look at an object.
The retina is complex in itself. This thin membrane at the back of the eye is a vital part of your ability to see. Its main function is to receive and transmit images to the brain. These are the three main types of cells in the eye that help perform this function:
- rods
- cones
- ganglion cells
There are about 125 million rods and cones within the retina that act as the eye's photoreceptors. Rods are the more numerous of the two photoreceptors, outnumbering cones 18 to 1. Rods are able to function in low light (they can detect a single photon) and can create black-and-white images without much light. When enough light is available, cones give us the ability to see color and detail. Cones are responsible for allowing you to read this article, because they allow us to see at high resolution.
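To make that ratio concrete, a quick back-of-the-envelope split of the 125 million figure (assuming the 18-to-1 ratio holds uniformly across the retina, which is a simplification) looks like this:

```python
# Split ~125 million photoreceptors using the 18:1 rod-to-cone ratio quoted above.
total_photoreceptors = 125_000_000
rods_per_cone = 18

cones = total_photoreceptors / (rods_per_cone + 1)
rods = total_photoreceptors - cones
print(f"Approximate rods:  {rods:,.0f}")   # about 118,400,000
print(f"Approximate cones: {cones:,.0f}")  # about 6,600,000
```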
The information received by the rods and cones is then transmitted to the nearly 1 million ganglion cells in the retina. These ganglion cells interpret the messages from the rods and cones and send the information on to the brain by way of the optic nerve.
There are a number of retinal diseases that attack these cells, which can lead to blindness. The most notable of these diseases are retinitis pigmentosa and age-related macular degeneration. Both of these diseases attack the retina, rendering the rods and cones inoperative, causing either loss of peripheral vision or total blindness. However, it's been found that neither of these retinal diseases affects the ganglion cells or the optic nerve. This means that if scientists can develop artificial cones and rods, information could still be sent to the brain for interpretation. | http://science.howstuffworks.com/innovation/everyday-innovations/artificial-vision1.htm |
4 |
Activity 2: Story, The Talking Stick
Activity time: 10 minutes
Materials for Activity
- A copy of the story "The Talking Stick"
Preparation for Activity
- Read the story.
- Optional: Using instructions in Activity 3, Talking Sticks, make a sample talking stick. Set it aside to show the children at the end of the story.
Description of Activity
The story retells a Lakota Sioux legend. When the U.S. government began settling on the native peoples' homelands, the Sioux, including the Dakota, Lakota, and Nakota tribes, occupied the Great Plains, a part of North America that includes grasslands, hills, and streams but not a lot of forests. Summers for them were hot and the winter was long and cold. The Sioux culture centered on using horses to hunt buffalo for food.
Gather participants so that they can see and hear the leader telling the story.
Tell the children you will tell them a very old legend from the Sioux tribe of American Indians-a story that explains how the Sioux began using talking sticks in their tribal meetings. If you have a talking stick to use as an example, conceal it until the end of the story when the grandmother makes the first talking stick.
Read or tell the story. Once you have finished, lead a discussion with these questions:
- Whose idea was the talking stick?
- Who was the first human to make a talking stick?
- What was the eagle trying to do by suggesting talking sticks?
- What gave the eagle the idea for making talking sticks?
- Can you think of any other object the grandmother could have made to use the same way? | http://www.uua.org/re/tapestry/children/lovesurrounds/session10/170186.shtml |
4.03125 | Grade Five Music Theory - Lesson 13: Composing a Melody - Instruments
- Please note: I recommend you read Lesson 12: General Composition Tips before reading this lesson.
- Voice composition will be covered in detail in the next lesson.
- You may also be interested in our Video Course on Instrument Composition (preview the course below!)
The Grade Five Music Theory Instrumental Composition Question
In your Grade Five Music Theory exam, you’ll be given the first two bars of a melody, with the key and time signatures. (It could be in treble clef or bass clef.)
The instructions will ask you to choose from two instruments and to continue writing the melody for the instrument you’ve chosen.
(There is no wrong answer: choose the instrument you are most comfortable with.)
The choice of instruments will be from different families, for example the violin and oboe, or the bassoon and cello.
You will never have to write for an instrument on which we usually play several notes at the same time, like the piano, harp or organ.
Here’s an example question:
Compose a melody for solo violin or oboe, using the given opening. Include tempo and other performance directions, and any that might be specially needed for the instrument you’ve chosen. The finished melody should be eight bars long.
It’s a good idea to choose an instrument that you know something about! If you play the clarinet and you have the choice of bassoon or cello, you’ll probably write a better melody for the bassoon, as it is also a wind instrument.
Whichever instrument you choose, you will need to know its range, i.e. what its lowest and highest notes are.
As long as you don’t start using lots of ledger lines you should stay within the range required, but don’t forget that some instruments are less or more effective in different registers. For example, although the flute can play from middle C, its very lowest notes are quite weak and much less bright than the octave above.
Complete details about the ranges of all the standard orchestral instruments can be found in the reference section of this site.
Don’t forget to write which instrument you have chosen on your exam paper!
The melody should be 8 bars in total. You usually get around 2 bars to start you off, so you will have to write 6.
Notice whether the melody starts with a complete bar or not - if it starts with an incomplete bar, then your last bar should make up the remaining beats (so you’ll finish up with 7 complete bars and 2 incomplete bars).
Here, for example, the first note is an up beat. The last bar and the first bar added together make one complete bar.
You don't need to add bar numbers, but you might find it useful to do so.
Only the first and last bars can be "incomplete". You will probably need to use two staves to write your whole composition out, so make sure the first bar on the second stave is complete: don't split a bar across two staves.
Don’t forget to finish with a double barline!
Whatever instrument you’re writing for, you will need to include performance directions for the player.
You must include:
- Use the accepted Italian or German musical terms. You won't get extra marks for using an obscure term, so it's a good idea to play it safe and use a common term such as "Moderato".
- If you pick a very fast or very slow tempo, you might make the composition particularly awkward to play for the chosen instrument, so unless you are 100% sure, use "Moderato" or "Andante", which are both moderate tempos.
- You can use a metronome marking if you prefer, but be sure to use a number which is actually found on a metronome (you couldn't use the number 59, for example!). Also, make sure you use the note value indicated by the time signature. If the time signature is 4/4, you would use a crotchet (quarter note), because the time signature means "count crotchets". If the time signature is 6/8, though, you would have to use a dotted crotchet (dotted quarter note), because 6/8 is a compound time signature.
- The player needs to know what dynamic the piece begins at, so be sure to add a starting dynamic (e.g. "mf"), directly under the first note.
- You should also indicate some gradual increases/decreases of volume with hairpins. Make sure that the beginning and end of each hairpin is accurately placed under specific notes.
- Make sure that all the dynamics you write are logical. If you write "mp < pp" it is very confusing, since you have indicated a crescendo which gets quieter!
- It's a good idea to make the loudest part of the melody happen somewhere around bars 6-8, as this is where we expect to hear a musical climax.
- Adding the right articulation indications will increase the marks you get for this question.
- If there are no articulation markings, a wind player (woodwind and brass) has to attack each and every note with the tongue, and a string player has to change the bow direction with every note. This is not only tiring for the player, but it makes the music sound rather jagged and unlyrical.
- The legato marking (or "slur") is used to show that wind players should play a group of notes with one breath, or that string players should play a group of notes with one sweep of the bow. Be sure to use legato markings in your composition. You can slur all the notes in one bar, or half bars, or groups of faster notes, like quavers (eighth notes). Don't slur more than one bar though - a wind player is likely to run out of breath, and a string player will run out of bow! Slurs should be written on the opposite side of the note to the stem, but it's ok to write them on the other side if there isn't enough space. Here are some different ways you could slur the same melody, slurring the whole bar, the half bar, and the pairs of quick notes:
- You can also use other types of articulation, such as staccato, accents and tenuto, but these are optional.
- Whatever articulation you use, try to be consistent throughout the melody. For example, if you slurred each half bar in bars 1-4, then you should do the same in bars 5-8. If you put staccato on the semiquavers (16th notes) in bar 2 (for example), then do the same for any similar notes/rhythms in the rest of the piece.
Tempo, dynamics and articulation are the only performance directions that you must add. There are some other markings which are optional:
Wind players will need somewhere to breathe. You may indicate places where the player can grab a quick intake of air by using a small comma - above the stave.
Although you don't have to put breathing marks in, you do have to make sure that your melody is playable by a human player! If you slurred all the notes across four bars and put a tempo of "molto adagio", you would end up with a dead flautist. Think about what you are writing!
It can be nice to add a pause symbol on the last note of your piece.
String parts sometimes include special symbols which tell the player whether to play a note with their bow moving upwards or downwards.
However, bowing directions are normally only used a) in beginner's study books to help them out or b) when the direction of the bow is not what the string player would expect.
What does a string player expect? A "down bow" is used on a "down beat". A "down beat" is another word for a strong beat; the first beat of a bar is always a "down beat". (It gets this name because a conductor moves his/her hand downwards to show the first beat of each bar). An "up beat" occurs before a down beat, and the player uses an "up bow". So, if your composition begins with an up beat (or "anacrusis" or "pick up"), then a competent string player would automatically play it with an "up bow" and then use a "down bow" for the first beat of bar 1, and so on.
For this reason, it's completely unnecessary to cover your 8-bar string melody with bowing directions. You won't get extra marks for them, and you might end up losing marks if they don't make logical sense.
If, on the other hand, you happen to be an advanced string player and are confident that you know how to use bowing directions for good effect, then by all means use them. Here are the symbols: you should learn them in any case, as you may be tested on them in other parts of the exam paper.
String instruments are capable of playing more than one note at the same time (this is called "double-stopping"). A whole range of sounds can be produced by playing the strings in different ways, such as "pizzicato" (plucking the strings with the fingers), "tremolo" (moving the bow rapidly back and forth to produce a shimmering effect) or "spiccato" (bouncing the bow lightly off the string).
Wind instruments can produce effects such as "flutter-tonguing".
Both wind and string instruments can play "vibrato", and brass/string instruments can play with a mute.
None of these special effects are necessary for your exam composition, and we would strongly recommend avoiding them. The examiner is not looking for fireworks: they are looking for a balanced, well-constructed composition. You can read up on the ABRSM's marking criteria here. | http://www.mymusictheory.com/for-students/grade-5/59-13-composing-a-melody-for-instruments |
4.09375 | Let's talk about the wavelength of a periodic wave. The wavelength is the word used to describe the physical size of one whole cycle of a periodic wave.
So that means that I got to go from one place where the periodic wave is doing something to another place where the periodic wave is doing the same thing, alright? So we can think of it as the distance between two consecutive peaks or two consecutive troughs. One easy way to think about this is that if you're sitting on a boat in the ocean and you're at a peak in an ocean wave, and you look down to where the next peak is, the distance to that is the wavelength. So if I've got a periodic wave here, the wavelength is the distance between two peaks or the distance between two troughs. But that's not the only way that I can do it. I can also say well, jeez. Here the wave's doing something. Wave's doing the same thing there. So that has to be the wavelength also. Alright.
Now, notice that it's not just where the wave is in the same place, because this distance is certainly not a wavelength. So it's got to be in the same place doing the same thing. And then you get a wavelength.
Now, what's this weird symbol I'm using for wavelength? This is a Greek letter lambda and we write a lambda like this. It's real real real real simple. Back slash, forward slash. There you go, lambda.
So let's talk about some problems that I've seen on tests asking about wavelength. So suppose that we're given a wave, a periodic wave like this and we've got some points labeled, alright? And we're asked to find the wavelength given a bunch of different situations. Alright. So let's look at the first one, a to e is 6 meters, a to e. Alright. So what we've got to do is figure out how many wavelengths appear between a and e, alright? When I start at a, and here I go.
Well, that's a whole wave because I started off at a and c is doing the same thing. So that's a whole wave and then I got another whole wave. So if this is two waves and the distance is 6, then that means two lambda must equal 6. So lambda is 3 meters. Does that make sense? Not difficult at all. Alright. How about the next one, where a to c is 5 meters? a to c is exactly one wavelength. So that tells us the answer immediately, 5 meters.
Alright, what about this next one. a to b is 4 meters. Well, a to b right here looks like half a wave length because the piece here and the piece here are the same just inverted. Alright? So that means that one half of the wavelength will be four and that means that the wavelength will be 8 meters.
Alright, now this one's the most difficult one, still not that bad, but it is the most difficult. a to d is 8 meters. Alright. a to d. How much of a wavelength is that? Well, I've got a whole wave here from a to c and then I've got another half of a wave from c to d. So that means I've got one and a half, or three halves, wavelengths, and that's going to equal 8 meters. So that means the wavelength in this case is 16 thirds of a meter. Now that's probably the only one of these four that you would really need to write down, alright? That you actually would have to write down. And that's why I wanted to do it, because these are always so simple that you can just look and guess the answer.
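The pattern behind all four answers is the same: the wavelength equals the distance between the two points divided by how many wavelengths fit in that distance. A minimal sketch of that arithmetic, using the distances from the examples above, might look like this:

```python
# Wavelength = distance / (number of wavelengths spanned between the two points)
cases = {
    "a to e": (6.0, 2.0),  # 6 m spans two whole wavelengths
    "a to c": (5.0, 1.0),  # 5 m spans one whole wavelength
    "a to b": (4.0, 0.5),  # 4 m spans half a wavelength
    "a to d": (8.0, 1.5),  # 8 m spans one and a half wavelengths
}

for label, (distance_m, n_wavelengths) in cases.items():
    wavelength = distance_m / n_wavelengths
    print(f"{label}: lambda = {wavelength:.3f} m")
# Prints 3.000 m, 5.000 m, 8.000 m and 5.333 m (= 16/3 m) respectively.
```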
Alright. Now, I just want to say a couple of words about what wavelength means. We said over there that wavelength is the characteristic size of a wave. But what does that mean? Why do I care? Well, if I'm on a ship, I care very much. Because let's say that the ocean wave looks like this, okay? There's my ocean wave. Now, what if my ship is much much much smaller than a wavelength? So my ship is right here. Great. I don't care about that wave. the wave's doing essentially the same thing, the whole time underneath me. So I don't really care. I'll just go up and down with the wave and I'm fine. Right? Alright. What if my ship is much much much bigger than the wave length? well, jeez.
Now, the wave is kind of just averaging out underneath my ship. Alright? So again, it doesn't really matter that much. But what if the wave and the ship are about the same size? Well, now I'm going to have one part of my ship here and the other part here, and now my ship is going to break. So that's the idea.
Wavelength is the physical size of the wave, the characteristic size and whenever we're doing problems involving waves, we want to be very very very careful when we're looking at the interaction between a wave and some physical object whose size is around the same as wavelength of that wave. Alright? And there's wavelength. | https://www.brightstorm.com/science/physics/vibration-and-waves/wavelength/ |
4.03125 |
The scientific method is a logical way to solve a problem or question. One must first state the problem clearly. Then, one can do research and find out what is already known about the topic. Next, one can make a hypothesis: a statement that is a possible answer to the question that was posed. It should be stated as an if...then statement. For example, one could ask: will fertilizer make a plant grow taller than just plain water? A possible hypothesis would be: if fertilizer is added, plant growth will increase. Next, the hypothesis must be tested in a controlled experiment. There must be a group that the independent variable is tested on--the experimental group--and a group that does not get the variable--the control group. In this example, the experimental group receives fertilizer and the control group does not. All other factors are kept constant--sun exposure, soil, amount of water and type of plants. Data must be collected during the experiment, then analyzed, and a conclusion is reached. Is the hypothesis supported or is it rejected? Scientists share the results with others, and the experiment may be repeated many times to test its validity. The scientific method is a logical way to solve problems.
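As a purely hypothetical illustration of the fertilizer example, the sketch below compares made-up plant heights from an experimental group and a control group to show how the collected data would be weighed against the hypothesis; the numbers are invented for illustration only.

```python
# Hypothetical data from the fertilizer experiment described above.
# Experimental group received fertilizer; control group received plain water.
# All other conditions (sun, soil, amount of water, plant type) are held constant.

experimental_heights_cm = [24.1, 26.3, 25.0, 27.2, 24.8]  # with fertilizer
control_heights_cm = [21.5, 22.0, 20.9, 22.8, 21.7]       # plain water only

def mean(values):
    return sum(values) / len(values)

difference = mean(experimental_heights_cm) - mean(control_heights_cm)
print(f"Mean height difference: {difference:.1f} cm")

# If the fertilized plants are clearly taller, the data support the hypothesis
# "if fertilizer is added, plant growth will increase"; otherwise it is rejected.
print("Hypothesis supported?", difference > 0)
```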
| http://www.enotes.com/homework-help/dont-know-what-do-rpthe-scientific-method-460741 |
4.09375 | PROVIDENCE, R.I. [Brown University] Around 30 to 40 million years ago, grasses on Earth underwent an epic evolutionary upheaval. An assemblage capitalized on falling levels of atmospheric carbon dioxide by engineering an internal mechanism to concentrate the dwindling CO2 supply that, like a fuel-injection system in a car, could more efficiently convert sunlight and nutrients into energy.
The rise of C4 grasses is not disputed. They dominate in hot, tropical climes and now make up to 20 percent of our planet's vegetational covering. Scientists have pinned the rise of C4 plants primarily to dwindling concentrations of CO2. But C4 grasses have been closely linked with warmer temperatures. Indeed, on a map, C4 grasses are found along tropical gradients, while C3 grasses occupy the northern, or colder, end of the temperature gradient. Considering knowledge of their past and their current distribution, what was left to question?
Everything, apparently, according to Erika Edwards, an evolutionary biologist at Brown University. In a paper published online in the Proceedings of the National Academy of Sciences, Edwards and Stephen Smith, a postdoctoral researcher at the National Evolutionary Synthesis Center in North Carolina, have found that rainfall, not temperature, was the primary trigger for C4 grasses' evolutionary beginnings. Moreover, the pair say C4 grasses were already in tropical forests before moving out of the shade of the taller trees and into drier, sunnier environments.
"We've kind of changed the story a bit," said Edwards, assistant professor of biology.
The paper is important, Smith said, because it "demonstrates the importance of precipitation in the evolution of grasses and particularly in the evolution of C4 grasses specifically, their movement into drier, not necessarily warmer climates."
To arrive at their findings, the biologists compiled a database of roughly 1.1 million specimens of grasses collected by botanists worldwide. They mapped the locations for these species and then added global precipitation and temperature charts.
"By combining all these data," Edwards said, "we could get individual climate profiles for each grass species."
The pair then went a step further. They whittled the list to approximately 1,230 species for which the plants' genes had been sequenced and from there built a phylogenetic profile for the collection, the most comprehensive evolutionary tree to date for grasses. The reason for building the phylogeny, Edwards said, was to tease out the junctures at which C3 and C4 grasses diverged over time. The scientists zeroed in on 21 such "transition nodes" and examined the climatic conditions during those branching periods.
They found that in 18 of the 21 instances, precipitation, rather than temperature, had changed. "That was the clincher," Edwards said.
Looking more closely at the differences in rainfall, Edwards and Smith noticed the shifts in the amount of rainfall between C3 and C4 grasses in the tropics dictated in sharp relief how the different lineages evolved. Generally speaking, C3 grasses flourished in areas that received, on average, 1,800 millimeters (71 inches) of rain annually; C4 grasses took root in areas that received, on average, 1,200 millimeters (47 inches) of rain annually.
"Twelve-hundred millimeters isn't a desert," Edwards noted. "It's still a fairly mesic place. And so when you start looking at climate profiles, these closely related C3 and C4 lineages are straddling this transition zone between tropical forests and tropical woodlands and savanna."
So, did C4 grasses evolve in the tropical forest and then move out from the canopy or did they move out first and then adopt a different photosynthetic pathway? Edwards isn't sure, but she thinks the pathway may have begun to form with C3 grasses on the forest margins, where those plants would have been subjected to greater fluctuations in precipitation, sunlight, temperature and other environmental stresses, spurring the photosynthetic innovation.
What that all means for the future of C4 grasses and climate change is an open question. While the grasses would presumably benefit from projections of lower mean rainfall in some areas of the tropics, they may be less competitive with rising levels of atmospheric CO2. Also, the effects of changes in land through deforestation and other practices would need to be considered, Edwards said.
In a related finding, the scientists attempt to explain the dominance of a lineage of C3 grasses, called Pooideae, in northern, cold areas of the globe, such as the Mongolian steppes. "The global latitudinal gradients of C3 and C4 always has been explained by the physiological advantages that C4 grasses have under high temperatures," Edwards explained. "No one has considered that the evolution of cold tolerance might have been equally important in setting up that latitudinal gradient. Climatically speaking, the cool-climate Pooideae are really the grasses that are doing something very different."
"It highlights the apparently important role that cold tolerance has played for the evolution of non-C4 grasses and especially the group Pooideae, which includes rye, barley, and wheat and many of the other grasses in the temperate and boreal habitats," Smith said.
| http://www.bio-medicine.org/biology-news-1/Brown-biologist-solves-mystery-of-tropical-grasses-origin-11755-1/ |
4.0625 |
The above answer is only partially correct. The Southern states had been agricultural since the founding of Jamestown in 1607, when the Virginia Company awarded large tracts of land to settlers under the headright system. The climate and soil of the South lent themselves to large-scale agriculture, which was first performed by indentured servants and later by slaves. Slavery was an economic necessity for the Southern economy to survive. The South DID NOT import food; in fact it fed most of the nation. As of 1860 the South encompassed only thirty per cent of the land area of the U.S. and had only 39 per cent of the population, yet in that same year its contribution to the national economy was striking. It furnished:
- 60 per cent of the U.S. production of swine.
- 52 per cent of the U.S. output of corn.
- half the nation's cattle.
- 29 percent of wheat production.
- 19 percent of oat and rye production.
- 52 per cent of oxen
- 90 per cent of the nation's mules.
Obviously, the largest cash crop was cotton which was shipped to European mills. The very nature of its economy meant there was no room for manufacturing, and as a result, most manufactured goods were purchased either from Europe or from the North. It was the issue of imports from Europe which had the South up in arms over the infamous Tariff of 1828.
The North was indeed industrialized, and also had been from its inception. There was neither the climate, nor soils to promote large scale agriculture; however there were abundant navigable streams which made shipping and later manufacturing feasible. It would be a mistake to assume that the North had more foresight than the South; it was rather the result of geographic and climatological differences that the two economies developed separately. Had the geography of the two areas been reversed, then slavery would have flourished in the North and industry in the South.
Socially, the planter elite did exist in the South, largely based on a conception of European polite society. Large planters normally held political office and exercised political clout. In the North, class difference was not evident at this point, although it became glaringly obvious with the advent of the Industrial Revolution and the Captains of Industry.
The major economic difference between the two sections of the country was that the North's economy was based on small farmers and on manufacturing while the South's was based on slavery and the plantation system. This meant that the North was more self-sufficient and less reliant on imports. The South, meanwhile, produced mainly staple crops which it had to export. It also, then, had to import manufactured goods and even food.
The main social difference stemmed from this economic difference. The South's society was an aristocratic one. It was dominated by a few plantation owners who saw themselves as an elite. The poorer whites (and especially the slaves) were beneath them and dependent upon them. By contrast, the North's society was more egalitarian. It was not fully equal, of course, but it put much less of an emphasis on class differences (the difference between gentlemen and others, for example) than the South did.
| http://www.enotes.com/homework-help/whats-social-economic-differences-between-north-312325 |
4.25 |
Jon has taught Economics and Finance and has an MBA in Finance
In the late 1700s, the famous economist Adam Smith wrote this in the second chapter of his book The Wealth of Nations:
'It is the maxim of every prudent master of a family, never to attempt to make at home what it will cost him more to make than to buy...What is prudence in the conduct of every private family, can scarce be folly in that of a great kingdom.'
He's observing two important principles of economics. The first one is that nations behave in the same way as individuals do: economically. Whatever is economical for people is also economical on a macroeconomic, or large, scale. The second thing he's observing is what we call the Law of Comparative Advantage. When a person or a nation has a lower opportunity cost in the production of a good, we say they have a comparative advantage in the production of that good.
Everyone has something that they can produce at a lower opportunity cost than others. This theory teaches us that a person or a nation should specialize in the good that they have a comparative advantage in.
So, let's explore this concept of comparative advantage using some examples from everyday life. For example, Sally can either produce 3 term papers in one hour or bake 12 chocolate chip cookies. Now let's add a second person, Adam, and talk about the same two activities, but as we'll see, Adam has different opportunity costs than Sally does. Adam is capable of producing either 8 term papers or 4 cookies in an hour. If we express both of these opportunity costs as equations, then we have:
For Sally, 3 term papers = 12 cookies.
For Adam, 8 term papers = 4 cookies.
We can ask two different questions about opportunity cost because we have two different goods. The first question we want to know is: what is the opportunity cost of producing 1 term paper? Reducing these equations down separately gives us:
For Sally, 1 term paper = 4 cookies. For Adam, 1 term paper = 0.5 cookies.
So, in this case, who has the lowest opportunity cost of producing 1 term paper? Adam does. Now, let's look at the same scenario from the opposite perspective and answer the second question: what is the opportunity cost of producing 1 cookie?
Now, I know that, in reality, no one is going to produce exactly 1 cookie unless it were a very, very big cookie, but when we reduce the equations down to 1 cookie, we can easily compare on an apples-to-apples basis (or cookie-to-cookie basis). So, let's take a look at the equations again:
For Sally, we have 12 cookies = 3 term papers.
For Adam, we have 4 cookies = 8 term papers.
Reducing these equations down gives us 1 cookie = 0.25 term papers for Sally, and for Adam 1 cookie = 2 term papers.
So, how do we decide who should produce term papers and who should be produce cookies? According to who has the lowest opportunity costs. That's what the law of comparative advantage says.
Who has the lowest opportunity cost of baking cookies? Sally does. Who has the lowest opportunity cost of producing term papers? Adam does.
So, we have two goods and two different people who have two different opportunity costs. The law of comparative advantage tells us that both of these people (Adam and Sally) will be better off if instead of both producing term papers and cookies, they decide to specialize in producing one good and trade with each other to obtain the other good.
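A minimal sketch of this comparison, using the hourly outputs given above for Sally and Adam, might look like the following; it simply computes each person's opportunity costs and picks the lower-cost producer for each good.

```python
# Hourly outputs taken from the example above: term papers and cookies per hour.
producers = {
    "Sally": {"term papers": 3, "cookies": 12},
    "Adam":  {"term papers": 8, "cookies": 4},
}

# Opportunity cost of one unit of a good = units of the other good given up.
for name, output in producers.items():
    papers, cookies = output["term papers"], output["cookies"]
    print(f"{name}: 1 term paper costs {cookies / papers:g} cookies, "
          f"1 cookie costs {papers / cookies:g} term papers")

# Comparative advantage: specialize in the good you make at lower opportunity cost.
paper_maker = min(producers, key=lambda n: producers[n]["cookies"] / producers[n]["term papers"])
cookie_maker = min(producers, key=lambda n: producers[n]["term papers"] / producers[n]["cookies"])
print(f"{paper_maker} should specialize in term papers; {cookie_maker} should specialize in cookies.")
```

Running this reproduces the conclusion in the lesson: Adam's opportunity cost for a term paper (0.5 cookies) is lower than Sally's (4 cookies), and Sally's cost for a cookie (0.25 term papers) is lower than Adam's (2 term papers), so Adam specializes in term papers and Sally in cookies.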
This leads us to the conclusion that we should specialize. Individuals should specialize in the goods or services they produce. Firms and corporations should also specialize in what they have a lower opportunity cost of producing, and nations should specialize, as well. Whoever has the lowest cost relative to someone else can trade with them, and everyone gains something by trading.
Now that we've explored the law of comparative advantage, we need to make an important distinction. When a person or country has an absolute advantage, that means they can produce more of a good or service with the same amount of resources than other people or countries can. Another way to say it is they can produce it more cheaply than anybody else. This is a measure of how productive a person or country is when they produce a good or service. For example, let's say that country A can produce a ton of wheat in less time than any other nation with the same amount of resources. In this case, country A has an absolute advantage in the production of wheat.
Let's take another look at Sally and Adam, this time from the perspective of their labor productivity. As you can see, it takes Sally a 1/4 hour to produce 1 cookie, which is lower than the 1 hour that it takes Adam. Therefore, Sally is the most productive. She has an absolute advantage in the production of cookies.
In addition, it takes Sally 1 full hour to produce a term paper, while Adam can produce the same term paper in half the time - it's a 1/2 hour to produce a term paper for Adam; therefore, Adam has an absolute advantage in producing term papers.
But the theory of comparative advantage is based on lower opportunity costs, not based on absolute advantage. It is possible to have the absolute advantage in the production of two goods (in other words, you have the ability to make both goods the quickest, cheapest, and the best) and still benefit from trading with someone else who has a lower opportunity cost.
For example, let's say country A can either produce 10 cars or 10 computers. This means that they have the exact same opportunity costs for these two goods, and we can reduce this equation down to 1 car = 1 computer. Now, if country B can produce either 4 cars or 8 computers, then their opportunity cost of producing 1 computer is equal to 1/2 a car (after we reduce that equation down).
I want you to notice something, though. On the corresponding graph, you can see that country A can produce cars and computers better, faster and cheaper than country B because their production possibility curve is outward (or to the right) of country B's production possibility curve. But look at the slope of country B's curve. It is steeper. Even though country A has an absolute advantage in the production of cars and computers, it still makes sense for them to trade with country B, who has a lower opportunity cost of producing computers. (It's 1/2 a car!) So, the law of comparative advantage leads us to the conclusion that these two countries will trade with each other - cars for computers. Country B will specialize in computers (because they have the lowest opportunity cost in this) while country A will specialize in cars.
To summarize what we've learned in this lesson, the law of comparative advantage says that a person or a nation should specialize in the good they produce at the lowest opportunity cost. Everyone has something that they can produce at a lower opportunity cost than others, and by trading with others, everyone is better off.
| http://study.com/academy/lesson/comparative-advantaged-definition-and-examples.html |
4.09375 | Politics of California before 1900
Following the declaration of the independent California Republic in 1846, and the armed conquest of California by United States military forces and American volunteers during the Mexican-American War, California was administered by the U.S. military from 1846 to 1850. Local government continued to be run by alcaldes (mayors) in most places, as it had been under Mexican control, but now some were Americans.
The last military governor, Bennett Riley, called a constitutional convention to meet in Monterey in September 1849. Its 48 delegates were mostly pre-1846 American settlers; eight were Spanish-speaking Californios, who were being displaced by the Americans. The convention unanimously outlawed slavery and set up an interim government which operated for 10 months before California was given official statehood by Congress on September 9, 1850 as part of the Compromise of 1850.
The state constitution first adopted in 1849, before statehood, remained the operational constitution following statehood until it was superseded in 1879, after a new constitutional convention adopted a new state constitution. Both the original constitution and the 1879 constitution provided for the election of a governor and set up a bicameral state legislature; the state Assembly was the larger house, with districts based on population, and the State Senate the smaller house, with districts which followed existing political boundaries (analogous to the structure of the United States House of Representatives and the Senate). The terms of the governor and state senators were set at four years, and of the members of the Assembly at two years.
The location of the state capital was the subject of substantial political argument and debate. California situated its first capital in San Jose. The city did not have facilities ready for a proper capital, and the winter of 1850 - 1851 was unusually wet, causing the dirt roads to become muddy streams. The legislature was unsatisfied with the location, so former General and State Senator Mariano Guadalupe Vallejo donated land in the future city of Vallejo for a new capital; the legislature convened there for one week in 1852 and again for a month in 1853.
Again, the facilities available were unsuitable to house a state government, and the capital was soon moved three miles away to the little town of Benicia, located just inland from the San Francisco Bay on the Carquinez Strait. The strait links San Pablo Bay to Grizzly and Suisun Bays in the Sacramento River Delta area. A lovely brick statehouse was built in old American style complete with white cupola. Although strategically sited between the Mother Lode territory of the Sierra Nevada and the financial port of San Francisco, the site was too small for expansion. The capital was moved one last time, further inland past the Sacramento River Delta area to the riverside port of Sacramento.
Sacramento was the site of John Sutter's large agricultural colony and his fort. While the elder Sutter was away, the town of Sacramento was founded by John Augustus Sutter, Jr., at the edge of the Sacramento River and downhill from Sutter's Fort. Sutter Sr. was indignant since this new site, shaded by water-needy cottonwood trees, was often flooded during spring high water. (Indeed, every hundred years or so, the whole Great Valley from Chico to Bakersfield was one great freshwater sea.) However, lots had already been sold, and there the town of Sacramento stayed. Interestingly, at the end of the 19th century the streets were raised a full story, so buildings in Old Town are now entered through what were once doors to the balconies shading the sidewalks below.
The current California State Capitol building in Sacramento (above) was constructed between 1861 and 1874 and is listed on the National Register of Historic Places.
Early state and local governments struggled with the massive influx of immigrants from around the world who came during the California Gold Rush. Efforts to establish basic government services and law and order were often overwhelmed. In 1851 and 1856, the rise of "Committees of Vigilance" (vigilantes) challenged local government as these vigilantes responded to a perceived lack of lawful authority and staged a number of public hangings and expulsions in San Francisco and elsewhere. While the stated intent was the punishment of criminals, the victims were often immigrants, including Irish and Chinese.
The Civil War
California fought on the Union side of the American Civil War. However, because of the distance factor, California played a minor role in the war. Although some settlers sympathized with the Confederacy, they were not allowed to organize and their newspapers were closed down. Former Senator William Gwin, a Confederate sympathizer, was arrested and fled to Europe to escape trial. Nearly all the California men who volunteered as soldiers for the Union Army stayed in the West to guard facilities. Some 2,350 men in the California Column marched east across Arizona in 1862 to expel the Confederates from Arizona and New Mexico. The Californians spent most of their time fighting hostile Indians and guarding the Southwest against a possible Confederate invasion.
Nativist sentiment and labor politics
After the Civil War ended in 1865, California continued to grow rapidly. Independent miners were displaced by large mining operations. Railroads began to be built, and both the railroad companies and the mining companies began to hire large numbers of laborers. The decisive event was the opening of the transcontinental Central Pacific railroad in 1869; six days by train brought a traveller from Chicago to San Francisco, compared to six months by ship.
Thousands of Chinese men arrived from Asia both to build the railroad and to prospect for gold. As resentment against foreigners grew, they were expelled from the mine fields. Most returned to China after the Central Pacific was built. Those who stayed mostly moved to the Chinatown in San Francisco and a few other cities, where they were relatively safe from violent attacks they suffered elsewhere.
From 1850 through 1899, anti-Chinese nativist sentiment resulted in the passage of many laws, many of which remained in effect well into the middle of the 20th century. The most flagrant episode was probably the creation and ratification of a new state constitution in 1879. Thanks to vigorous lobbying by the anti-Chinese Workingmen's Party, led by Dennis Kearney (an immigrant from Ireland), Article XIX, section 4 forbade corporations from hiring Chinese coolies, and empowered all California cities and counties to completely expel Chinese persons or to limit where they could reside. It was repealed in 1952.
The 1879 constitutional convention also dispatched a message to Congress pleading for strong immigration restrictions, which led to the passage of the Chinese Exclusion Act in 1882. The Act was upheld by the U.S. Supreme Court in 1889, and it would not be repealed by Congress until 1943. Similar sentiments led to the development of a Gentlemen's Agreement with Japan, by which Japan voluntarily agreed to restrict emigration to the United States. California also passed an Alien Land Act which barred aliens, especially Asians, from holding title to land. Because it was difficult for people born in Asia to obtain U.S. citizenship until the 1960s, land ownership titles were held by their American-born children, who were full citizens. The law was overturned by the California Supreme Court as unconstitutional in 1952.
In 1886, when a Chinese laundry owner challenged the constitutionality of a San Francisco ordinance clearly designed to drive Chinese laundries out of business, the U.S. Supreme Court ruled in his favor, and in doing so, laid the theoretical foundation for modern equal protection constitutional law. See Yick Wo v. Hopkins, 118 U.S. 356 (1886). Meanwhile, even with severe restrictions on Asian immigration, tensions between immigrant workers and native-born laborers persisted.
Tensions in California politics
From the post-Civil War period up to 1899, perhaps the predominant tension in California politics was the struggle between a controlling group of railroad companies and large businesses, on the one hand, and small farmers and businesses on the other. In an era when politics and politicians were routinely viewed as corrupt, the railroads, banks, and wealthy land-owners were seen to be able simply to purchase legislators and legislation to their liking.
While the excesses of this era led to the populist reforms of the early 20th century, the ability of this powerful group of interests to control California politics was seldom successfully challenged in the 19th century.
1898 saw the creation of the League of California Cities, modeled after similar associations which worked to combat corruption in city governments, and to advocate for city interests at the state level.
The following is a list of American governors of California, elected to office by 1899.
- Peter Burnett (1849-1851) Independent Democrat
- John McDougall (1851-1852) Independent Democrat
- John Bigler (1852-1856) Democrat
- J. Neeley Johnson (1856-1858) American (Know-Nothing)
- John Weller (1858-1860) Democrat
- Milton Latham (1860-1860) Lecompton Democrat
- John G. Downey (1860-1862) Lecompton Democrat
- Leland Stanford (1862-1863) Republican
- Frederick Low (1863-1867) Republican Unionist
- Henry Huntly Haight (1867-1871) Democrat
- Newton Booth (1871-1875) Republican
- Romualdo Pacheco (1875-1875) Republican
- William Irwin (1875-1880) Democrat
- George Perkins (1880-1883) Republican
- George Stoneman (1883-1887) Democrat
- Washington Bartlett (1887-1887) Democrat
- Robert Waterman (1887-1891) Republican
- Henry Markham (1891-1895) Republican
- James Budd (1895-1899) Democrat
- Henry Gage (1899-1903) Republican
- Politics of California
- History of California through 1899
- California Gold Rush
- California and the railroads
- Political party strength in California
| https://en.wikipedia.org/wiki/Politics_of_California_before_1900
4.1875 | Short Bowel Syndrome in Children
Short bowel syndrome is a condition in which the body cannot absorb enough fluids and nutrients because part of the small intestine is missing (usually due to prior surgery or illness), or is not working properly.
What does the small intestine do?
The small intestine is a part of the digestive system. The small intestine has three sections:
- The duodenum, which is located next to the stomach (shortest section)
- The jejunum, which lies between the duodenum and the ileum
- The ileum, which is the longest section and connects to the large intestine (colon)
(The ileocecal valve forms a barrier between the ileum and the large intestine to prevent the contents of the large intestine from flowing back into the small intestine.)
The small intestine is where the absorption of fluids, proteins, carbohydrates (starches and sugars), iron, fats, vitamins, and minerals (such as calcium, sodium, and potassium) takes place. If the duodenum and a portion of the jejunum have been removed by surgery, the ileum can take on their role in absorbing nutrients. But if a substantial part of the jejunum or the ileum is removed, it is more difficult to obtain adequate nutrition. In these cases, nutrients usually have to be provided in a form other than food.
Children need more calories than adults because they are still growing. If a child is born with portions of the small intestine missing, it can lead to serious problems.
What causes short bowel syndrome?
Short bowel syndrome can occur as a congenital (present at birth) condition. For example, the small intestine might be abnormally short at birth, a section of the bowel might be missing, or the bowel does not form completely before birth (intestinal atresia).
In other cases, patients develop conditions in which a large section of the small intestine has to be removed by surgery. In newborns, especially premature infants, necrotizing enterocolitis (the inflammation and loss of blood flow to the intestine, leading to severe damage) is the most common cause of short bowel syndrome.
Other causes include:
- Crohn's disease (the intestine becomes inflamed and scarred); in this condition SBS occurs primarily in patients who have undergone extensive surgery to the small bowel
- Intussusception (part of the intestine is folded into another part and compromises the blood flow to the involved portion of the intestine)
- Injury to the intestine due to:
- volvulus (twisting of the intestine)
- trauma (injury)
- gastroschisis (when the intestines develop outside the body prior to birth)
- narrowing or obstruction of the intestines
- blood clots or abnormal blood flow (ischemia) affecting the circulation to the intestine
What are the symptoms of short bowel syndrome?
Symptoms of short bowel syndrome include:
- Diarrhea. Watery diarrhea is the most common symptom of short bowel syndrome in infants and children.
- Excessive gas and/or foul-smelling stool
- Poor appetite
- Weight loss or inability to gain weight
Other complications can occur as a result of short bowel syndrome, including:
- Vitamin, mineral, and/or electrolyte shortage or imbalance
- Severe diaper rash caused by frequent diarrhea
- Abnormal eating habits
- Kidney stones or gallstones caused by abnormal calcium or bile absorption
- Bacterial overgrowth (high levels of bacteria in the intestine)
How is short bowel syndrome treated?
A variety of treatments may be required in order to treat short bowel syndrome. The patient has to change his or her diet in order to be able to absorb nutrients correctly. If a patient has had surgery to remove part of the small intestine, it is important to maintain a normal balance of electrolytes, fluids, and other nutrients to prevent dehydration, malnutrition, and other problems.
The patient may need total parenteral nutrition (TPN) after bowel surgery. TPN is a method of providing nourishment while bypassing the digestive system. TPN solutions contain a mixture of fluids and nutrients, such as protein, fats, sugars, and essential vitamins and minerals. The solutions are given intravenously (through a large vein into which a catheter, a flexible plastic tube, has been inserted). TPN is given over 10 to 12 hours or sometimes longer; infants and children usually receive this type of solution while sleeping.
Some children must remain on TPN indefinitely. Serious complications can occur when this form of nutrition is used over the long term, such as infection at the site where the catheter is inserted, formation of blood clots, and liver damage.
Despite the risk of complications, TPN can be lifesaving in children and adults unable to take in appropriate nutrition through their gastrointestinal tract. In addition, recent changes in TPN regimens, when combined with starting feeds early on, may decrease the chance of developing long-term liver injury.
Over time, enteral nutrition can replace TPN in some patients. Enteral feeding is given through a gastric tube (g-tube) inserted in the stomach via a surgical incision, or placed using an endoscope. In some cases a nasogastric (NG) tube that passes from the nose into the stomach might be used instead of the g-tube. In other patients, use of a similar tube placed in the small intestine (jejunostomy tube or j-tube) is an alternative.
Some children may be able to receive small amounts of solid food and liquids in addition to enteral or parenteral (intravenous) feeding. This helps to allow babies and children to maintain the ability to chew and suck and helps them develop normal eating patterns in the future.
In some cases, patients who have had a part of their intestine removed by surgery undergo a process called intestinal adaptation. During intestinal adaptation, the intestine may grow in size after surgery. The surface area inside the intestine increases as the mucosa (lining of the intestine) becomes thicker. The villi (the lining of the intestine responsible for intestinal absorption) become longer and denser, helping to promote absorption of nutrients. The diameter of the intestine may also increase.
What medications are prescribed for short bowel syndrome?
Medications may be used to help slow the passage of food through the intestine. This allows more time for the nutrients to remain in contact with the cells lining the intestine, which improves absorption.
Anti-diarrhea drugs such as loperamide hydrochloride can be given to children, if recommended by their physician, with limited side effects. Since the stomach is likely to secrete greater amounts of acid during the recovery period, patients can take antacids or an anti-ulcer medication to treat or reduce the risk of stomach ulcers. Antibiotics may be prescribed on occasion to prevent or treat bacterial overgrowth.
Newer medications like teduglutide, which is given by injection, have been approved by the FDA for adults with SBS but not for children. These medications can help reduce the TPN requirement in some SBS patients, but they still need to be studied in children and have potentially significant side effects.
When is surgery needed for short bowel syndrome?
Surgical options can be explored in certain situations in SBS.
In some cases surgery can be done to improve the functional length of the intestine through procedures such as the Bianchi procedure and the serial transverse enteroplasty procedure (STEP). These procedures may also help decrease the chance of bacterial overgrowth by narrowing the diameter of the intestine, which may be abnormally widened in cases of SBS. The decision whether surgery will help improve an individual patient's functional status is made on a case-by-case basis and in many cases depends on the length of small intestine that remains as well as the underlying cause of the SBS.
Intestinal transplantation involves placing a donor small intestine into the patient. This may be considered for patients who cannot use their intestine to absorb food and fluids, are entirely dependent on TPN and are at risk of losing access sites for intravenous nutrition. Liver transplantation may be required at the same time, if patients develop irreversible liver disease from long term TPN use. Since very few centers perform intestinal transplant, an early referral to a transplant center helps in planning the transplant prior to the development of major complications.
What is the prognosis (outlook) for patients who have short bowel syndrome?
The prognosis for infants and children who have short bowel syndrome can be good, depending on the residual length of the intestine. However, they will need lifelong follow-up care. Children need to be closely monitored for any nutritional deficiencies or other conditions that may result from continued use of enteral or parenteral nutrition. Overall, quality of life and chance of coming off TPN have improved with better understanding of the problem and preventing complications.
The main causes of death among infants and children who receive parenteral nutrition are infections and disorders of the liver and biliary tract (the pathway by which bile flows from the liver to the small intestine). Infectious complications may become less frequent with recent advances in protocols for the care of central lines in children. Cholestasis (a condition where bile cannot flow from the liver to the duodenum) is a common complication following long-term use of total parenteral nutrition. Newer approaches to managing TPN have helped to decrease the chance of cholestasis and associated liver disease.
A team approach involving pediatric gastroenterologists, pediatric dietitians, pharmacists, pediatric surgeons and transplant surgeons, and social workers is vital in managing and improving outcomes in patients with short bowel syndrome.
| https://my.clevelandclinic.org/childrens-hospital/health-info/diseases-conditions/hic-short-bowel-syndrome-in-children
4.0625 |
Little things can go a long way. We demonstrated that yesterday describing how zebras have evolved stripes to protect them from small biting flies, not large carnivorous predators. That’s nothing compared to today’s story about little things: new research suggests that small microbes, not large asteroids or volcanoes, were responsible for the largest mass extinction on our planet.
The end-Permian mass extinction, or “Great Dying,” occurred 252 million years ago, wiping out as many as 70% of terrestrial vertebrates and perhaps 90% of all marine species. As significant as the event was, pinpointing the actual culprit has been difficult and controversial.
Researchers at the Massachusetts Institute of Technology (MIT) investigated the cause by first looking at the exponential increase of carbon dioxide experienced by the oceans at that point in geologic time. Previously, researchers suggested that the increase in CO2 originated from the volcanic eruptions that produced the Siberian Traps, a vast formation of volcanic rock shaped by the most extensive eruptions in Earth’s geological record. But calculations by the MIT team show that these eruptions were not nearly sufficient to account for the carbon seen in sediments dating back to that epoch. In addition, the observed changes in the amount of carbon over time don’t fit the volcanic model.
“A rapid initial injection of CO2 from a volcano would be followed by a gradual decrease,” says MIT postdoc Gregory Fournier. “Instead, we see the opposite: a rapid, continuing increase. That suggests a microbial expansion.” The growth of microbial populations is among the few phenomena capable of increasing carbon production exponentially.
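The contrast Fournier describes, a decaying volcanic pulse versus an accelerating microbial rise, is easy to visualize with a toy calculation. The sketch below is illustrative only: the time scale, starting values, and growth/decay constants are invented for demonstration and are not taken from the MIT study.

```python
# Toy comparison of two qualitative carbon-release curves (illustrative values only).
import math

years = range(100)  # arbitrary time steps, e.g. thousands of years

# Scenario 1: a single volcanic pulse of CO2 that decays away over time.
volcanic = [100.0 * math.exp(-t / 20.0) for t in years]

# Scenario 2: carbon released by an exponentially growing microbial population.
microbial = [1.0 * math.exp(t / 20.0) for t in years]

for t in (0, 25, 50, 75, 99):
    print(f"t={t:3d}  volcanic pulse={volcanic[t]:8.1f}  microbial growth={microbial[t]:8.1f}")
```

Tabulated or plotted, the first curve spikes and then falls off, while the second keeps climbing, which is the qualitative difference the researchers say separates the sediment record from a purely volcanic source.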
The team then focused their attention on the methane-producing archaea called Methanosarcina. Using genetic analysis, the scientists determined a gene transfer from another microbe occurred right around the time of the end-Permian extinction. (Previous studies had only placed this event sometime in the last 400 million years, but the researchers refined that estimate to a date near the end of the Permian.) Given the right conditions, this genetic acquisition set the stage for the microbe to undergo a dramatic growth spurt, rapidly consuming a vast reserve of organic carbon in the ocean sediments.
The researchers propose that as Methanosarcina bloomed explosively in the oceans, it released prodigious amounts of methane into the atmosphere, dramatically changing the climate and the chemistry of the oceans. It changed the planet forever: as Christopher Joyce reports on NPR, “the extinction kind of rebooted life on Earth.”
The Academy’s Curator of Geology, Peter Roopnarine, chimed in via email: “The paper represents a significant step forward in our understanding the dramatic changes which took place in the carbon cycle at the very end of the Permian… and how those changes unfolded.” But he remains skeptical: among other things, evidence suggests that the extinction event occurred in multiple phases, inconsistent with the MIT team’s hypothesis, and although the researchers narrowed down the timing of the aforementioned gene transfer, they can only say it occurred within tens of millions of years of the end-Permian extinction. “The paper presents a very insightful and sophisticated analysis of the carbon problem at the end of the Permian, but I find the link between that and the methanogenic microbes to be speculative and unnecessary.”
And so the controversy continues!
The research is published this week in Proceedings of the National Academy of Sciences. | http://www.calacademy.org/explore-science/small-culprits-in-largest-extinction/ |
4.03125 | KQED, Teachers' Domain
Video length: 3:01 min. Learn more about Teaching Climate Literacy and Energy Awareness.
See how this Video supports the Next Generation Science Standards.
Middle School: 6 Disciplinary Core Ideas
About Teaching Climate Literacy
Other materials addressing 3a
Other materials addressing 3c
Notes From Our Reviewers
The CLEAN collection is hand-picked and rigorously reviewed for scientific accuracy and classroom effectiveness. Read what our review team had to say about this resource below, or learn more about how CLEAN reviews teaching materials.
Teaching Tips
- Inclusion of a food web diagram would enhance understanding.
- Teachers of younger students may wish to introduce the more advanced vocabulary.
About the Science
- As apex predators, polar bears, walruses, Arctic foxes, and beluga whales are all affected by changes in the Arctic food web as a result of climate change.
- Missing a food web diagram that could explain the effect of changes in predator populations.
- Passed initial science review - expert science review pending.
About the Pedagogy
- Supplemental materials, including lesson plans, educator guide, background articles, a student workbook, and a curriculum guide are available at the KQED site: http://www.kqed.org/education/educators/clue-into-climate/ecosystems.jsp.
Next Generation Science Standards supported by this Video:
Disciplinary Core Ideas: 6
MS-LS2.C1:Ecosystems are dynamic in nature; their characteristics can vary over time. Disruptions to any physical or biological component of an ecosystem can lead to shifts in all its populations.
MS-LS2.C2:Biodiversity describes the variety of species found in Earth’s terrestrial and oceanic ecosystems. The completeness or integrity of an ecosystem’s biodiversity is often used as a measure of its health
MS-LS4.D1:Changes in biodiversity can influence humans’ resources, such as food, energy, and medicines, as well as ecosystem services that humans rely on—for example, water purification and recycling.
MS-ESS2.D1:Weather and climate are influenced by interactions involving sunlight, the ocean, the atmosphere, ice, landforms, and living things. These interactions vary with latitude, altitude, and local and regional geography, all of which can affect oceanic and atmospheric flow patterns.
MS-ESS3.C1:Human activities have significantly altered the biosphere, sometimes damaging or destroying natural habitats and causing the extinction of other species. But changes to Earth’s environments can have different impacts (negative and positive) for different living things.
MS-ESS3.D1:Human activities, such as the release of greenhouse gases from burning fossil fuels, are major factors in the current rise in Earth’s mean surface temperature (global warming). Reducing the level of climate change and reducing human vulnerability to whatever climate changes do occur depend on the understanding of climate science, engineering capabilities, and other kinds of knowledge, such as understanding of human behavior and on applying that knowledge wisely in decisions and activities. | http://cleanet.org/resources/43796.html |
4.125 | Memory Encoding in Children
To form memories, humans must create synapses, or connections between brain cells, that encode sensory information from an event into our memory. From there, our brains organize that information into categories and link it to other similar data, which is called consolidation. In order for that memory to last, we must periodically retrieve these memories and retrace those initial synapses, reinforcing those connections.
Studies have largely refuted the long-held thinking that babies cannot encode information that forms the foundation of memories. For instance, in one experiment involving 2- and 3-month-old infants, the babies' legs were attached by a ribbon to a mobile [source: Hayne]. By kicking their legs, the babies learned that the motion caused the mobile to move. Later, placed under the same mobile without the ribbon, the infants remembered to kick their legs. When the same experiment was performed with 6-month-olds, they picked up the kicking relationship much more quickly, indicating that their encoding ability must accelerate gradually with time, instead of in one significant burst around 3 years old.
This memory encoding could relate to a baby's development of the prefrontal cortex at the forehead. This area, which is active during the encoding and retrieval of explicit memories, is not fully functional at birth [source: Newcombe et al]. However, by 24 months, the number of synapses in the prefrontal cortex has reached adult levels [source: Bauer].
Also, the size of the hippocampus at the base of the brain steadily grows until your second or third year [source: Bauer]. This is important because the hippocampus determines what sensory information to transfer into long-term storage.
But what about implicit memory? Housed in the cerebellum, implicit memory is essential for newborns, allowing them to associate feelings of warmth and safety with the sound of their mother's voice and instinctively knowing how to feed. Confirming this early presence, studies have revealed few developmental changes in implicit memory as we age [source: Newcombe et al]. Even in many adult amnesia cases, implicit skills such as riding a bicycle or playing a piano often survive the brain trauma.
Now we know that babies have a strong implicit memory and can encode explicit ones as well, which indicates that childhood amnesia may stem from faulty explicit memory retrieval. Unless we're thinking specifically about a past event, it takes some sort of cue to prompt an explicit memory in all age groups [source: Bauer]. Up next, find out what those cues are. | http://science.howstuffworks.com/life/inside-the-mind/human-brain/remember-birth1.htm |
4 | Date: ca. 2nd–early 3rd century A.D.
Geography: From Mesopotamia, Hatra
Dimensions: L. 172.1 cm
Credit Line: Purchase, Joseph Pulitzer Bequest, 1932
Accession Number: 32.145a, b
Under Alexander the Great (336–323 B.C.), the Greeks put an end to Achaemenid power, and an era of strong Greek influence in the ancient Near East began. Babylon, Susa, and Persepolis fell to the armies of Alexander in 331 B.C., and his power extended as far as India. But in 323 B.C., while still a young man, Alexander became ill and died in Babylon. Deprived of his leadership, the empire was split by a struggle for power among his successors, the Seleucid kings.
The Parthian dynasty, originally from the north and east of Iran, established supremacy in the Near East in the second century B.C., after the disintegration of Alexander's empire and collapse of his successors. Ctesiphon, the capital, was situated on the bank of the Tigris River opposite the earlier Greek settlement of Seleucia. The border between the western empire of Rome and the Parthian lands in the east ran between the central and northern Euphrates and Tigris rivers. Hatra in northern Iraq, southwest of modern Mosul, was a major trading city heavily fortified against Roman attack and populated by a mixture of peoples, Parthians as well as Arabs and the inhabitants of Syria.
Once part of a decorated doorway in the north hall of the so-called Main Palace at Hatra, this lintel stone was originally positioned so that the carved surface faced the floor. The two fantastic creatures have feline bodies, long ears, wings, and crest feathers—a combination of animal and bird elements typical of Near Eastern lion-griffins. Between the two figures is a vase containing a stylized lotus leaf and two tendrils. The naturalistic modeling of the creatures' bodies and the form of the central vase reflect Roman influence. However, the absolute symmetry of the composition, the pronounced simplification of the plant forms, and the lion-griffin motif are all characteristic of the Near East. | http://metmuseum.org/toah/works-of-art/32.145a,b/ |
4.28125 |
This worksheet leads students to discover the differences between linear equations and identities. The well-formulated questions force students to employ critical thinking. The activity involves both algebraic and graphing approaches, and definitely emphasizes a conceptual understanding of the material. Website: http://www.mathedpage.org/ Copyright information: http://www.mathedpage.org/rights.html
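As a quick illustration of the distinction the worksheet is built around (the equations below are generic examples chosen here for illustration, not taken from the worksheet itself): an identity holds for every value of the variable, a conditional linear equation holds for exactly one value, and some equations hold for none.

```latex
\[
\begin{aligned}
2(x+3) &= 2x+6 && \text{identity: true for every } x \\
2x+3   &= 7    && \text{conditional equation: true only for } x = 2 \\
x+1    &= x+2  && \text{no solution: would require } 1 = 2
\end{aligned}
\]
```

Graphed as pairs of lines, the first pair coincides, the second pair crosses at a single point, and the third pair is parallel, which mirrors the algebraic and graphing approaches the description mentions.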
| http://www.curriki.org/oer/Equations-versus-Identities-Understanding-Solutions-to-Linear-Equations-/
4.40625 | History of Poland during the Jagiellonian dynasty
History of Poland during the Jagiellonian dynasty is the period in the history of Poland that spans the late Middle Ages and early Modern Era. Beginning with the Lithuanian Grand Duke Jogaila (Władysław II Jagiełło), the Jagiellonian dynasty (1386–1572) formed the Polish–Lithuanian union. The partnership brought vast Lithuania-controlled Rus' areas into Poland's sphere of influence and proved beneficial for the Poles and Lithuanians, who coexisted and cooperated in one of the largest political entities in Europe for the next four centuries.
In the Baltic Sea region Poland's struggle with the Teutonic Knights continued and included the Battle of Grunwald (1410) and in 1466 the milestone Peace of Thorn under King Casimir IV Jagiellon; the treaty created the future Duchy of Prussia. In the south Poland confronted the Ottoman Empire and the Crimean Tatars, and in the east helped Lithuania fight the Grand Duchy of Moscow. Poland's and Lithuania's territorial expansion included the far north region of Livonia.
Poland was developing as a feudal state, with predominantly agricultural economy and an increasingly dominant landed nobility component. The Nihil novi act adopted by the Polish Sejm (parliament) in 1505, transferred most of the legislative power from the monarch to the Sejm. This event marked the beginning of the period known as "Golden Liberty", when the state was ruled by the "free and equal" Polish nobility.
Protestant Reformation movements made deep inroads into the Polish Christianity, which resulted in unique at that time in Europe policies of religious tolerance. The European Renaissance currents evoked in late Jagiellonian Poland (kings Sigismund I the Old and Sigismund II Augustus) an immense cultural flowering.
- 1 Late Middle Ages (14th–15th century)
- 1.1 Jagiellonian monarchy
- 1.2 Social and economic developments
- 1.3 Poland and Lithuania in personal union under Jagiełło
- 1.4 Struggle with the Teutonic Knights
- 1.5 Hussite movement; Polish–Hungarian union
- 1.6 Casimir IV Jagiellon
- 1.7 War with the Teutonic Order and its resolution
- 1.8 Turkish and Tatar wars
- 1.9 Moscow's threat to Lithuania; Sigismund I
- 1.10 Culture in the Late Middle Ages
- 2 Early Modern Era (16th century)
- 2.1 Agriculture-based economic expansion
- 2.2 Burghers and nobles
- 2.3 Reformation
- 2.4 Culture of Polish Renaissance
- 2.5 Republic of middle nobility; execution movement
- 2.6 Resources and strategic objectives
- 2.7 Prussia; struggle for Baltic area domination
- 2.8 Wars with Moscow
- 2.9 The Jagiellons and the Habsburgs; Ottoman Empire expansion
- 2.10 Livonia; struggle for Baltic area domination
- 2.11 Poland and Lithuania in real union under Sigismund II
- 2.12 The Commonwealth: multicultural, magnate dominated
- 2.13 Jewish settlement
- 3 See also
- 4 Notes
- 5 References
- 6 Further reading
Late Middle Ages (14th–15th century)
In 1385 the Union of Krewo was signed between Queen Jadwiga of Poland and Jogaila, the Grand Duke of Lithuania, the last pagan state in Europe. The act arranged for Jogaila's baptism (after which Jogaila was known in Poland by his baptismal name, Władysław, and the Polish version of his Lithuanian name, Jagiełło) (Zamoyski, the Polish Way) and for the couple's marriage and constituted the beginning of the Polish–Lithuanian union. The Union strengthened both nations in their shared opposition to the Teutonic Knights and the growing threat of the Grand Duchy of Moscow.
Vast expanses of Rus' lands, including the Dnieper River basin and extending south to the Black Sea, were at that time under Lithuanian control. Lithuania fought the invading Mongols and had taken advantage of the power vacuum in the south and east resulting from the Mongol destruction of Kievan Rus'. The population of the Grand Duchy's enlarged territory was accordingly heavily Ruthenian and Eastern Orthodox. The territorial expansion brought Lithuania into confrontation with the Grand Duchy of Moscow, which was itself emerging from Tatar rule and expanding.
Uniquely in Europe, the union connected two states geographically located on the opposite sides of the great civilizational divide between the Western or Latin, and the Eastern or Byzantine worlds. The consequences of this fact would be felt throughout the history of the region that, at the time of the Union of Krewo, comprised Poland and Lithuania.
The Union's intention was to create a common state under King Władysław Jagiełło, but the Polish ruling oligarchy's idea of incorporation of Lithuania into Poland turned out to be unrealistic. There were going to be territorial disputes and warfare between Poland and Lithuania or Lithuanian factions; the Lithuanians at times had even found it expedient to conspire with the Teutonic Knights against the Poles. Geographic consequences of the dynastic union and the preferences of the Jagiellonian kings accelerated the process of reorientation of Polish territorial priorities to the east.
Between 1386 and 1572 Poland and Lithuania, joined until 1569 by a personal union, were ruled by a succession of constitutional monarchs of the Jagiellonian dynasty. The political influence of the Jagiellonian kings was diminishing during this period, which was accompanied by the ever increasing role in central government and national affairs of landed nobility.[a] The royal dynasty however had a stabilizing effect on Poland's politics. The Jagiellonian Era is often regarded as a period of maximum political power, great prosperity, and in its later stage, the Golden Age of Polish culture.
Social and economic developments
The 13th and 14th century feudal rent system, under which each estate had well defined rights and obligations, degenerated around the 15th century, as the nobility tightened their control of the production, trade and other economic activities, created many directly owned agricultural enterprises known as folwarks (feudal rent payments were being replaced with forced labor on the lord's land), limited the rights of the cities and pushed most of the peasants into serfdom. Such practices were increasingly sanctioned by the law. For example, the Piotrków Privilege of 1496, granted by King Jan Olbracht, banned rural land purchases by townspeople and severely limited the ability of peasant farmers to leave their villages. Polish towns, lacking national representation protecting their class interests, preserved some degree of self-government (city councils and jury courts), and the trades were able to organize and form guilds. The nobility soon excused themselves from their principal duty – mandatory military service in case of war (pospolite ruszenie). The nobility's split into two main layers was institutionalized (never legally formalized) in the Nihil novi "constitution" of 1505, which required the king to consult the general sejm, that is the Senate (highest level officials), as well as the lower chamber of (regional) deputies, the Sejm proper, before enacting any changes. The masses of ordinary szlachta competed or tried to compete against the uppermost rank of their class, the magnates, for the duration of Poland's independent existence.
Poland and Lithuania in personal union under Jagiełło
The first king of the new dynasty was the Grand Duke of Lithuania Jogaila, or Władysław II Jagiełło as the King of Poland. He was elected a king of Poland in 1386, after becoming a Catholic Christian and marrying Jadwiga of Anjou, daughter of Louis I, who was Queen of Poland in her own right. Latin Rite Christianization of Lithuania followed. Jogaila's rivalry in Lithuania with his cousin Vytautas, opposed to Lithuania's domination by Poland, was settled in 1392 and in 1401 in the Union of Vilnius and Radom: Vytautas became the Grand Duke of Lithuania for life under Jogaila's nominal supremacy. The agreement made possible close cooperation between the two nations, necessary to succeed in the upcoming struggle with the Teutonic Order. The Union of Horodło (1413) specified the relationship further and had granted privileges to the Roman Catholic (as opposed to Eastern Orthodox) portion of Lithuanian nobility.
Struggle with the Teutonic Knights
The Great War of 1409–1411, precipitated by the Lithuanian uprising in the Order controlled Samogitia, included the Battle of Grunwald (Tannenberg), where the Polish and Lithuanian-Rus' armies completely defeated the Teutonic Knights. The offensive that followed lost its impact with the ineffective siege of Malbork (Marienburg). The failure to take the fortress and eliminate the Teutonic (later Prussian) state had for Poland dire historic consequences in the 18th, 19th and 20th centuries. The Peace of Thorn (1411) had given Poland and Lithuania rather modest territorial adjustments, including Samogitia. Afterwards there were negotiations and peace deals that didn't hold, more military campaigns and arbitrations. One attempted, unresolved arbitration took place at the Council of Constance. There in 1415, Paulus Vladimiri, rector of the Kraków Academy, presented his Treatise on the Power of the Pope and the Emperor in respect to Infidels, in which he advocated tolerance, criticized the violent conversion methods of the Teutonic Knights, and postulated that pagans have the right to peaceful coexistence with Christians and political independence. This stage of the Polish-Lithuanian conflict with the Teutonic Order ended with the Treaty of Melno in 1422. Another war (see Battle of Pabaiskas) was concluded in the Peace of Brześć Kujawski in 1435.
Hussite movement; Polish–Hungarian union
During the Hussite Wars (1420–1434), Jagiełło, Vytautas and Sigismund Korybut were involved in political and military maneuvering concerning the Czech crown, offered by the Hussites first to Jagiełło in 1420. Zbigniew Oleśnicki became known as the leading opponent of a union with the Hussite Czech state.
The Jagiellonian dynasty was not entitled to automatic hereditary succession, as each new king had to be approved by nobility consensus. Władysław Jagiełło had two sons late in life from his last wife, Sophia of Halshany. In 1430 the nobility agreed to the succession of the future Władysław III, only after the King gave in and guaranteed the satisfaction of their new demands. In 1434 the old monarch died and his minor son Władysław was crowned; the Royal Council led by Bishop Oleśnicki undertook the regency duties.
In 1438 the Czech anti-Habsburg opposition, mainly Hussite factions, offered the Czech crown to Jagiełło's younger son Casimir. The idea, accepted in Poland over Oleśnicki's objections, resulted in two unsuccessful Polish military expeditions to Bohemia.
After Vytautas' death in 1430 Lithuania became embroiled in internal wars and conflicts with Poland. Casimir, sent as a boy by King Władysław on a mission there in 1440, was surprisingly proclaimed by the Lithuanians a Grand Duke of Lithuania, and stayed in Lithuania.
Oleśnicki gained the upper hand again and pursued his long-term objective of Poland's union with Hungary. At that time Turkey embarked on a new round of European conquests and threatened Hungary, which needed the powerful Polish–Lithuanian ally. Władysław III in 1440 assumed also the Hungarian throne. Influenced by Julian Cesarini, the young king led the Hungarian army against the Ottoman Empire in 1443 and again in 1444. Like his mentor, Władysław Warneńczyk was killed at the Battle of Varna.
Beginning toward the end of Jagiełło's life, Poland was practically governed by a magnate oligarchy led by Oleśnicki. The rule of the dignitaries was actively opposed by various szlachta groups. Their leader Spytek of Melsztyn was killed during an armed confrontation in 1439, which allowed Oleśnicki to purge Poland of the remaining Hussite sympathizers and pursue his other objectives without significant opposition.
Casimir IV Jagiellon
In 1445 Casimir, the Grand Duke of Lithuania, was asked to assume the Polish throne vacated by the death of his brother Władysław. Casimir was a tough negotiator and did not accept the Polish nobility's conditions for his election. He finally arrived in Poland and was crowned in 1447 on his own terms. On becoming King of Poland, Casimir also freed himself from the control the Lithuanian oligarchy had imposed on him; in the Vilnius Privilege of 1447 he declared the Lithuanian nobility to have equal rights with the Polish szlachta. In time Kazimierz Jagiellończyk was able to remove Cardinal Oleśnicki and his group from power, basing his own power on the younger middle nobility camp instead. Casimir also resolved in his favor a conflict with the pope and the local Church hierarchy over the right to fill vacant bishop positions.
War with the Teutonic Order and its resolution
In 1454 the Prussian Confederation, an alliance of Prussian cities and nobility opposed to the increasingly oppressive rule of the Teutonic Knights, asked King Casimir to take over Prussia and stirred up an armed uprising against the Knights. Casimir declared a war on the Order and a formal incorporation of Prussia into the Polish Crown; those events led to the Thirteen Years' War. The weakness of pospolite ruszenie (the szlachta wouldn't cooperate without new across-the-board concessions from Casimir) prevented a takeover of all of Prussia, but in the Second Peace of Thorn (1466) the Knights had to surrender the western half of their territory to the Polish Crown (the areas known afterwards as Royal Prussia, a semi-autonomous entity), and to accept Polish-Lithuanian suzerainty over the remainder (the later Ducal Prussia). Poland regained Pomerelia and with it the all-important access to the Baltic Sea, as well as Warmia. In addition to land warfare, naval battles had taken place, where ships provided by the City of Danzig (Gdańsk) successfully fought Danish and Teutonic fleets.
Other 15th-century Polish territorial gains, or rather revindications, included the Duchy of Oświęcim and Duchy of Zator on Silesia's border with Lesser Poland, and there was notable progress regarding the incorporation of the Piast Masovian duchies into the Crown.
Turkish and Tatar wars
The influence of the Jagiellonian dynasty in Central Europe had been on the rise. In 1471 Casimir's son Władysław became king of Bohemia, and in 1490 also of Hungary. The southern and eastern outskirts of Poland and Lithuania became threatened by Turkish invasions beginning in the late 15th century. Moldavia's involvement with Poland goes back to 1387, when Petru I, Hospodar of Moldavia, seeking protection against the Hungarians, paid Jagiełło homage in Lviv, which gave Poland access to the Black Sea ports. In 1485 King Casimir undertook an expedition into Moldavia, after its seaports were taken over by the Ottoman Turks. The Turkish-controlled Crimean Tatars raided the eastern territories in 1482 and 1487, until they were confronted by King Jan Olbracht (John Albert), Casimir's son and successor. Poland was attacked in 1487–1491 by remnants of the Golden Horde, which invaded Poland as far as Lublin before being beaten at Zaslavl. King John Albert in 1497 made an attempt to resolve the Turkish problem militarily, but his efforts were unsuccessful, as he was unable to secure effective participation in the war by his brothers, King Ladislaus II of Bohemia and Hungary and Alexander, the Grand Duke of Lithuania, and because of the resistance of Stephen the Great, the ruler of Moldavia. More destructive Tatar raids, instigated by the Ottoman Empire, took place in 1498, 1499 and 1500. John Albert's diplomatic peace efforts that followed were finalized after the king's death in 1503, resulting in a territorial compromise and an unstable truce.
Moscow's threat to Lithuania; Sigismund I
Lithuania was increasingly threatened by the growing power of the Grand Duchy of Moscow. Through the campaigns of 1471, 1492 and 1500 Moscow took over much of Lithuania's eastern possessions. The Grand Duke Alexander was elected King of Poland in 1501, after the death of John Albert. In 1506 he was succeeded by Sigismund I the Old (Zygmunt I Stary) in both Poland and Lithuania, as the political realities were drawing the two states closer together. Prior to that Sigismund had been a Duke of Silesia by the authority of his brother Ladislaus II of Bohemia, but like other Jagiellon rulers before him, he had not pursued the Polish Crown's claim to Silesia.
Culture in the Late Middle Ages
The culture of the 15th century Poland was mostly medieval. Under favorable social and economic conditions the crafts and industries in existence already in the preceding centuries became more highly developed, and their products were much more widespread. Paper production was one of the new industries, and printing developed during the last quarter of the century. In 1473 Kasper Straube produced in Kraków the first Latin print, in 1475 in Wrocław (Breslau) Kasper Elyan printed for the first time in Polish, and after 1490 from Schweipolt Fiol's shop in Kraków came the world's oldest prints in Cyrillic, namely Old Church Slavonic language religious texts.
Luxury items were in high demand among the increasingly prosperous nobility, and to a lesser degree among the wealthy town merchants. Brick and stone residential buildings became common, but only in cities. The mature Gothic style was represented not only in architecture, but also prominently in sacral wooden sculpture. The altar of Veit Stoss in St. Mary's Church in Kraków is one of the most magnificent in Europe art works of its kind.
The Kraków University, which stopped functioning after the death of Casimir the Great, was renewed and rejuvenated around 1400. Augmented by a theology department, the "academy" was supported and protected by Queen Jadwiga and the Jagiellonian dynasty members, which is reflected in its present name. Europe's oldest department of mathematics and astronomy was established in 1405. Among the university's prominent scholars were Stanisław of Skarbimierz, Paulus Vladimiri and Albert of Brudzewo, Copernicus' teacher.
The precursors of Polish humanism, John of Ludzisko and Gregory of Sanok, were professors at the university. Gregory's court was the site of an early literary society at Lwów (Lviv), after he had become the archbishop there. Scholarly thought elsewhere was represented by Jan Ostroróg, a political publicist and reformist, and Jan Długosz, a historian, whose Annals is the largest in Europe history work of his time and a fundamental source for history of medieval Poland. There were also active in Poland distinguished and influential foreign humanists. Filippo Buonaccorsi, a poet and diplomat, who arrived from Italy in 1468 and stayed in Poland until his death in 1496, established in Kraków another literary society. Known as Kallimach, he wrote the lives of Gregory of Sanok, Zbigniew Oleśnicki, and very likely that of Jan Długosz. He tutored and mentored the sons of Casimir IV and postulated unrestrained royal power. Conrad Celtes, a German humanist, organized in Kraków the first in this part of Europe humanist literary and scholarly association Sodalitas Litterarum Vistulana.
Early Modern Era (16th century)
Agriculture-based economic expansion
The folwark, a serfdom-based large-scale farm and agricultural business, was a dominant feature of Poland's economic landscape beginning in the late 15th century and for the next 300 years. This dependence on nobility-controlled agriculture set the development of central-eastern Europe apart from that of the western part of the continent, where, in contrast, elements of capitalism and industrialization were developing to a much greater extent than in the East, with the attendant growth of the bourgeoisie class and its political influence. The combination of the 16th-century agricultural trade boom in Europe with the free or cheap peasant labor available made the folwark economy very profitable during that period.
The 16th century saw also further development of mining and metallurgy, and technical progress took place in various commercial applications. Great quantities of exported agricultural and forest products floated down the rivers and transported by land routes resulted in positive trade balance for Poland throughout the 16th century. Imports from the West included industrial and luxury products and fabrics.
Most of the grain exported was leaving Poland through Danzig (Gdańsk), which because of its location at the terminal point of the Vistula and its tributaries waterway and of its Baltic seaport trade role became the wealthiest, most highly developed, and most autonomous of the Polish cities. It was also by far the largest center of crafts and manufacturing. Other towns were negatively affected by Danzig's near-monopoly in foreign trade, but profitably participated in transit and export activities. The largest of them were Kraków (Cracow), Poznań, Lwów (Lviv), and Warszawa (Warsaw), and outside of the Crown, Breslau (Wrocław). Thorn (Toruń) and Elbing (Elbląg) were the main, after Danzig, cities in Royal Prussia.
Burghers and nobles
During the 16th century, prosperous patrician families of merchants, bankers, or industrial investors, many of German origin, still conducted large-scale business operations in Europe or lent money to Polish noble interests, including the royal court. Some regions were relatively highly urbanized, for example in Greater Poland and Lesser Poland at the end of the 16th century 30% of the population lived in cities. 256 towns were founded, most in Red Ruthenia.[b] The townspeople's upper layer was ethnically multinational and tended to be well-educated. Numerous burgher sons studied at the Academy of Kraków and at foreign universities; members of their group are among the finest contributors to the culture of Polish Renaissance. Unable to form their own nationwide political class, many, despite the legal obstacles, melted into the nobility.
The nobility or szlachta in Poland constituted a greater proportion (up to 10%) of the population, than in other European countries. In principle they were all equal and politically empowered, but some had no property and were not allowed to hold offices, or participate in sejms or sejmiks, the legislative bodies. Of the "landed" nobility some possessed a small patch of land which they tended themselves and lived like peasant families (mixed marriages gave some peasants one of the few possible paths to nobility), while the magnates owned dukedom-like networks of estates with several hundred towns and villages and many thousands of subjects. The 16th century Poland was a "republic of nobles", and it was the nobility's "middle class" that formed the leading component during the later Jagiellonian period and afterwards, but the magnates held the highest state and church offices. At that time szlachta in Poland and Lithuania was ethnically diversified and belonged to various religious denominations. During this period of tolerance such factors had little bearing on one's economic status or career potential. Jealous of their class privilege ("freedoms"), the Renaissance szlachta developed a sense of public service duties, educated their youth, took keen interest in current trends and affairs and traveled widely. While the Golden Age of Polish Culture adopted the western humanism and Renaissance patterns, the style of the nobles beginning in the second half of the century acquired a distinctly eastern flavor. Visiting foreigners often remarked on the splendor of the residencies and consumption-oriented lifestyle of wealthy Polish nobles.
Reformation
In a situation analogous with that of other European countries, the progressive internal decay of the Polish Church created conditions favorable for the dissemination of Reformation ideas and currents. For example, there was a chasm between the lower clergy and the nobility-based Church hierarchy, which was quite laicized, preoccupied with temporal issues such as power and wealth, and often corrupt. The middle nobility, which had already been exposed to the Hussite reformist persuasion, increasingly looked at the Church's many privileges with envy and hostility.
The teachings of Martin Luther were accepted most readily in the regions with strong German connections: Silesia, Greater Poland, Pomerania and Prussia. In Danzig (Gdańsk) in 1525 a lower-class Lutheran social uprising took place, bloodily subdued by Sigismund I; after the reckoning he established a representation for the plebeian interests as a segment of the city government. Königsberg and the Duchy of Prussia under Albrecht Hohenzollern became a strong center of Protestant propaganda dissemination affecting all of northern Poland and Lithuania. Sigismund I quickly reacted against the "religious novelties", issuing his first related edict in 1520, banning any promotion of the Lutheran ideology, or even foreign trips to the Lutheran centers. Such attempted (poorly enforced) prohibitions continued until 1543.
Sigismund's son Sigismund II Augustus (Zygmunt II August), a monarch of a much more tolerant attitude, guaranteed the freedom of the Lutheran religion practice in all of Royal Prussia by 1559. Besides Lutheranism, which, within the Polish Crown, ultimately found substantial following mainly in the cities of Royal Prussia and western Greater Poland, the teachings of the persecuted Anabaptists and Unitarians, and in Greater Poland the Czech Brothers, were met, at least among the szlachta, with a more sporadic response.
In Royal Prussia, 41% of the parishes were counted as Lutheran in the second half of the 16th century, but that percentage kept increasing. According to Kasper Cichocki, who wrote in the early 17th century, only remnants of Catholicism were left there in his time. Lutheranism was strongly dominant in Royal Prussia throughout the 17th century, with the exception of Warmia (Ermland).
Around 1570, of the at least 700 Protestant congregations in Poland-Lithuania, over 420 were Calvinist and over 140 Lutheran, the latter including 30–40 ethnically Polish congregations. Protestants encompassed approximately 1/2 of the magnate class, 1/4 of the other nobility and of the townspeople, and 1/20 of the non-Orthodox peasantry. The bulk of the Polish-speaking population remained Catholic, but the proportion of Catholics was significantly diminished within the upper social ranks.
Calvinism, on the other hand, gained many followers in the mid-16th century among both the szlachta and the magnates, especially in Lesser Poland and Lithuania. The Calvinists, who, led by Jan Łaski, were working on a unification of the Protestant churches, proposed the establishment of a Polish national church under which all Christian denominations, including the Eastern Orthodox (very numerous in the Grand Duchy of Lithuania and Ukraine), would be united. After 1555 Sigismund II, who accepted their ideas, sent an envoy to the pope, but the papacy rejected the various Calvinist postulates. Łaski and several other Calvinist scholars published the Bible of Brest in 1563, a complete Polish Bible translation from the original languages, an undertaking financed by Mikołaj Radziwiłł the Black. After 1563–1565 (the abolition of state enforcement of Church jurisdiction), full religious tolerance became the norm. The Polish Catholic Church emerged from this critical period weakened but not badly damaged (the bulk of the Church property was preserved), which facilitated the later success of the Counter-Reformation.
Among the Calvinists, who also included the lower classes and their leaders, ministers of common background, disagreements soon developed, based on different views in the areas of religious and social doctrines. The official split took place in 1562, when two separate churches were officially established, the mainstream Calvinist, and the smaller, more reformist, known as the Polish Brethren or Arians. The adherents of the radical wing of the Polish Brethren promoted, often by way of personal example, the ideas of social justice. Many Arians (Piotr of Goniądz, Jan Niemojewski) were pacifists opposed to private property, serfdom, state authority and military service; through communal living some had implemented the ideas of shared usage of the land and other property. A major Polish Brethren congregation and center of activities was established in 1569 in Raków near Kielce, and lasted until 1638, when Counter-Reformation had it closed. The notable Sandomierz Agreement of 1570, an act of compromise and cooperation among several Polish Protestant denominations, excluded the Arians, whose more moderate, larger faction toward the end of the century gained the upper hand within the movement.
The act of the Warsaw Confederation, which took place during the convocation sejm of 1573, provided guarantees, at least for the nobility, of religious freedom and peace. It gave the Protestant denominations, including the Polish Brethren, formal rights for many decades to come. Uniquely in 16th-century Europe, it turned the Commonwealth, in the words of Cardinal Stanislaus Hosius, a Catholic reformer, into a "safe haven for heretics".
Culture of Polish Renaissance
Golden Age of Polish culture
The Polish "Golden Age", the period of the reigns of Sigismund I and Sigismund II, the last two Jagiellonian kings, or more generally the 16th century, is most often identified with the rise of the culture of Polish Renaissance. The cultural flowering had its material base in the prosperity of the elites, both the landed nobility and urban patriciate at such centers as Cracow and Danzig. As was the case with other European nations, the Renaissance inspiration came in the first place from Italy, a process accelerated to some degree by Sigismund I's marriage to Bona Sforza. Many Poles traveled to Italy to study and to learn its culture. As imitating Italian ways became very trendy (the royal courts of the two kings provided the leadership and example for everybody else), many Italian artists and thinkers were coming to Poland, some settling and working there for many years. While the pioneering Polish humanists, greatly influenced by Erasmus of Rotterdam, accomplished the preliminary assimilation of the antiquity culture, the generation that followed was able to put greater emphasis on the development of native elements, and because of its social diversity, advanced the process of national integration.
Literacy, education and patronage of intellectual endeavors
Beginning in 1473 in Cracow (Kraków), the printing business kept growing. By the turn of the 16th/17th century there were about 20 printing houses within the Commonwealth, 8 in Cracow, the rest mostly in Danzig (Gdańsk), Thorn (Toruń) and Zamość. The Academy of Kraków and Sigismund II possessed well-stocked libraries; smaller collections were increasingly common at noble courts, schools and townspeople's households. Illiteracy levels were falling, as by the end of the 16th century almost every parish ran a school.
The Lubrański Academy, an institution of higher learning, was established in Poznań in 1519. The Reformation resulted in the establishment of a number of gymnasiums, academically oriented secondary schools, some of international renown, as the Protestant denominations wanted to attract supporters by offering high quality education. The Catholic reaction was the creation of Jesuit colleges of comparable quality. The Kraków University in turn responded with humanist program gymnasiums of its own.
The university itself experienced a period of prominence at the turn of the 15th/16th century, when especially the mathematics, astronomy and geography faculties attracted numerous students from abroad. Latin, Greek, Hebrew and their literatures were likewise popular. By the mid 16th century the institution entered a crisis stage, and by the early 17th century regressed into Counter-reformational conformism. The Jesuits took advantage of the infighting and established in 1579 a university college in Vilnius, but their efforts aimed at taking over the Academy of Kraków were unsuccessful. Under the circumstances many elected to pursue their studies abroad.
Zygmunt I Stary, who built the presently existing Wawel Renaissance castle, and his son Sigismund II Augustus, supported intellectual and artistic activities and surrounded themselves with the creative elite. Their patronage example was followed by ecclesiastic and lay feudal lords, and by patricians in major towns.
Polish science reached its culmination in the first half of the 16th century, as the medieval point of view was criticized and more rational explanations were attempted. Copernicus' De revolutionibus orbium coelestium, published in Nuremberg in 1543, shook up the traditional value system as it extended into an understanding of the physical universe, doing away with the Ptolemaic anthropocentric model adopted by Christianity and setting free an explosion of scientific inquiry. Generally, the prominent scientists of the period resided in many different regions of the country, and increasingly the majority were of urban rather than noble origin.
Nicolaus Copernicus, the son of a trader from Kraków who had settled in Toruń, made many contributions to science and the arts. His scientific creativity was inspired at the University of Kraków, then at the institution's height; he later also studied at Italian universities. Copernicus wrote Latin poetry, developed an economic theory, functioned as a cleric-administrator and political activist in Prussian sejmiks, and led the defense of Olsztyn against the forces of Albrecht Hohenzollern. As an astronomer, he worked on his scientific theory for many years at Frombork, where he died.
Josephus Struthius became famous as a physician and medical researcher. Bernard Wapowski was a pioneer of Polish cartography. Maciej Miechowita, a rector at the Cracow Academy, published in 1517 Tractatus de duabus Sarmatiis, a treatise on the geography of the East, an area in which Polish investigators provided first-hand expertise for the rest of Europe.
Andrzej Frycz Modrzewski was one of the greatest theorists of political thought in Renaissance Europe. His most famous work, On the Improvement of the Commonwealth, was published in Kraków in 1551. Modrzewski criticized the feudal societal relations and proposed broad, realistic reforms. He postulated that all social classes should be subject to the law to the same degree and wanted to moderate the existing inequities. Modrzewski, an influential and often translated author, was a passionate proponent of the peaceful resolution of international conflicts. Bishop Wawrzyniec Goślicki (Goslicius), who wrote and in 1568 published a study entitled De optimo senatore (The Counsellor in the 1598 English translation), was another political thinker who was popular and influential in the West.
Historian Marcin Kromer wrote De origine et rebus gestis Polonorum (On the origin and deeds of Poles) in 1555 and in 1577 Polonia, a treatise highly regarded in Europe. Marcin Bielski's Chronicle of the Whole World, a universal history, was written ca. 1550. The chronicle of Maciej Stryjkowski (1582) covered the history of Eastern Europe.
Modern Polish literature begins in the 16th century. At that time the Polish language, common to all educated groups, matured and penetrated all areas of public life, including municipal institutions, the legal code, the Church and other official uses, coexisting for a while with Latin. Klemens Janicki, one of the Renaissance Latin language poets, a laureate of a papal distinction, was of peasant origin. Another plebeian author, Biernat of Lublin, wrote his own version of Aesop's fables in Polish, permeated with his socially radical views.
A breakthrough for the literary Polish language came under the influence of the Reformation with the writings of Mikołaj Rej. In his Brief Discourse, a satire published in 1543, he defends a serf from a priest and a noble, but in his later works he often celebrates the joys of the peaceful but privileged life of a country gentleman. Rej, whose legacy is his unabashed promotion of the Polish language, left a great variety of literary pieces. Łukasz Górnicki, an author and translator, perfected the Polish prose of the period. His contemporary and friend Jan Kochanowski became one of the greatest Polish poets of all time.
Kochanowski was born in 1530 into a prosperous noble family. In his youth he studied at the universities of Kraków, Königsberg and Padua and traveled extensively in Europe. He worked for a time as a royal secretary and then settled in the village of Czarnolas, a part of his family inheritance. Kochanowski's multifaceted creative output is remarkable both for the depth of thought and feeling that he shares with the reader and for its beauty and classic perfection of form. Among Kochanowski's best-known works are the bucolic Fraszki (Trifles), epic poetry, religious lyrics, the drama-tragedy The Dismissal of the Greek Envoys, and the most highly regarded Threnodies, or laments, written after the death of his young daughter.
Following European, and in particular Italian, musical trends, Renaissance music was developing in Poland, centered on royal court patronage and branching out from there. Sigismund I kept a permanent choir at the Wawel castle from 1543, while the Reformation brought large-scale group church singing in Polish during services. Jan of Lublin wrote a comprehensive tablature for the organ and other keyboard instruments. Among the composers, who often permeated their music with national and folk elements, were Wacław of Szamotuły, Mikołaj Gomółka, who wrote music for Kochanowski's translations of the psalms, and Mikołaj Zieleński, who enriched Polish music by adopting the polyphonic style of the Venetian School.
Architecture, sculpture and painting
Architecture, sculpture and painting developed also under Italian influence from the beginning of the 16th century. A number of professionals from Tuscany arrived and worked as royal artists in Kraków. Francesco Fiorentino worked on the tomb of Jan Olbracht already from 1502, and then together with Bartolommeo Berrecci and Benedykt from Sandomierz rebuilt the royal castle, which was accomplished between 1507 and 1536. Berrecci also built Sigismund's Chapel at Wawel Cathedral. Polish magnates, Silesian Piast princes in Brzeg, and even Kraków merchants (by the mid 16th century their class economically gained strength nationwide) built or rebuilt their residencies to make them resemble the Wawel Castle. Kraków's Sukiennice and Poznań City Hall are among numerous buildings rebuilt in the Renaissance manner, but Gothic construction continued alongside for a number of decades.
Between 1580 and 1600 Jan Zamoyski commissioned the Venetian architect Bernardo Morando to build the city of Zamość. The town and its fortifications were designed to consistently implement the Renaissance and Mannerism aesthetic paradigms.
Tombstone sculpture, often inside churches, is richly represented on graves of clergy and lay dignitaries and other wealthy individuals. Jan Maria Padovano and Jan Michałowicz of Urzędów count among the prominent artists.
Painted illuminations in Balthasar Behem Codex are of exceptional quality, but draw their inspiration largely from Gothic art. Stanisław Samostrzelnik, a monk in the Cistercian monastery in Mogiła near Kraków, painted miniatures and polychromed wall frescos.
Republic of middle nobility; execution movement
The Polish political system in the 16th century was contested terrain as the middle gentry (szlachta) sought power. Kings Sigismund I the Old and Sigismund II Augustus manipulated political institutions to block the gentry. The kings used their appointment power and influence on the elections to the Sejm. They issued propaganda upholding the royal position and provided financing to favoured leaders of the gentry. Seldom did the kings resort to repression or violence. Compromises were reached so that in the second half of the 16th century—for the only time in Polish history—the "democracy of the gentry" was implemented.
During the reign of Sigismund I, the szlachta in the lower chamber of the general sejm (from 1493 a bicameral legislative body), initially decidedly outnumbered by their more privileged colleagues from the senate (as the prelates and barons of the royal council, appointed for life, were now called), acquired a more numerous and fully elected representation. Sigismund, however, preferred to rule with the help of the magnates, pushing the szlachta into the "opposition".
After the Nihil novi act of 1505, a collection of laws known as Łaski's Statutes was published in 1506 and distributed to Polish courts. The legal pronouncements, intended to facilitate the functioning of a uniform and centralized state, with ordinary szlachta privileges strongly protected, were frequently ignored by the kings, beginning with Sigismund I, and the upper nobility or church interests. This situation became the basis for the formation around 1520 of the szlachta's execution movement, for the complete codification and execution, or enforcement, of the laws.
In 1518 Sigismund I married Bona Sforza d'Aragona, a young, strong-minded Italian princess. Bona's sway over the king and the magnates, her efforts to strengthen the monarch's political position, financial situation, and especially the measures she took to advance her personal and dynastic interests, including the forced royal election of the minor Sigismund Augustus in 1529 and his premature coronation in 1530, increased the discontent among szlachta activists.
The opposition middle-szlachta movement came up with a constructive reform program during the Kraków sejm of 1538/1539. Among the movement's demands were the termination of the kings' practice of alienation of the royal domain (the giving or selling of land estates to great lords at the monarch's discretion), and a ban on the concurrent holding of multiple state offices by the same person, both legislated initially in 1504. Sigismund I's unwillingness to move toward the implementation of the reformers' goals negatively affected the country's financial and defensive capabilities.
The relationship with the szlachta only worsened during the early years of the reign of Sigismund II Augustus and remained bad until 1562. Sigismund Augustus' secret marriage to Barbara Radziwiłł in 1547, before his accession to the throne, was strongly opposed by his mother Bona and by the magnates of the Crown. Sigismund, who took over the reign after his father's death in 1548, overcame the resistance and had Barbara crowned in 1550; a few months later the new queen died. Bona, estranged from her son, returned to Italy in 1556, where she died soon afterwards.
The Sejm, until 1573 summoned by the king at his discretion (for example when he needed funds to wage a war) and composed of two chambers presided over by the monarch, became in the course of the 16th century the main organ of state power. The reform-minded execution movement had its chance to take on the magnates and the church hierarchy (and to take steps to restrain their abuse of power and wealth) when Sigismund Augustus switched sides and lent them his support at the sejm of 1562. During this and several more sessions of parliament over the next decade or so, the Reformation-inspired szlachta was able to push through a variety of reforms, which resulted in a fiscally more sound, better governed, more centralized and territorially unified Polish state. Some of the changes were too modest, and others were never completely implemented (e.g. the recovery of usurped Crown land), but for the time being the middle-szlachta movement was victorious.
Resources and strategic objectives
Despite the favorable economic development, the military potential of 16th century Poland was modest in relation to the challenges and threats coming from several directions, which included the Ottoman Empire, the Teutonic state, the Habsburgs, and Muscovy. Given the declining military value and willingness of pospolite ruszenie, the bulk of the forces available consisted of professional and mercenary soldiers. Their number and provision depended on szlachta-approved funding (self-imposed taxation and other sources) and tended to be insufficient for any combination of adversaries. The quality of the forces and their command was good, as demonstrated by victories against a seemingly overwhelming enemy. The attainment of strategic objectives was supported by a well-developed service of knowledgeable diplomats and emissaries. Because of the limited resources at the state's disposal, the Jagiellonian Poland had to concentrate on the area most crucial for its security and economic interests, which was the strengthening of Poland's position along the Baltic coast.
Prussia; struggle for Baltic area domination
The Peace of Thorn of 1466 reduced the Teutonic Knights, but brought no lasting solution to the problem they presented for Poland and their state avoided paying the prescribed tribute. The chronically difficult relations had gotten worse after the 1511 election of Albrecht as Grand Master of the Order. Faced with Albrecht's rearmament and hostile alliances, Poland waged a war in 1519; the war ended in 1521, when mediation by Charles V resulted in a truce. As a compromise move Albrecht, persuaded by Martin Luther, initiated a process of secularization of the Order and the establishment of a lay duchy of Prussia, as Poland's dependency, ruled by Albrecht and afterwards by his descendants. The terms of the proposed pact immediately improved Poland's Baltic region situation, and at that time also appeared to protect the country's long-term interests. The treaty was concluded in 1525 in Kraków; the remaining state of the Teutonic Knights (East Prussia centered on Königsberg) was converted into the Protestant (Lutheran) Duchy of Prussia under the King of Poland and the homage act of the new Prussian duke in Kraków followed.
In reality the House of Hohenzollern, of which Albrecht was a member, the ruling family of the Margraviate of Brandenburg, had been actively expanding its territorial influence, for example in Farther Pomerania and Silesia already in the 16th century. Motivated by current political expediency, Sigismund Augustus in 1563 allowed the Brandenburg elector branch of the Hohenzollerns, excluded under the 1525 agreement, to inherit the Prussian fief rule. The decision, confirmed by the 1569 sejm, made the future union of Prussia with Brandenburg possible. Sigismund II, unlike his successors, was however careful to assert his supremacy. The Polish–Lithuanian Commonwealth, ruled after 1572 by elective kings, was even less able to counteract the growing importance of the dynastically active Hohenzollerns.
In 1568 Sigismund Augustus, who had already embarked on a war fleet enlargement program, established the Maritime Commission. A conflict with the City of Gdańsk (Danzig), which felt that its monopolistic trade position was threatened, ensued. In 1569 Royal Prussia had its legal autonomy largely taken away, and in 1570 Poland's supremacy over Danzig and the Polish King's authority over the Baltic shipping trade were regulated and received statutory recognition (Karnkowski's Statutes).
Wars with Moscow
In the 16th century the Grand Duchy of Moscow continued activities aimed at unifying the old Rus' lands still under Lithuanian rule. The Grand Duchy of Lithuania had insufficient resources to counter Moscow's advances, already having to control the Rus' population within its borders and not being able to count on loyalty of Rus' feudal lords. As a result of the protracted war at the turn of the 15th and 16th centuries, Moscow acquired large tracts of territory east of the Dnieper River. Polish assistance and involvement were increasingly becoming a necessary component of the balance of power in the eastern reaches of the Lithuanian domain.
Under Vasili III, Moscow fought a war with Lithuania and Poland between 1512 and 1522, during which, in 1514, the Russians took Smolensk. That same year a Polish–Lithuanian relief expedition under Hetman Konstanty Ostrogski won the Battle of Orsha and stopped the Duchy of Moscow's further advances. An armistice implemented in 1522 left the Smolensk land and Severia in Russian hands. Another round of fighting took place during 1534–1537, when Polish aid led by Hetman Jan Tarnowski made possible the taking of Gomel and the capture of fiercely defended Starodub. A new truce (under which Lithuania kept only Gomel), stabilization of the border, and over two decades of peace followed.
The Jagiellons and the Habsburgs; Ottoman Empire expansion
In 1515, during a congress in Vienna, a dynastic succession arrangement was agreed to between Maximilian I, Holy Roman Emperor and the Jagiellon brothers, Vladislas II of Bohemia and Hungary and Sigismund I of Poland and Lithuania. It was supposed to end the Emperor's support for Poland's enemies, the Teutonic and Russian states, but after the election of Charles V, Maximilian's successor in 1519, the relations with Sigismund had worsened.
The Jagiellon rivalry with the House of Habsburg in central Europe was ultimately resolved to the Habsburgs' advantage. The decisive factor that damaged or weakened the monarchies of the last Jagiellons was the Ottoman Empire's Turkish expansion. Hungary's vulnerability greatly increased after Suleiman the Magnificent took the Belgrade fortress in 1521. To prevent Poland from extending military aid to Hungary, Suleiman had a Tatar-Turkish force raid southeastern Poland–Lithuania in 1524. The Hungarian army was defeated in 1526 at the Battle of Mohács, where the young Louis II Jagiellon, son of Vladislas II, was killed. Subsequently, after a period of internal strife and external intervention, Hungary was partitioned between the Habsburgs and the Ottomans.
The 1526 death of Janusz III of Masovia, the last of the Masovian Piast dukes line (a remnant of the fragmentation period divisions), enabled Sigismund I to finalize the incorporation of Masovia into the Polish Crown in 1529.
From the early 16th century the Pokuttya border region was contested by Poland and Moldavia (see Battle of Obertyn). A peace with Moldavia took effect in 1538 and Pokuttya remained Polish. An "eternal peace" with the Ottoman Empire was negotiated by Poland in 1533 to secure frontier areas. Moldavia had fallen under Turkish domination, but Polish-Lithuanian magnates remained actively involved there. Sigismund II Augustus even claimed "jurisdiction" and in 1569 accepted a formal, short-lived suzerainty over Moldavia.
Livonia; struggle for Baltic area domination
Because of its desire to control Livonian Baltic seaports, especially Riga, and other economic reasons, in the 16th century the Grand Duchy of Lithuania was becoming increasingly interested in extending its territorial rule to Livonia, a country, by the 1550s largely Lutheran, traditionally ruled by the Brothers of the Sword knightly order. This put Poland and Lithuania on a collision course with Moscow and other powers, which had also attempted expansion in that area.
Soon after the 1525 Kraków (Cracow) treaty, Albrecht (Albert) of Hohenzollern, seeking a dominant position for his brother Wilhelm, the Archbishop of Riga, planned a Polish–Lithuanian fief in Livonia. What happened instead was the establishment of a Livonian pro-Polish–Lithuanian party or faction. Internal fighting in Livonia took place when the Grand Master of the Brothers concluded a treaty with Moscow in 1554, declaring his state's neutrality regarding the Russian–Lithuanian conflict. Supported by Albrecht and the magnates, Sigismund II declared war on the Order. Grand Master Wilhelm von Fürstenberg accepted the Polish–Lithuanian conditions without a fight, and according to the 1557 Poswol treaty, a military alliance obliged the Livonian state to support Lithuania against Moscow.
Other powers aspiring to Livonian Baltic access responded by partitioning the Livonian state, which triggered the lengthy Livonian War, fought between 1558 and 1583. Ivan IV of Russia took Dorpat (Tartu) and Narva in 1558, and soon the Danes and Swedes had occupied other parts of the country. To protect the integrity of their country, the Livonians now sought a union with the Polish–Lithuanian state. Gotthard Kettler, the new Grand Master, met with Sigismund Augustus in Vilnius (Vilna, Wilno) in 1561 and declared Livonia a vassal state under the Polish King. The agreement of November 28 called for the secularization of the Brothers of the Sword Order and the incorporation of the newly established Duchy of Livonia into the Rzeczpospolita ("Republic") as an autonomous entity. Under the Union of Vilnius the Duchy of Courland and Semigallia was also created as a separate fief, to be ruled by Kettler. Sigismund II obliged himself to recover the parts of Livonia lost to Moscow and the Baltic powers, which led to grueling wars with Russia (1558–1570 and 1577–1582) and heavy struggles over the fundamental issues of control of the Baltic trade and freedom of navigation.
The Baltic region policies of the last Jagiellon king and his advisors were the most mature of the 16th century Poland's strategic programs. The outcome of the efforts in that area was to a considerable extent successful for the Commonwealth. The conclusion of the above wars took place during the reign of King Stephen Báthory.
Poland and Lithuania in real union under Sigismund II
Sigismund II's childlessness added urgency to the idea of turning the personal union between Poland and the Grand Duchy of Lithuania into a more permanent and tighter relationship; it was also a priority for the execution movement. Lithuania's laws were codified and reforms enacted in 1529, 1557, 1565–1566 and 1588, gradually making its social, legal and economic system similar to that of Poland, with the expanding role of the middle and lower nobility. Fighting wars with Moscow under Ivan IV and the threat perceived from that direction provided additional motivation for the real union for both Poland and Lithuania.
The process of negotiating the actual arrangements turned out to be difficult and lasted from 1563 to 1569, with the Lithuanian magnates, worried about losing their dominant position, being at times uncooperative. It took Sigismund II's unilateral declaration of the incorporation into the Polish Crown of substantial disputed border regions, including most of Lithuanian Ukraine, to make the Lithuanian magnates rejoin the process and participate in the swearing of the act of the Union of Lublin on July 1, 1569. Lithuania, for the near future, was becoming more secure on the eastern front. Its increasingly Polonized nobility made great contributions to the Commonwealth's culture in the coming centuries, but at the cost of Lithuanian national development.
The Lithuanian language survived as a peasant vernacular and also as a written language in religious use, beginning with the publication of the Lithuanian Catechism by Martynas Mažvydas in 1547. The Ruthenian language was and remained in official use in the Grand Duchy even after the Union, until it was gradually superseded by Polish.
The Commonwealth: multicultural, magnate dominated
By the Union of Lublin a unified Polish–Lithuanian Commonwealth (Rzeczpospolita) was created, stretching from the Baltic Sea and the Carpathian mountains to present-day Belarus and western and central Ukraine (which earlier had been Kievan Rus' principalities). Within the new federation some degree of formal separateness of Poland and Lithuania was retained (distinct state offices, armies, treasuries and judicial systems), but the union became a multinational entity with a common monarch, parliament, monetary system and foreign-military policy, in which only the nobility enjoyed full citizenship rights. Moreover, the nobility's uppermost stratum was about to assume the dominant role in the Commonwealth, as magnate factions were acquiring the ability to manipulate and control the rest of szlachta to their clique's private advantage. This trend, facilitated further by the liberal settlement and land acquisition consequences of the union, was becoming apparent at the time of, or soon after the 1572 death of Sigismund Augustus, the last monarch of the Jagiellonian dynasty.
One of the most salient characteristics of the newly established Commonwealth was its multiethnicity, and accordingly its diversity of religious faiths and denominations. Among the peoples represented were Poles (about 50% or less of the total population), Lithuanians, Latvians, Rus' people (corresponding to today's Belarusians, Ukrainians, Russians or their East Slavic ancestors), Germans, Estonians, Jews, Armenians, Tatars and Czechs, as well as smaller West European groups. As for the main social segments in the early 17th century, nearly 70% of the Commonwealth's population were peasants, over 20% residents of towns, and less than 10% nobles and clergy combined. The total population, estimated at 8–10 million, kept growing dynamically until the middle of the century. The Slavic populations of the eastern lands, Rus' or Ruthenia, were solidly Eastern Orthodox, except for the Polish colonizing nobility (and Polonized elements of the local nobility), which portended future trouble for the Commonwealth.
Poland had become the home to Europe's largest Jewish population, as royal edicts guaranteeing Jewish safety and religious freedom, issued during the 13th century (Bolesław the Pious, Statute of Kalisz of 1264), contrasted with bouts of persecution in Western Europe. This persecution intensified following the Black Death of 1348–1349, when some in the West blamed the outbreak of the plague on the Jews. As scapegoats were sought, the increased Jewish persecution led to pogroms and mass killings in a number of German cities, which caused an exodus of survivors heading east. Much of Poland was spared from the Black Death, and Jewish immigration brought their valuable contributions and abilities to the rising state. The number of Jews in Poland kept increasing throughout the Middle Ages; the population had reached about 30,000 toward the end of the 15th century, and, as refugees escaping further persecution elsewhere kept streaming in, 150,000 in the 16th century. A royal privilege issued in 1532 granted the Jews freedom to trade anywhere within the kingdom. Massacres and expulsions from many German states continued until 1552–1553. By the mid-16th century, 80% of the world's Jews lived and flourished in Poland and in Lithuania; most of western and central Europe was by that time closed to Jews. In Poland–Lithuania the Jews were increasingly finding employment as managers and intermediaries, facilitating the functioning of and collecting revenue in huge magnate-owned land estates, especially in the eastern borderlands, developing into an indispensable mercantile and administrative class. Despite the partial resettlement of Jews in Western Europe following the Thirty Years' War (1618–1648), a great majority of world Jewry had lived in Eastern Europe (in the Commonwealth and in the regions further east and south, where many migrated), until the 1940s.
- History of Poland during the Piast dynasty
- History of Lithuania
- History of the Polish–Lithuanian Commonwealth (1569–1648)
a.^ This is true especially regarding legislative matters and legal framework. Despite the restrictions the nobility imposed on the monarchs, the Polish kings had never become figureheads. In practice they wielded considerable executive power, up to and including the last king, Stanisław August Poniatowski. Some were at times even accused of absolutist tendencies, and it may be for the lack of sufficiently strong personalities or favorable circumstances that none of the kings had succeeded in significant and lasting strengthening of the monarchy.
- Wyrozumski 1986
- Gierowski 1986
- Wyrozumski 1986, pp. 178–180
- Davies 1998, pp. 392, 461–463
- Krzysztof Baczkowski – Dzieje Polski późnośredniowiecznej (1370–1506) (History of Late Medieval Poland (1370–1506)), p. 55; Fogra, Kraków 1999, ISBN 83-85719-40-7
- A Traveller's History of Poland, by John Radzilowski; Northampton, Massachusetts: Interlink Books, 2007, ISBN 1-56656-655-X, p. 63-65
- A Concise History of Poland, by Jerzy Lukowski and Hubert Zawadzki. Cambridge: Cambridge University Press, 2nd edition 2006, ISBN 0-521-61857-6, p. 68-69
- Wyrozumski 1986, pp. 180–190
- A Concise History of Poland, by Jerzy Lukowski and Hubert Zawadzki, p. 41
- Stopka 1999, p. 91
- Wyrozumski 1986, pp. 190–195
- Wyrozumski 1986, pp. 195–198, 201–203
- Wyrozumski 1986, pp. 198–206
- Wyrozumski 1986, pp. 206–207
- Wyrozumski 1986, pp. 207–213
- Stopka 1999, p. 86
- "Russian Interaction with Foreign Lands". Strangelove.net. 2007-10-06. Retrieved 2009-09-19.
- "List of Wars of the Crimean Tatars". Zum.de. Retrieved 2009-09-19.
- Wyrozumski 1986, pp. 213–215
- Krzysztof Baczkowski – Dzieje Polski późnośredniowiecznej (1370–1506) (History of Late Medieval Poland (1370–1506)), p. 302
- Wyrozumski 1986, pp. 215–221
- Wyrozumski 1986, pp. 221–225
- A Concise History of Poland, by Jerzy Lukowski and Hubert Zawadzki, p. 73
- Gierowski 1986, pp. 24–38
- A Concise History of Poland, by Jerzy Lukowski and Hubert Zawadzki, p. 65, 68
- Gierowski 1986, pp. 38–53
- Gierowski 1986, pp. 53–64
- Wacław Urban, Epizod reformacyjny (The Reformation episode), p.30. Krajowa Agencja Wydawnicza, Kraków 1988, ISBN 83-03-02501-5.
- Various authors, ed. Marek Derwich and Adam Żurek, Monarchia Jagiellonów, 1399–1586 (The Jagiellonian Monarchy: 1399–1586), p. 131-132, Urszula Augustyniak. Wydawnictwo Dolnośląskie, Wrocław 2003, ISBN 83-7384-018-4.
- A Concise History of Poland, by Jerzy Lukowski and Hubert Zawadzki, p. 104
- Davies 2005, pp. 118
- Gierowski 1986, pp. 67–71
- Gierowski 1986, pp. 71–74
- Gierowski 1986, pp. 74–79
- Stanisław Grzybowski – Dzieje Polski i Litwy (1506-1648) (History of Poland and Lithuania (1506-1648)), p. 206, Fogra, Kraków 2000, ISBN 83-85719-48-2
- Gierowski 1986, pp. 79–84
- Anita J. Prażmowska – A History of Poland, 2004 Palgrave Macmillan, ISBN 0-333-97253-8, p. 84
- Gierowski 1986, pp. 84–85
- Gierowski 1986, pp. 85–88
- Waclaw Uruszczak, "The Implementation of Domestic Policy in Poland under the Last Two Jagellonian Kings, 1506-1572." Parliaments, Estates & Representation (1987) 7#2 pp 133-144. Issn: 0260-6755, not online
- A Concise History of Poland, by Jerzy Lukowski and Hubert Zawadzki, p. 61
- Gierowski 1986, pp. 92–105
- Basista 1999, p. 104
- Gierowski 1986, pp. 116–118
- A Concise History of Poland, by Jerzy Lukowski and Hubert Zawadzki, p. 48, 50
- Gierowski 1986, pp. 119–121
- Basista 1999, p. 109
- Gierowski 1986, pp. 104–105
- Gierowski 1986, pp. 121–122
- Andrzej Romanowski, Zaszczuć osobnika Jasienicę (Harass the Jasienica individual). Gazeta Wyborcza newspaper wyborcza.pl, 2010-03-12
- Gierowski 1986, pp. 122–125, 151
- Basista 1999, pp. 109–110
- A Concise History of Poland, by Jerzy Lukowski and Hubert Zawadzki, p. 58
- Gierowski 1986, pp. 125–130
- Basista 1999, pp. 115, 117
- Gierowski 1986, pp. 105–109
- Davies 1998, p. 228
- Davies 1998, pp. 392
- A Concise History of Poland, by Jerzy Lukowski and Hubert Zawadzki, p. 81
- Gierowski 1986, pp. 38–39
- Various authors, ed. Marek Derwich, Adam Żurek, Monarchia Jagiellonów (1399–1586) (Jagiellonian monarchy (1399–1586)), p. 160-161, Krzysztof Mikulski. Wydawnictwo Dolnośląskie, Wrocław 2003, ISBN 83-7384-018-4.
- Ilustrowane dzieje Polski (Illustrated History of Poland) by Dariusz Banaszak, Tomasz Biber, Maciej Leszczyński, p. 40. 1996 Podsiedlik-Raniowski i Spółka, ISBN 83-7212-020-X.
- A Traveller's History of Poland, by John Radzilowski, p. 44–45
- Davies 1998, pp. 409–412
- Krzysztof Baczkowski – Dzieje Polski późnośredniowiecznej (1370–1506) (History of Late Medieval Poland (1370–1506)), p. 274–276
- Gierowski 1986, p. 46
- Richard Overy (2010), The Times Complete History of the World, Eighth Edition, p. 116–117. London: Times Books. ISBN 978-0-00-788089-8.
- "European Jewish Congress – Poland". Eurojewcong.org. Retrieved 2009-09-19.
- A Traveller's History of Poland, by John Radzilowski, p. 100, 113
- Gierowski 1986, pp. 144–146, 258–261
- A. Janeczek. "Town and country in the Polish Commonwealth, 1350-1650." In: S. R. Epstein. Town and Country in Europe, 1300-1800. Cambridge University Press. 2004. p. 164.
- Gierowski, Józef Andrzej (1986). Historia Polski 1505–1764 (History of Poland 1505–1764). Warszawa: Państwowe Wydawnictwo Naukowe (Polish Scientific Publishers PWN). ISBN 83-01-03732-6.
- Wyrozumski, Jerzy (1986). Historia Polski do roku 1505 (History of Poland until 1505). Warszawa: Państwowe Wydawnictwo Naukowe (Polish Scientific Publishers PWN). ISBN 83-01-03732-6.
- Stopka, Krzysztof (1999). Andrzej Chwalba, ed. Kalendarium dziejów Polski (Chronology of Polish History). Kraków: Wydawnictwo Literackie. ISBN 83-08-02855-1.
- Basista, Jakub (1999). Andrzej Chwalba, ed. Kalendarium dziejów Polski (Chronology of Polish History). Kraków: Wydawnictwo Literackie. ISBN 83-08-02855-1.
- Davies, Norman (1998). Europe: A History. New York: HarperPerennial. ISBN 0-06-097468-0.
- Davies, Norman (2005). God's Playground: A History of Poland, Volume I. New York: Columbia University Press. ISBN 978-0-231-12817-9.
- The Cambridge History of Poland (two vols., 1941–1950) online edition vol 1 to 1696
- Butterwick, Richard, ed. The Polish-Lithuanian Monarchy in European Context, c. 1500-1795. Palgrave, 2001. 249 pp. online edition
- Davies, Norman. Heart of Europe: A Short History of Poland. Oxford University Press, 1984.
- Davies, Norman. God's Playground: A History of Poland. 2 vol. Columbia U. Press, 1982.
- Pogonowski, Iwo Cyprian. Poland: A Historical Atlas. Hippocrene, 1987. 321 pp.
- Sanford, George. Historical Dictionary of Poland. Scarecrow Press, 2003. 291 pp.
- Stone, Daniel. The Polish-Lithuanian State, 1386-1795. U. of Washington Press, 2001.
- Zamoyski, Adam. The Polish Way. Hippocrene Books, 1987. 397 pp. | https://en.wikipedia.org/wiki/Jagiellon_Poland |
4.1875 | Orbital elements are the parameters required to uniquely identify a specific orbit. In celestial mechanics these elements are generally considered in classical two-body systems, where a Kepler orbit is used (derived from Newton's laws of motion and Newton's law of universal gravitation). There are many different ways to mathematically describe the same orbit, but certain schemes, each consisting of a set of six parameters, are commonly used in astronomy and orbital mechanics.
A real orbit (and its elements) changes over time due to gravitational perturbations by other objects and the effects of relativity. A Keplerian orbit is merely an idealized, mathematical approximation at a particular time.
The traditional orbital elements are the six Keplerian elements, after Johannes Kepler and his laws of planetary motion.
When viewed from an inertial frame, two orbiting bodies trace out distinct trajectories. Each of these trajectories has its focus at the common center of mass. When viewed from a non-inertial frame centred on one of the bodies, only the trajectory of the opposite body is apparent; Keplerian elements describe these non-inertial trajectories. An orbit has two sets of Keplerian elements depending on which body is used as the point of reference. The reference body is called the primary, the other body is called the secondary. The primary does not necessarily possess more mass than the secondary, and even when the bodies are of equal mass, the orbital elements depend on the choice of the primary.
The two main elements that define the shape and size of the ellipse:
- Eccentricity (e) - shape of the ellipse, describing how much it is elongated compared to a circle (not marked in diagram).
- Semimajor axis (a) - the sum of the periapsis and apoapsis distances divided by two. For circular orbits, the semimajor axis is the distance between the centers of the bodies, not the distance of the bodies from the center of mass.
Two elements define the orientation of the orbital plane in which the ellipse is embedded:
- Inclination - vertical tilt of the ellipse with respect to the reference plane, measured at the ascending node (where the orbit passes upward through the reference plane) (green angle i in diagram).
- Longitude of the ascending node - horizontally orients the ascending node of the ellipse (where the orbit passes upward through the reference plane) with respect to the reference frame's vernal point (green angle Ω in diagram).
- Argument of periapsis (ω) defines the orientation of the ellipse in the orbital plane, as an angle measured from the ascending node to the periapsis (the closest point the satellite object comes to the primary object around which it orbits) (blue angle ω in diagram).
- Mean anomaly at epoch (M0) defines the position of the orbiting body along the ellipse at a specific time (the "epoch").
The mean anomaly M is a mathematically convenient "angle" which varies linearly with time, but which does not correspond to a real geometric angle. It can be converted into the true anomaly ν, which does represent the real geometric angle in the plane of the ellipse, between periapsis (closest approach to the central body) and the position of the orbiting object at any given time. Thus, the true anomaly is shown as the red angle ν in the diagram, and the mean anomaly is not shown.
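There is no closed-form conversion from mean anomaly to true anomaly; it is usually done numerically through the eccentric anomaly E, which satisfies Kepler's equation M = E - e sin E. The Python sketch below is an illustration added to this text (not part of the original article); it assumes an elliptical orbit (e < 1) and uses plain Newton iteration, with the function name and tolerance chosen arbitrarily.

```python
import math

def mean_to_true_anomaly(M, e, tol=1e-12, max_iter=50):
    """Convert mean anomaly M (radians) to true anomaly (radians)
    for an elliptical orbit of eccentricity e, with 0 <= e < 1."""
    # Solve Kepler's equation M = E - e*sin(E) for the eccentric anomaly E
    # with Newton's method, starting from E = M.
    E = M
    for _ in range(max_iter):
        dE = (E - e * math.sin(E) - M) / (1.0 - e * math.cos(E))
        E -= dE
        if abs(dE) < tol:
            break
    # Convert the eccentric anomaly to the true anomaly.
    return 2.0 * math.atan2(math.sqrt(1.0 + e) * math.sin(E / 2.0),
                            math.sqrt(1.0 - e) * math.cos(E / 2.0))

# Example: a mean anomaly of 30 degrees on an orbit with e = 0.1
print(math.degrees(mean_to_true_anomaly(math.radians(30.0), 0.1)))
```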
The angles of inclination, longitude of the ascending node, and argument of periapsis can also be described as the Euler angles defining the orientation of the orbit relative to the reference coordinate system.
Note that non-elliptic trajectories also exist, but are not closed, and are thus not orbits. If the eccentricity is greater than one, the trajectory is a hyperbola. If the eccentricity is equal to one and the angular momentum is zero, the trajectory is radial. If the eccentricity is one and there is angular momentum, the trajectory is a parabola.
Six parameters are needed because the two-body problem contains six degrees of freedom. These correspond to the three spatial dimensions which define position (the x, y, z in a Cartesian coordinate system), plus the velocity in each of these dimensions. These can be described as orbital state vectors, but this is often an inconvenient way to represent an orbit, which is why Keplerian elements are commonly used instead.
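To make the relationship between the two representations concrete, the sketch below converts a state vector (position and velocity) into classical Keplerian elements. It is an illustrative example added here, not taken from the article: it assumes an elliptical, inclined, non-circular orbit so that every angle is well defined, consistent units, and a given standard gravitational parameter mu of the central body.

```python
import numpy as np

def rv_to_elements(r_vec, v_vec, mu):
    """Classical orbital elements (a, e, i, raan, argp, nu), angles in radians,
    from position r_vec and velocity v_vec expressed in an inertial frame."""
    r_vec, v_vec = np.asarray(r_vec, float), np.asarray(v_vec, float)
    r, v = np.linalg.norm(r_vec), np.linalg.norm(v_vec)

    h_vec = np.cross(r_vec, v_vec)             # specific angular momentum
    n_vec = np.cross([0.0, 0.0, 1.0], h_vec)   # node line (toward ascending node)

    # Eccentricity vector points from the focus toward periapsis.
    e_vec = ((v**2 - mu / r) * r_vec - np.dot(r_vec, v_vec) * v_vec) / mu
    e = np.linalg.norm(e_vec)

    a = 1.0 / (2.0 / r - v**2 / mu)            # semi-major axis via vis-viva
    i = np.arccos(h_vec[2] / np.linalg.norm(h_vec))

    raan = np.arccos(n_vec[0] / np.linalg.norm(n_vec))
    if n_vec[1] < 0.0:                         # place the node angle in [0, 2*pi)
        raan = 2.0 * np.pi - raan

    argp = np.arccos(np.dot(n_vec, e_vec) / (np.linalg.norm(n_vec) * e))
    if e_vec[2] < 0.0:
        argp = 2.0 * np.pi - argp

    nu = np.arccos(np.dot(e_vec, r_vec) / (e * r))
    if np.dot(r_vec, v_vec) < 0.0:             # object currently moving toward periapsis
        nu = 2.0 * np.pi - nu

    return a, e, i, raan, argp, nu
```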
Sometimes the epoch is considered a "seventh" orbital parameter, rather than part of the reference frame.
If the epoch is defined to be at the moment when one of the elements is zero, the number of unspecified elements is reduced to five. (The sixth parameter is still necessary to define the orbit; it is merely numerically set to zero by convention or "moved" into the definition of the epoch with respect to real-world clock time.)
Other orbital parameters can be computed from the Keplerian elements such as the period, apoapsis, and periapsis. (When orbiting the Earth, the last two terms are known as the apogee and perigee.) It is common to specify the period instead of the semi-major axis in Keplerian element sets, as each can be computed from the other provided the standard gravitational parameter, GM, is given for the central body.
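For example, the period follows from the semi-major axis through T = 2π sqrt(a^3 / GM), and vice versa. The short Python sketch below is an added illustration; the value used for Earth's GM is the commonly quoted constant in km^3/s^2, and the 6778 km example orbit is an arbitrary choice.

```python
import math

MU_EARTH = 398600.4418  # km^3/s^2, standard gravitational parameter GM of Earth

def period_from_semimajor_axis(a, mu=MU_EARTH):
    """Orbital period in seconds from semi-major axis a in km."""
    return 2.0 * math.pi * math.sqrt(a**3 / mu)

def semimajor_axis_from_period(T, mu=MU_EARTH):
    """Semi-major axis in km from orbital period T in seconds."""
    return (mu * (T / (2.0 * math.pi))**2) ** (1.0 / 3.0)

T = period_from_semimajor_axis(6778.0)   # roughly a 400 km altitude Earth orbit
print(T / 60.0)                          # about 92-93 minutes
print(semimajor_axis_from_period(T))     # recovers ~6778 km
```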
Using, for example, the "mean anomaly" instead of "mean anomaly at epoch" means that time must be specified as a "seventh" orbital element. Sometimes it is assumed that mean anomaly is zero at the epoch (by choosing the appropriate definition of the epoch), leaving only the five other orbital elements to be specified.
Different sets of elements are used for various astronomical bodies. The eccentricity, e, and either the semi-major axis, a, or the distance of periapsis, q, are used to specify the shape and size of an orbit. The longitude of the ascending node, Ω, the inclination, i, and the argument of periapsis, ω, or the longitude of periapsis, ϖ, specify the orientation of the orbit in its plane. Either the longitude at epoch, L0, the mean anomaly at epoch, M0, or the time of perihelion passage, T0, is used to specify a known point in the orbit. The choice made depends on whether the vernal equinox or the node is used as the primary reference. The semi-major axis is known if the mean motion and the gravitational mass are known.
It is also quite common to see either the mean anomaly (M) or the mean longitude (L) expressed directly, without either M0 or L0 as intermediary steps, as a polynomial function with respect to time. This method of expression consolidates the mean motion (n) into the polynomial as one of the coefficients. L or M then appear to be expressed in a more complicated manner, but one fewer orbital element appears to be needed.
The mean motion can also be hidden behind a quoted orbital period P rather than stated explicitly.
Euler angle transformations
The angles Ω, i, ω are the Euler angles (corresponding to α, β, γ in the notation of that article) characterizing the orientation of the orbital coordinate frame, given by the unit vectors x, y, z, relative to the inertial coordinate frame, given by the unit vectors I, J, K, where:
- I and J lie in the equatorial plane of the central body: I points in the direction of the vernal equinox, and J is perpendicular to I and together with it defines the reference plane; K is perpendicular to the reference plane.
- x and y lie in the orbital plane, with x pointing toward the periapsis; z is perpendicular to the orbital plane.
The transformation from the I, J, K frame to the x, y, z frame is then given, in terms of the Euler angles Ω, i, ω, by the components of the orbital unit vectors expressed in the inertial frame:
x1 = cos Ω cos ω - sin Ω cos i sin ω,  x2 = sin Ω cos ω + cos Ω cos i sin ω,  x3 = sin i sin ω
y1 = -cos Ω sin ω - sin Ω cos i cos ω,  y2 = -sin Ω sin ω + cos Ω cos i cos ω,  y3 = sin i cos ω
z1 = sin i sin Ω,  z2 = -sin i cos Ω,  z3 = cos i
The inverse transformation, from the frame vectors to the Euler angles, is:
Ω = arg(-z2, z1),  i = arg(z3, sqrt(z1^2 + z2^2)),  ω = arg(y3, x3)
where arg(x, y) signifies the polar argument of the vector (x, y), which can be computed with the standard function atan2(y, x) available in many programming languages.
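The following Python sketch, added here as an illustration, builds the frame vectors from given values of Ω, i, ω using the relations above and then recovers the angles with atan2, making the round trip explicit; the function names are chosen for this example only.

```python
import math

def orbital_frame_vectors(raan, inc, argp):
    """Unit vectors x, y, z of the orbital frame expressed in the inertial
    I, J, K frame, following the relations given above (angles in radians)."""
    cO, sO = math.cos(raan), math.sin(raan)
    ci, si = math.cos(inc), math.sin(inc)
    cw, sw = math.cos(argp), math.sin(argp)
    x = ( cO * cw - sO * ci * sw,  sO * cw + cO * ci * sw,  si * sw)
    y = (-cO * sw - sO * ci * cw, -sO * sw + cO * ci * cw,  si * cw)
    z = ( sO * si,                -cO * si,                 ci)
    return x, y, z

def euler_angles_from_frame(x, y, z):
    """Recover (raan, inc, argp) from the frame vectors; the polar argument
    arg(a, b) is implemented as atan2(b, a)."""
    z1, z2, z3 = z
    raan = math.atan2(z1, -z2)                    # arg(-z2, z1)
    inc = math.atan2(math.hypot(z1, z2), z3)      # arg(z3, sqrt(z1^2 + z2^2))
    argp = math.atan2(x[2], y[2])                 # arg(y3, x3)
    return raan, inc, argp

# Round trip: 40, 30, 60 degrees in, the same angles out.
frame = orbital_frame_vectors(math.radians(40), math.radians(30), math.radians(60))
print([round(math.degrees(a), 6) for a in euler_angles_from_frame(*frame)])
```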
Under ideal conditions of a perfectly spherical central body and zero perturbations, all orbital elements except the mean anomaly are constants. The mean anomaly changes linearly with time, scaled by the mean motion n. Hence if at any instant t0 the orbital parameters are (e0, a0, i0, Ω0, ω0, M0), then the elements at time t0 + δt are given by (e0, a0, i0, Ω0, ω0, M0 + n δt).
Perturbations and elemental variance
Unperturbed, two-body, Newtonian orbits are always conic sections, so the Keplerian elements define an ellipse, parabola, or hyperbola. Real orbits have perturbations, so a given set of Keplerian elements accurately describes an orbit only at the epoch. Evolution of the orbital elements takes place due to the gravitational pull of bodies other than the primary, the nonsphericity of the primary, atmospheric drag, relativistic effects, radiation pressure, electromagnetic forces, and so on.
Keplerian elements can often be used to produce useful predictions at times near the epoch. Alternatively, real trajectories can be modeled as a sequence of Keplerian orbits that osculate ("kiss" or touch) the real trajectory. They can also be described by the so-called planetary equations, differential equations which come in different forms developed by Lagrange, Gauss, Delaunay, Poincaré, or Hill.
Keplerian element parameters can be encoded as text in a number of formats. The most common of them is the NASA/NORAD "two-line elements" (TLE) format, originally designed for use with 80-column punched cards but still in use because it is the most widely supported format and can be handled easily by modern data storage as well.
Depending on the application and object orbit, the data derived from TLEs older than 30 days can become unreliable. Orbital positions can be calculated from TLEs through the SGP/SGP4/SDP4/SGP8/SDP8 algorithms.
Example of a two-line element:
1 27651U 03004A 07083.49636287 .00000119 00000-0 30706-4 0 2692
2 27651 039.9951 132.2059 0025931 073.4582 286.9047 14.81909376225249
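As an added illustration (not part of the original text), the Python sketch below decodes the orbital elements from line 2 of the example above, using the fixed column positions of the standard TLE layout; the whitespace of line 1 as reproduced here may have been altered, so only line 2 is parsed, and the eccentricity field carries an implied leading decimal point.

```python
def parse_tle_line2(line2):
    """Extract the orbital elements encoded on line 2 of a NORAD two-line
    element set, assuming the standard fixed column positions."""
    return {
        "satellite_number": int(line2[2:7]),
        "inclination_deg": float(line2[8:16]),
        "raan_deg": float(line2[17:25]),
        "eccentricity": float("0." + line2[26:33].strip()),  # implied decimal point
        "arg_perigee_deg": float(line2[34:42]),
        "mean_anomaly_deg": float(line2[43:51]),
        "mean_motion_rev_per_day": float(line2[52:63]),
        "rev_number_at_epoch": int(line2[63:68]),
    }

line2 = "2 27651 039.9951 132.2059 0025931 073.4582 286.9047 14.81909376225249"
print(parse_tle_line2(line2))
```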
- Beta Angle
- Geopotential model
- Orbital state vectors
- Proper orbital elements
- Osculating orbit
- For example, with VEC2TLE
- Green, Robin M. (1985). Spherical Astronomy. Cambridge University Press. ISBN 978-0-521-23988-2.
- Danby, J. M. A. (1962). Fundamentals of Celestial Mechanics. Willmann-Bell. ISBN 978-0-943396-20-0.
- Explanatory Supplement to the Astronomical Almanac. 1992. K. P. Seidelmann, Ed., University Science Books, Mill Valley, California.
- SORCE - orbit data at Heavens-Above.com
|Wikibooks has a book on the topic of: Astrodynamics/Classical Orbit Elements|
- Keplerian Elements tutorial
- Orbits Tutorial
- Spacetrack Report No. 3, a really serious treatment of orbital elements from NORAD (in pdf format)
- Celestrak Two-Line Elements FAQ
- The JPL HORIZONS online ephemeris. Also furnishes orbital elements for a large number of solar system objects.
- NASA Planetary Satellite Mean Orbital Parameters.
- Introduction to exporting JPL planetary and lunar ephemerides
- State vectors: VEC2TLE Access to VEC2TLE software | https://en.wikipedia.org/wiki/Orbital_elements |
4 | Plant Biology/Water Relations is one of a series of interactive web-based lessons designed to give introductory undergraduate biology students opportunities to connect biology concepts. Each lesson is a series of screens that breaks the topic down into simple steps and then illustrates the connections between the steps to present the completed concept or process. Students frequently have difficulty understanding why diffusion and osmosis occur and how water moves from the roots to the leaves in plants. This site explains the process in a step-by-step manner. It can be used as a supplement to the lecture to allow students to review the topic at their own pace and as many times as desired. This specific lesson topic covers the concept of water potential and how to solve the water potential equation. A very good help screen is provided to help students use the lessons. The larger site containing the entire series will be very useful at the introductory level.
Type of Material:
This site could be used in many ways: 1. As the basis of a classroom lecture presentation. 2. As an out-of-class assignment before the topic is covered in class. 3. As a study tool for students after the topic is presented in class.
Use of a current web browser will be required. Macromedia Flash Player 6 plug-in is required.
Identify Major Learning Goals:
The major goal of this lesson is to help students understand how water moves from the soil to the tops of plants. More specifically, it explains the water potential and how to solve the water potential equation. This lesson topic contains specific learning objectives.
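As a hypothetical illustration of the kind of calculation the lesson targets (this worked example is not taken from the reviewed site), water potential can be treated as the sum of the pressure potential and the solute potential, with the solute potential estimated from the van 't Hoff relation Ψs = -iCRT; the cell and solution values below are invented for the example.

```python
R = 0.00831  # liter * MPa / (mol * K), pressure form of the gas constant

def solute_potential(i, C, T_kelvin):
    """Solute (osmotic) potential in MPa: ionization constant i,
    molar concentration C in mol/L, temperature in kelvins."""
    return -i * C * R * T_kelvin

def water_potential(psi_s, psi_p):
    """Total water potential = solute potential + pressure potential."""
    return psi_s + psi_p

# A cell containing 0.3 M sucrose (i = 1) at 295 K with 0.2 MPa turgor pressure,
# bathed in an open beaker of 0.1 M sucrose solution (pressure potential 0):
psi_cell = water_potential(solute_potential(1, 0.3, 295), 0.2)
psi_solution = water_potential(solute_potential(1, 0.1, 295), 0.0)
print(psi_cell, psi_solution)   # about -0.54 MPa vs -0.25 MPa
# Water moves from higher to lower water potential, so here it enters the cell.
```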
Target Student Population:
High school (AP level) through college.
Prerequisite Knowledge or Skills:
Students will need to have a basic understanding of the kinetic gas theory as it applies to diffusion and osmosis.
Evaluation and Observation
A detailed step by step explanation of how water moves from the soil to a plant leaf.
Very clear animations illustrate both the math and cellular processes involved.
An excellent illustration of the power of osmosis and surface tension.
Site is highly interactive, with numerous answers to be filled in by the student and self-grading.
The exercise is quantitative: students have to use the water potential equation to determine what will happen in a variety of circumstances.
Questions asked throughout lessons help student get feedback on understanding of concepts/process.
Animations clearly connect the different parts of each process into a coherent whole.
Application of concepts in the third section also illustrates reconstruction of damaged ecosystems.
Not a concept covered in this detail in most introductory biology courses. May be best suited for a botany course. Parts could be used in an intro course.
The first unit on the equations used to explain the movement of water is a bit dry - essentially an on-line lecture.
Potential Effectiveness as a Teaching Tool
A thorough explanation of a concept that students find conceptually difficult.
Very interactive, with challenging questions at every step to ensure that students understand what is happening.
Could easily adapt this as an assignment, homework problem, or as practice for an exam.
Clearly demonstrates relationships between elements of each concept.
Quantitative aspects of the lessons are particularly good for students.
The site can be used in several ways: as a direct teaching tool in a distance learning course,
as a lecture outline, or as a review and study tool for students after the topic is covered in class.
Some of the questions asked will be challenging for introductory students.
Completion of plans for links to assessments and image/animation data bases will greatly enhance the usefulness of the site.
A bit technical at points in understanding and applying water potentials. Students can figure it out, but it will take them a while.
Ease of Use for Both Students and Faculty
All links work smoothly and quickly.
Animations are high quality.
Well organized and easy to navigate.
Instructions clear, especially when manipulating components and entering animations.
Instructor's manual available; summarizes the contents of each of the lesson topics.
Glossary available for selected terms.
An overall site map may help if students want to go back to the first unit and review the equations.
Other Issues and Comments:
This series of lessons has outstanding potential for use by faculty and students everywhere. The concepts are broken down to simple parts and then reassembled by an interactive process and animations into a whole. This lesson clearly explains water potential and how to use the water potential equation. | https://www.merlot.org/merlot/viewCompositeReview.htm?id=159858 |
4.34375 | The habitat of deep-water corals, also known as cold-water corals, extends to deeper, darker parts of the oceans than tropical corals, ranging from near the surface to the abyss, beyond 2,000 metres (6,600 ft) where water temperatures may be as cold as 4 °C (39 °F). Deep-water corals belong to the Phylum Cnidaria and are most often stony corals, but also include black and horny corals and soft corals including the Gorgonians (sea fans). Like tropical corals, they provide habitat to other species, but deep-water corals do not require zooxanthellae to survive.
While there are nearly as many species of deep-water corals as shallow-water species, only a few deep-water species develop traditional reefs. Instead, they form aggregations called patches, banks, bioherms, massifs, thickets or groves. These aggregations are often referred to as "reefs," but differ structurally and functionally. Deep sea reefs are sometimes referred to as "mounds," which more accurately describes the large calcium carbonate skeleton that is left behind as a reef grows and corals below die off, rather than the living habitat and refuge that deep sea corals provide for fish and invertebrates. Mounds may or may not contain living deep sea reefs.
Submarine communications cables and fishing methods such as bottom trawling tend to break corals apart and destroy reefs. The deep-water habitat is designated as a United Kingdom Biodiversity Action Plan habitat.
Discovery and study
Deep-water corals are enigmatic because they construct their reefs in deep, dark, cool waters at high latitudes, such as Norway's Continental Shelf. They were first discovered by fishermen about 250 years ago, which garnered interest from scientists. Early scientists were unsure how the reefs sustained life in the seemingly barren and dark conditions of the northerly latitudes. It was not until modern times, when manned mini-submarines first reached sufficient depth, that scientists began to understand these organisms. Pioneering work by Wilson (1979) shed light on a colony on the Porcupine Bank, off Ireland. The first ever live video of a large deep-water coral reef was obtained in July, 1982, when Statoil surveyed a 15 metres (49 ft) tall and 50 metres (160 ft) wide reef perched at 280 metres (920 ft) water depth near Fugløy Island, north of the Polar Circle, off northern Norway.
During their survey of the Fugløy reef, Hovland and Mortensen also found seabed pockmark craters near the reef. Since then, hundreds of large deep-water coral reefs have been mapped and studied. About 60 percent of the reefs occur next to or inside seabed pockmarks. Because these craters are formed by the expulsion of liquids and gases (including methane), several scientists hypothesize that there may be a link between the existence of the deep-water coral reefs and nutrient seepage (light hydrocarbons, such as methane, ethane, and propane) through the seafloor. This hypothesis is called the 'hydraulic theory' for deep-water coral reefs.
Lophelia communities support diverse marine life, such as sponges, polychaete worms, mollusks, crustaceans, brittle stars, starfish, sea urchins, bryozoans, sea spiders, fish and many other vertebrate and invertebrate species.
The first international symposium for deep-water corals took place in Halifax, Canada in 2000. The symposium considered all aspects of deep-water corals, including protection methods.
In June 2009, Living Oceans Society led the Finding Coral Expedition on Canada's Pacific coast in search of deep sea corals. Using one-person submarines, a team of international scientists made 30 dives to depths of over 500 metres (1,600 ft) and saw giant coral forests, darting schools of fish, and a seafloor carpeted in brittle stars. During the expedition, scientists identified 16 species of corals. This research trip was the culmination of five years of work to secure protection from the Canadian Government for these slow-growing and long-lived animals, which provide critical habitat for fish and other marine creatures.
Corals are animals in the Phylum Cnidaria and the class Anthozoa. Anthozoa is broken down into two subclasses: Octocorals (Alcyonaria) and Hexacorals (Zoantharia). Octocorals are soft corals such as sea pens. Hexacorals include sea anemones and hard-bodied corals. Octocorals contain eight body extensions while Hexacorals have six. Most deep-water corals are stony corals.
Deep-water corals are widely distributed within the earth’s oceans, with large reefs/beds in the far North and far South Atlantic, as well as in the tropics in places such as the Florida coast. In the north Atlantic, the principal coral species that contribute to reef formation are Lophelia pertusa, Oculina varicosa, Madrepora oculata, Desmophyllum cristagalli, Enallopsammia rostrata, Solenosmilia variabilis, and Goniocorella dumosa. Four genera (Lophelia, Desmophyllum, Solenosmilia, and Goniocorella) constitute most deep-water coral banks at depths of 400–700 metres (1,300–2,300 ft).
Madrepora oculata occurs as deep as 2,020 metres (6,630 ft) and is one of a dozen species that occur globally and in all oceans, including the Subantarctic (Cairns, 1982). Colonies of Enallopsammia contribute to the framework of deep-water coral banks found at depths of 600 to 800 metres (2,000–2,600 ft) in the Straits of Florida (Cairns and Stanley, 1982).
Lophelia pertusa distribution
The world's largest known deep-water Lophelia coral complex is the Røst Reef. It lies between 300 and 400 metres (980 and 1,310 ft) deep, west of Røst island in the Lofoten archipelago, in Norway, inside the Arctic Circle. Discovered during a routine survey in May 2002, the reef is still largely intact. It is approximately 40 kilometres (25 mi) long by 3 kilometres (1.9 mi) wide.
Some 500 kilometres (310 mi) further south is the Sula Reef, located on the Sula Ridge, west of Trondheim on the mid-Norwegian Shelf, at 200–300 metres (660–980 ft). It is 13 kilometres (8.1 mi) long, 700 metres (2,300 ft) wide, and up to 700 metres (2,300 ft) high, an area one-tenth the size of the 100 square kilometres (39 sq mi) Røst Reef.
Discovered and mapped in 2002, Norway's Tisler Reef lies in the Skagerrak on the submarine border between Norway and Sweden at a depth of 90–120 metres (300–390 ft) and covers an area of 2 by 0.2 kilometres (1.24 mi × 0.12 mi). It is estimated to be 8600–8700 years old. The Tisler Reef contains the world’s only known yellow L. pertusa. Elsewhere in the northeastern Atlantic, Lophelia is found around the Faroe Islands, an island group between the Norwegian Sea and the Northeast Atlantic Ocean. At depths from 200 to 500 metres (660 to 1,640 ft), L. pertusa is chiefly on the Rockall Bank and on the shelf break north and west of Scotland. The Porcupine Seabight, the southern end of the Rockall Bank, and the shelf to the northwest of Donegal all exhibit large, mound-like Lophelia structures. One of them, the Therese Mound, is particularly noted for its Lophelia pertusa and Madrepora oculata colonies. Lophelia reefs are also found along the U.S. East Coast at depths of 500–850 metres (1,640–2,790 ft) along the base of the Florida-Hatteras slope. South of Cape Lookout, NC, rising from the flat sea bed of the Blake Plateau, is a band of ridges capped with thickets of Lophelia. These are the northernmost East Coast Lophelia pertusa growths. The coral mounds and ridges here rise as much as 150 metres (490 ft) from the plateau plain. These Lophelia communities lie in unprotected areas of potential oil and gas exploration and cable-laying operations, rendering them vulnerable to future threats.
Lophelia exist around the Bay of Biscay, the Canary Islands, Portugal, Madeira, the Azores, and the western basin of the Mediterranean Sea.
Among the most researched deep-water coral areas in the United Kingdom are the Darwin Mounds. The Atlantic Frontier Environmental Network (AFEN) discovered them in 1998 while conducting large-scale regional sea floor surveys north of Scotland. They discovered two areas of hundreds of sand and deep-water coral mounds at depths of about 1,000 metres (3,300 ft) in the northeast corner of the Rockall Trough, approximately 185 kilometres (115 mi) northwest of the northwest tip of Scotland. Named after the research vessel Charles Darwin, the Darwin Mounds have been extensively mapped using low-frequency side-scan sonar. They cover an area of approximately 100 square kilometres (39 sq mi) and consist of two main fields: the Darwin Mounds East, with about 75 mounds, and the Darwin Mounds West, with about 150 mounds. Other mounds are scattered in adjacent areas. Each mound is about 100 metres (330 ft) in diameter and 5 metres (16 ft) high. Lophelia corals and coral rubble cover the mound tops, attracting other marine life. The mounds look like 'sand volcanoes', each with a 'tail' up to several hundred metres long, all oriented downstream. Large congregations of Xenophyophores (Syringammina fragilissima), giant unicellular organisms that can grow up to 25 centimetres (9.8 in) in diameter, characterize the tails and mounds. Scientists are uncertain why these organisms congregate here. The Darwin Mounds Lophelia grow on sand rather than hard substrate, which is unique to this area. Lophelia corals exist in Irish waters as well.
Oculina varicosa distribution
Oculina varicosa is a branching ivory coral that forms giant but slow-growing, bushy thickets on pinnacles up to 30 metres (98 ft) in height. The Oculina Banks, so named because they consist mostly of Oculina varicosa, exist in 50–100 metres (160–330 ft) of water along the continental shelf edge about 26–50 miles (42–80 km) off of Florida's central east coast.
Discovered in 1975 by scientists from the Harbor Branch Oceanographic Institution conducting surveys of the continental shelf, Oculina thickets grow on a series of pinnacles and ridges extending from Fort Pierce to Daytona, Florida. Like the Lophelia thickets, the Oculina Banks host a wide array of macroinvertebrates and fishes. They are significant spawning grounds for commercially important food species including gag, scamp, red grouper, speckled hind, black sea bass, red porgy, rock shrimp, and calico scallop.
Growth and reproduction
Most corals must attach to a hard surface in order to begin growing, but sea fans can also live on soft sediments. They are often found growing on hard surfaces along bathymetric highs such as seamounts, ridges, pinnacles and mounds. Corals are sedentary, so they must live near nutrient-rich water currents. Deep-water corals feed on zooplankton and rely on ocean currents to bring food. The currents also aid in cleaning the corals.
Deep-water corals grow more slowly than tropical corals because there are no zooxanthellae to feed them. Lophelia has a linear polyp extension of about 10 millimetres (0.39 in) per year. By contrast, branching shallow-water corals, such as Acropora, may exceed 10–20 cm/yr. Reef structure growth estimates are about 1 millimetre (0.039 in) per year. Scientists have also found Lophelia colonies on oil installations in the North Sea. Using coral age-dating methods, scientists have estimated that some living deep-water corals date back at least 10,000 years.
Coral can reproduce sexually or asexually. In asexual reproduction (budding) a polyp divides in two genetically identical pieces. Sexual reproduction requires that a sperm fertilize an egg which grows into larva. Currents then disperse the larvae. Growth begins when the larvae attach to a solid substrate. Old/dead coral provides an excellent substrate for this growth, creating ever higher mounds of coral. As new growth surrounds the original, the new coral intercepts both water flow and accompanying nutrients, weakening and eventually killing the older organisms.
Individual Lophelia pertusa colonies are entirely either female or male.
Deep-water coral colonies range in size from small and solitary to large, branching tree-like structures. Larger colonies support many life forms, while nearby areas have much less. The gorgonian, Paragorgia arborea, may grow beyond three meters. However, little is known of their basic biology, including how they feed or their methods and timing of reproduction.
Deep sea corals, together with other habitat-forming organisms, host a rich fauna of associated organisms. Lophelia reefs can host up to 1,300 species of fish and invertebrates. Various fish aggregate on deep sea reefs. Deep sea corals, sponges and other habitat-forming animals provide protection from currents and predators, nurseries for young fish, and feeding, breeding and spawning areas for numerous fish and shellfish species. Rockfish, Atka mackerel, walleye pollock, Pacific cod, Pacific halibut, sablefish, flatfish, crabs, and other economically important species in the North Pacific inhabit these areas. Eighty-three percent of the rockfish found in one study were associated with red tree coral. Flatfish, walleye pollock and Pacific cod appear to be more commonly caught around soft corals. Dense schools of female redfish heavy with young have been observed on Lophelia reefs off Norway, suggesting the reefs are breeding or nursery areas for some species. Oculina reefs are important spawning habitat for several grouper species, as well as other fishes.
The primary human impact on deep-water corals is from deep-water trawling. Trawlers drag nets across the ocean floor, disturbing sediments, breaking and destroying deep-water corals. Another harmful method is long line fishing.
Oil and gas exploration also damages deep-water corals.
Deep-water corals grow slowly, so recovery takes much longer than in shallow waters where nutrients and food-providing zooxanthellae are far more abundant.
A study conducted from 2001 to 2003 of a Lophelia pertusa reef in the Atlantic off Canada found that the corals were often broken in unnatural ways, and that the ocean floor displayed scars and overturned boulders from trawling.
In addition to these managed pressures, deep-water coral reefs are also vulnerable to unmanaged pressures (e.g. ocean acidification), and in order to protect these habitats in the long term, methods that assess the relative risks of different pressures are being promoted.
Bottom trawling and natural causes like bioerosion and episodic die-offs have reduced much of Florida's Oculina Banks to rubble, drastically reducing a once-substantial fishery by destroying spawning grounds.
In 1980, Harbor Branch Oceanographic Institution scientists called for protective measures. In 1984, the South Atlantic Fishery Management Council (SAFMC) designated a 315 square kilometres (122 sq mi) area as a Habitat Area of Particular Concern. In 1994, an area called the Experimental Oculina Research Reserve was completely closed to bottom fishing. In 1996, the SAFMC prohibited fishing vessels from dropping anchors, grapples, or attached chains there. In 1998, the council also designated the reserve as an Essential Fish Habitat. In 2000, the deep-water Oculina Marine Protected Area was extended to 1,029 square kilometres (397 sq mi). Scientists recently deployed concrete reef balls in an attempt to provide habitat for fish and coral.
Sula and Røst
Scientists estimate that trawling has damaged or destroyed 30 to 50 percent of the Norwegian shelf coral area. The International Council for the Exploration of the Sea, the European Commission's main scientific advisor on fisheries and environmental issues in the northeast Atlantic, recommends mapping Europe's deep-water corals and closing them to fishing trawlers.
In 1999, the Norwegian Ministry of Fisheries closed an area of 1,000 square kilometres (390 sq mi) at Sula, including the large reef, to bottom trawling. In 2000, an additional area closed, covering about 600 square kilometres (230 sq mi). An area of about 300 square kilometres (120 sq mi) enclosing the Røst Reef, closed in 2002.
- "Deep Water Corals". Retrieved August 2009.
- Tasker, M. (2007). "Action plan for Lophelia pertusa reefs". United Kingdom Biodiversity Action Plan. Joint Nature Conservation Committee. Retrieved 2009-08-06.
- Gunnerus, Johan Ernst (1768). Om Nogle Norske Coraller.
- Wilson, J.B. (1979). "Biogenic carbonate sediments on the Scottish continental shelf and on Rockall Bank". Marine Geology (33): M85–M93. doi:10.1016/0025-3227(79)90076-8.
- Hovland, Martin (2008). Deep-water coral reefs: Unique Biodiversity hotspots. Chichester, UK: Praxis Publishing (Springer). p. 278.
- Mortensen, P.B., Hovland, M.T., Fosså, J.H. and Furevik, D.M. (2001). "Distribution, abundance and size of Lophelia pertusa coral reefs in mid-Norway in relation to seabed characteristics". Journal Marine Biological Association U.K. 81 (4): 581–597. doi:10.1017/S002531540100426X.
- LEWIS H KING and BRIAN MacLEAN (October 1970). "Pockmarks on the Scotian Shelf". GSA Bulletin (Geological Society of America) 81 (10): 3141–3148. doi:10.1130/0016-7606(1970)81[3141:POTSS]2.0.CO;2. ISSN 0016-7606.
- Judd, A. and Hovland, M. (2007). Seabed Fluid Flow. Impact on Geology, Biology, and the Marine Environment. Cambridge University Press.
- Hovland, M. and Thomsen, E. (1997). "Deep-water corals - are they hydrocarbon seep related?". Marine Geology 137: 159–164. doi:10.1016/s0025-3227(96)00086-2.
- Hovland and Risk, 2003
- "Finding Coral Expedition". Living Oceans.
- McKenna, S.A., Lash, J., Morgan, L., Reuscher, M., Shirley, T., Workman, G., Driscoll, J., Robb, C., Hangaard, D. (2009). "Cruise Report for the Finding Coral Expedition" (PDF).
- Cairns, S. and G. Stanley (1982). "Ahermatypic coral banks: Living and fossil counterparts". Proceedings of the Fourth International Coral Reef Symposium, Manila (1981) 1: 611–618.
- Bell, N. and J. Smith (December 1999). "Coral growing on North Sea oil rigs". Nature 402 (6762): 601–2. doi:10.1038/45127. PMID 10604464.
- Bellona Foundation (2001). "Coral reefs in Norwegian Waters".
- Guihen, D., White, M., and Lundälv, T. (2012). Temperature shocks and ecological implications at a cold-water coral reef. Marine Biodiversity Records 5: 1-10.
- Wisshak, M. and Ruggeberg, A. (2006). Colonisation and bioerosion of experimental substrates by benthic foraminiferans from euphotic to aphotic depths (Kosterfjord, SW Sweden). Facies 52: 1–17.
- Tyler-Walters, H. (2003). "Lophelia reefs". Plymouth, England: Marine Life Information Network: Biology and Sensitivity Key Information Sub-programme.
- Sulak, K. and S. Ross (2001). "A profile of the Lophelia reefs".
- Fosså, Jan Helge. "Coral reefs in the North Atlantic?". Retrieved September 18, 2009.
- Rogers, A.D. (1999). "The biology of Lophelia pertusa (Linnaeus 1758) and other deep-water reef-forming corals and impacts from human activities". International Review of Hydrobiology 84: 315–406. doi:10.1002/iroh.199900032.
- Avent, R.M., King, M.E. and Gore, R.M. (1977). "Topographic and faunal studies of shelf-edge prominences off the central eastern Florida coast". Revue ges.Hydrobiol 62: 185–208. doi:10.1002/iroh.1977.3510620201.
- Reed, J.K. (1981). W.J. Richards, ed. "In situ growth rates of the scleractinian coral Oculina varicosa occurring with zooxanthellae on 6-m reefs and without on 80-m banks". Proceedings of Marine Recreational Fisheries Symposium: 201–206.
- Reed, J.K. (2002). "Comparison of deep-water coral reefs and lithoherms off southeastern U.S.A". Hydrobiologia 471: 43–55. doi:10.1023/A:1016588901551.
- C. Koenig, J. Reid, K. Scanlon, F. Coleman. "Studies in the Experimental Oculina Research Reserve off the Atlantic Coast of Florida". Retrieved September 18, 2009.
- Fossa, J.H., P.B. Mortensen, and D.M. Furevic (2002). "The deep-water coral Lophelia pertusa in Norwegian waters: distribution and fishery impacts". Hydrobiologia 417: 1–12. doi:10.1023/a:1016504430684.
- Mayer, T. (2001). 2000 Years Under the Sea.
- Watling, L. (2001). "Deep sea coral".
- Buhl-Mortensen, Lene; Vanreusel, Ann; Gooday, Andrew J.; Levin, Lisa A.; Priede, Imants G.; Buhl-Mortensen, Pål; Gheerardyn, Hendrik; King, Nicola J.; Raes, Maarten (2010). "Biological structures as a source of habitat heterogeneity and biodiversity on the deep ocean margins". Marine Ecology 31: 21–50. doi:10.1111/j.1439-0485.2010.00359.x.
- E. L. Jackson. "Future-proofing marine protected area networks for cold water coral reefs". oxfordjournals.org.
- Wikimedia Commons has media related to Deep sea coral.
- Deep-sea Corals overview on the Smithsonian Ocean Portal
- Lophelia.org, a website devoted to the cold-water coral habitats from Heriot-Watt University in Edinburgh, Scotland
- Deep Sea Corals: Out of Sight, But No Longer Out of Mind report on deep sea corals around the world from Oceana
- Deep-Sea Corals at the NOAA Habitat Conservation Program
- Deep-sea Corals at the Woods Hole Oceanographic Institution | https://en.wikipedia.org/wiki/Deep_water_coral |
4.125 | EDU501001VA016-1128-001 Learning Theories (K-12)
Instructor: Kelly Walton
November 3, 2012
Describe a learning outcome and a radical behaviorist approach to achieving that outcome
“Learning outcomes are statements that specify what learners will know or be able to do as a result of a learning activity. Outcomes are usually expressed as knowledge, skills, or attitudes. Learning outcomes should flow from a needs assessment. The needs assessment should determine the gap between an existing condition and a desired condition. Learning outcomes are statements which described a desired condition – that is, the knowledge, skills, or attitudes needed to fulfill the need.” (ARCHIVED: Writing Learning Outcomes) Upon completing this assignment, students will be able to provide accurate supplies/materials that tell the story of Genesis 6:12-20 through a Noah’s Ark ship-building presentation. This learning outcome can be achieved through a radical behaviorist approach. “The behaviorist view in terms of teaching includes highly-structured lesson plans and is essentially teacher led. Learning comes in the form of operant and classical conditioning, which are both heavily weighted on praise, punishment and consequences. The first stage of any teaching is imitation, so the teacher is very much a role model and didacticism is sometimes applied.” (Psycho4Stats, 2012) In an effort to achieve the desired outcome, the teacher will first have students leisurely read Genesis 6:12-20 in the classroom as a class at selected times. The following day, students will be asked to participate in a question-and-answer session, where each student must have 2 questions prepared for the teacher and/or fellow students to answer; the anticipated time for this should not exceed two class periods. The students will then be given a worksheet to take home and work on independently. The teacher will then give a “pop” quiz to test the students’ knowledge. This will be graded and returned to the students for their review. One major aspect of teaching this lesson is that the students will watch a two-hour movie/documentary on Genesis 6:12-20. The students will be given a notepad, highlighter, and pens/pencils to take notes on any information they feel will be beneficial to the Noah’s Ark presentation, but each student must record no fewer than 20 facts from the movie. The teacher will make the objectives, along with the rules, very clear and easy to understand. There will be posted deadlines, assignments to be completed, and the possible points. There will also be a list of penalties for late submission as well as incomplete assignments. As the teaching continues, students will ask questions, be given directions, and be kept on task by a project syllabus. “In the context of learning, the Behaviourist model for learning is teacher-directed, pedagogic and concrete. It is all about “do as I say.” This involves the lower levels of Bloom’s Taxonomy. The more gifted learner who is at the top of the learning pyramid might not benefit from a Behaviourist-dominated lesson.” (Shirley, 2009) Most of the students will be able to adapt to this theory of learning and perform at or above level throughout the duration of the lesson.

Critique (explain what you would change and why) the radical behaviorist approach in light of cognitive information processing theory.
“Radical Behaviorism is similar to Cognitive Information Processing in that both theories separate learning from the will of the learner. Both assume an external locus of control. Radical Behaviorism attributes learning and change to environmental influence. Cognitive Information Processing agrees that all knowledge is acquired through sensory experience, which is directly affected by one’s environment.” (Comparative Organizer: Learning Theories... | http://www.studymode.com/essays/Radical-Behaviorists-1199420.html |
4.15625 | Asthma is a chronic disease that involves inflammation of airways in the lungs. There is no cure, but it can be treated and controlled. During an asthma attack three basic things occur in the lungs - the inside of the airways swells, airway muscle spasms occur, and mucus builds up. Asthma affects all age groups from infants to senior adults. There are many different theories on why asthma occurs, but no definite explanation.
Asthma is characterized by excessive sensitivity of the lungs to various stimuli. Asthma episodes can be triggered by a variety of factors including pollen, cigarette smoke, mold, animals, cockroaches, dust, weather, exercise, respiratory infections and colds, emotions, strong smells, etc. Over 80% of those with asthma also have allergies. Each person reacts differently to the factors that may trigger asthma.
Common symptoms include:
Treatment for asthma includes avoiding the factors that can start an attack and taking medication to control the inflammation in the airways. The treatment will depend on the severity and frequency of the symptoms. To deal with childhood asthma, the doctor may prescribe two types of medicines:
Approximately 18% of the students in the Independence School District have asthma, and it is the most common reason children miss school. Symptoms vary per person, but some common symptoms include a cough with or without mucus, wheezing, difficulty breathing, and/or chest tightness. If you are experiencing any of these symptoms you should see your physician.
If you would like more information about asthma, contact Shawnna Jackson at (816) 325-7188 or [email protected].
This website was made possible through funding from the Health Care Foundation of Greater Kansas City.
The Independence Health Department takes an active role in working with children with asthma. Children in the Independence School District are presented the American Lung Association's Open Airways program. This program helps the students understand their asthma and learn to take charge of their asthma. Classes are also presented to adults, bus drivers, childcare workers, teachers, after-school workers and caretakers.
The American Lung Association’s Open Airways For Schools is a school-based curriculum that educates and empowers children through a fun and interactive approach to asthma self-management. It teaches children with asthma ages 8-11 how to detect the warning signs of asthma, avoid their triggers and make decisions about their health. Children who complete the Open Airways for Schools program should be able to:
Watch this video from an Open Airways For Schools facilitator on why the program is so important to young asthma sufferers.
Open Airways For Schools was developed over a decade ago by researchers at Columbia University, in collaboration with the American Lung Association. The decision was made to design the program for delivery in schools because that is the surest way to reach all children, regardless of their family situation or access to health care. Children who completed the program took more steps to manage their asthma, improved their school performance, and had fewer and less severe asthma episodes. Parents of children participating in Open Airways For Schools reported taking more steps to help manage their children’s asthma. And the school environment became more supportive: children without asthma were more willing to help children with asthma, and children with asthma were able to give support to one another.
The Open Airways For Schools curriculum consists of six 40-minute group lessons for children with asthma held during the school day. The curriculum incorporates an interactive teaching approach – using group discussion, stories, games and role play – to promote students’ active involvement in the learning process. Topics covered include basic information about asthma, recognizing and managing asthma symptoms, using medication, avoiding asthma triggers, getting enough exercise, and doing well at school.
Open Airways For Schools classes are led by trained instructors, who might be the school nurse or other school personnel, parents, community volunteers, or anyone with an asthma background that has an interest in working with children.
The Open Airways For Schools classroom kits contain easy-to-use teaching materials including a detailed curriculum guide, posters and activity handouts. Each lesson also includes materials for the children to take home and share with their parents. All curriculum materials are available in English and Spanish.
Asthma is one of the most common chronic disorders in childhood, currently affecting an estimated 7.1 million children under 18 years; of these, 4.1 million suffered from an asthma attack or episode in 2011. Asthma is a reversible obstructive lung disease, caused by increased reaction of the airways to various stimuli. It is a chronic inflammatory condition with acute exacerbations. Determining whether a child has asthma can be difficult.
Secondhand smoke can cause serious harm to children. An estimated 400,000 to one million children with asthma have their condition worsened by exposure to secondhand smoke.
Many babies who wheeze with viral respiratory illnesses will stop wheezing as they grow older. If your child has atopic dermatitis (eczema), allergies or if there is smoking in the home or a strong family history of allergies or asthma, there is a greater chance that asthma symptoms will persist.
Not yet. However, for most children and adults, asthma can be controlled throughout life with appropriate diagnosis, education and treatment.
Once a child's asthma is controlled, (usually with the help of proper medications) exercise should become part of his or her daily activities. Children with asthma certainly can and do excel in athletics. Many Olympic athletes have asthma.
You, your family, physician and school personnel can work together to prevent and/or control asthma. Share your child's asthma management plan with the school nurse and any coaches who oversee your child. With the approval of physicians and parents, school-age children with asthma should be allowed to carry metered-dose inhalers with them and use them as appropriate.
The most important part of managing asthma is for you and your child to be very knowledgeable about how and when asthma causes problems and how to use medications.
Millions of teenagers around the world have asthma – you’re not alone. You know best how you feel about having asthma. These tips may help you be in control of your asthma and can make managing asthma as a teen a bit easier:
Many people develop asthma in childhood. However, asthma symptoms can appear at any time in life. Individuals who develop asthma as adults are said to have adult onset asthma. It is possible to first develop asthma at age 50, 60 or even later in life.
Adult onset asthma may or may not be caused by allergies. Some individuals who had allergies as children or young adults with no asthma symptoms could develop asthma as older adults. Other times, adults become sensitized to everyday substances found in their homes or food and suddenly begin to experience asthma symptoms. About 50 percent of older adults who have asthma are allergic.
We do not know what causes asthma. There is evidence that asthma and allergy are in part determined by heredity.
Several factors may make a person more likely to get adult onset asthma. Women are more likely to develop asthma after age 20. For others, obesity appears to significantly increase the risk of developing asthma as an adult. At least 30 percent of adult asthma cases are triggered by allergies.
Different illnesses, viruses or infections can be a factor in adult onset asthma. Many adults first experience asthma symptoms after a bad cold or a bout with the flu.
Adult onset asthma is not caused by smoking. However, if you smoke or are exposed to cigarette smoke (secondhand smoke), it may provoke asthma symptoms.
Asthma symptoms can include:
There are four key steps to successfully managing asthma:
If your asthma symptoms are caused by allergies, take steps to control known or potential triggers in your environment. Allergy-proof your house for dust, mold, cockroaches and other common indoor allergens to which you are allergic. Reduce your outdoor activities when pollen counts or ozone levels are high. Choose foods that don't contribute to your asthma or allergy symptoms. Evaluate your workplace for possible allergens and take the necessary steps to reduce your exposure to them.
Asthma is usually diagnosed in childhood. In many patients, however, the symptoms will disappear or be significantly reduced after puberty. Around age 20, symptoms may begin to reappear. Researchers have tracked this tendency for reappearing asthma and found that people with childhood asthma tend to experience reappearing symptoms through their 30s and 40s at various levels of severity. Regardless of whether your asthma is active, continue to avoid your known triggers and keep your rescue medications or prescriptions up-to-date and handy in case you need them.
Many adults take several medications and/or use over-the counter medications, such as ibuprofen or aspirin, regularly. Work with your doctor to simplify your medication program as much as possible. Explore the possibility of combining medications or using alternate ones that will have the same desired effect. Be sure to discuss potential drug interactions with anything you take, including vitamins.
Some asthma medications increase heart rate. If you have a heart condition, discuss those side effects with your health care provider. Older "first generation" antihistamines can cause men with enlarged prostates to retain urine. Oral steroids can make symptoms of glaucoma, cataracts and osteoporosis worse.
Adults with arthritis may need special inhalers that are easier to operate. Anyone with asthma should consider getting an annual flu shot. Older adults also should talk with their doctor about getting a pneumonia vaccination. People with multiple medical conditions need to be aware of how their illnesses may affect one another.
You probably think about your child's asthma every day. When you care for a child with asthma, it's important to know what his or her asthma triggers are and to work with your child's health care provider to create an Asthma Action Plan so you will know what to do if your child's asthma symptoms worsen. It's also important to make sure that you know about your child's medication and how to use the devices prescribed or recommended by your child's provider. Be sure to stay informed, and work closely with your child's health care provider to help control your child's asthma symptoms and work toward his or her asthma management goals.
Five important things you can do to help your child manage his or her asthma
If your child is old enough to take part in his or her care, it's important to help your child understand:
Watch the videos in the following link to learn how to properly use an inhaler to get the most out of your child's medicine: http://www.cdc.gov/asthma/inhaler_video/default.htm | http://www.ci.independence.mo.us/Health/MCH-Asthma |
4.1875 | The 3 Little Wolves and the Big Bad Pig: Teaching Opposites
What would you do if a big, bad pig tried to blow down your little brick house?
Run away as the bricks tumbled, just like the three little wolves did in Eugene Trivizas’ story The Three Little Wolves and the Big Bad Pig. As strange and as entertaining as it sounds, this book might be just what you need to liven class up the next time you teach opposites! Here’s how you can use it in your ESL class.
How to Teach Opposites
The Three Little Pigs
Do your students know the story of the three little pigs? As a class, allow students to share anything they already know about the story and retell it if they already know it. If no one knows the story already, ask them what they think might happen based on the title. Once your students have offered some ideas, read the story to them. Ask your students to notice any words that describe the pigs and the wolf as you read. After you finish the story, work with your class to make a list of these descriptive words on the board.
To make sure your students have the story clearly in their minds, ask your students to retell the story in their own words. If your students would like, allow them to illustrate their stories. You might want to let students type up their retellings and illustrate them on the computer. You can print them out and display them on a wall of your classroom.
Next, explain to your class that you are going to talk about antonyms or opposites. Give them several examples of antonym pairs. Take one pair, big and little for example, and write them on opposite ends of the board. Now draw a symbol at each end, one big and one little. Show your students that antonyms are words at opposite ends of a spectrum. Draw several of the same symbol along the spectrum getting increasingly big or little. Point out to your students that the antonyms are the words farthest from one another. As a class, brainstorm as many antonym pairs as you can think of. When you are finished, you may want to have your students illustrate one or more of the other antonym pairs you listed on their own spectrums.
The Big, Bad Pig
Now that your students know the traditional tale and are familiar with antonyms, it is time for the fractured version. Read Trivizas’ The Three Little Wolves and the Big Bad Pig to your class. Ask them to listen for two things as you read. First, challenge them to note any differences between this story and the original version. Second, ask them to note any descriptive words used for the wolves and the pig.
Compare and Contrast
Explain to your students that a Venn diagram is a way to look at the similarities and differences between two things. Show your students how to create a Venn diagram by drawing two overlapping circles on the board. Label one circle “3 little pigs” and the other “3 little wolves”. Ask your students to write the similarities between the two stories in the overlapping section. Then ask them to write the parts unique to each story in its circle.
What Opposites Can You Find?
Looking at the lists of descriptive words, can your students find any opposite pairs among them? Give groups of two to three students some time to work together to find opposites in and between the two stories. You will want to have copies of each text for each group of students. If students are unable to find a pair of opposites for the descriptive words within the text, ask them to think of word that would be the opposite to the ones that were used.
Now that your students have seen and worked with the opposite version of the three little pigs, challenge your students to write their own fractured fairytales! Supply groups of three to five students with some traditional children’s tales. Ask each group to choose one traditional tale and to plan a skit that tells an opposite story. They should write their skit as they prepare. Reassure them that not every element in their skits will be opposite of the original, just as Trivizas’ version of the three little pigs was not a complete opposite. Each skit should, however, have at least one major opposite from its original version. After the groups have planned their skits, have them perform for the rest of the class.
Play day may be a good occasion to have opposite day in your class and celebrate the idea of antonyms.
Do your classes in reverse order! Face your desks to the opposite wall! Read a book from the last page forward or do any of a number of opposite things! Your kids will have fun and they will really understand the concept of opposites!
Want more teaching tips like this?
Get the Entire BusyTeacher Library
Instant download. Includes all 80 of our e-books, with thousands of practical activities and tips for your lessons. This collection can turn you into a pro at teaching English in a variety of areas, if you read and use it. | http://busyteacher.org/11911-teaching-opposites-3-little-wolves-big-bad-pig.html |
4.09375 | Complete the story.
Whose messy room is this? Your child can help clean up this messy room while learning a few position words in the process.
Introduce your youngster to this Japanese version of Pin the Tail on the Donkey.
Forget store-bought dyes this Easter: color your Easter eggs with natural dyes made with ingredients from your own kitchen in this 1st grade activity.
Does your child need a head start in creative writing? Give her this story starter worksheet, and she'll help Marnie figure out how to bake her muffins!
Do fish drive around from place to place? Have a fun storytelling adventure with your child and create an imaginative story to go with this picture.
This reading exercise uses interactive story writing; it's a great way to look at reading comprehension from a different angle.
Story webs organize a story into one main idea and several details. Help your second grader read the story, then analyze it using the story web.
This story starter will help your child learn about story structure and plot. She'll read the beginning and the end, filling in the parts that are missing. | http://www.education.com/collection/irinavlad/complete-story/ |
4.4375 | Expression (computer science)
An expression in a programming language is a combination of one or more explicit values, constants, variables, operators, and functions that the programming language interprets (according to its particular rules of precedence and of association) and computes to produce ("to return", in a stateful environment) another value. This process, as for mathematical expressions, is called evaluation. The returned value can be of various types, such as numerical, string, and logical.
2+3 is an arithmetic and programming expression which evaluates to 5. A variable is an expression because it denotes a value in memory, so y+6 is an expression. An example of a relational expression is 4≠4, which evaluates to false.
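As a concrete illustration, here is a minimal C sketch (the variable names are chosen only for this example) that evaluates each of the three kinds of expressions just mentioned; in C the relational expression 4 != 4 evaluates to 0 (false):

    #include <stdio.h>

    int main(void) {
        int y = 7;
        int sum = 2 + 3;     /* arithmetic expression: evaluates to 5 */
        int plus6 = y + 6;   /* a variable is an expression, so y + 6 is one too */
        int rel = (4 != 4);  /* relational expression: evaluates to 0 (false) */
        printf("%d %d %d\n", sum, plus6, rel);  /* prints: 5 13 0 */
        return 0;
    }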
In C and most C-derived languages, a call to a function with a void return type is a valid expression, of type void. Values of type void cannot be used, so the value of such an expression is always thrown away.
In many programming languages a function, and hence an expression containing a function, may have side effects. An expression with side effects does not normally have the property of referential transparency. In many languages (e.g. C++), expressions may be ended with a semicolon (;) to turn the expression into an expression statement. This asks the implementation to evaluate the expression for its side-effects only and to disregard the result of the expression (e.g. "x+1;") unless it is a part of an expression statement that induces side-effects (e.g. "y=x+1;" or "func1(func2());").
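A small C sketch may help tie these points together; it shows an expression statement whose value is discarded, an assignment expression used for its side effect, and a call to a void function that is itself an expression whose value cannot be used (the function name is made up for the example):

    #include <stdio.h>

    /* A call to this function is an expression of type void: its value
       cannot be used, so only its side effect (the printout) matters. */
    void announce(void) {
        printf("hello\n");
    }

    int main(void) {
        int x = 1, y = 0;
        x + 1;        /* expression statement: evaluated, result thrown away */
        y = x + 1;    /* assignment expression with a side effect: y becomes 2 */
        announce();   /* void-valued expression used purely for its effect */
        printf("%d\n", y);  /* prints: 2 */
        return 0;
    }

Many compilers will warn that the bare x + 1; statement has no effect, which is exactly the point about evaluating an expression only to disregard its result.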
- Statement (computer science) (contrast)
- Boolean expression
- Expression (mathematics)
- Evaluation strategy | https://en.wikipedia.org/wiki/Expression_(programming) |
4.125 | Most viruses (e.g. influenza and many animal viruses) have viral envelopes covering their protective protein capsids. The envelopes are typically derived from portions of the host cell membranes (phospholipids and proteins), but include some viral glycoproteins. They may help viruses avoid the host immune system. Glycoproteins on the surface of the envelope serve to identify and bind to receptor sites on the host's membrane. The viral envelope then fuses with the host's membrane, allowing the capsid and viral genome to enter and infect the host.
The cell from which the virus itself buds will often die or be weakened and shed more viral particles for an extended period. The lipid bilayer envelope of these viruses is relatively sensitive to desiccation, heat, and detergents, therefore these viruses are easier to sterilize than non-enveloped viruses, have limited survival outside host environments, and typically must transfer directly from host to host. Enveloped viruses possess great adaptability and can change in a short time in order to evade the immune system. Enveloped viruses can cause persistent infections. | https://en.wikipedia.org/wiki/Viral_envelope |
4.125 | Read the story or poem or watch the video. See links above.
Introduction: All the children sit in a circle. The teacher asks them what a troll looks like. Get the children to express their thoughts and ideas freely.
Role on the wall: Give an outline of an image and ask the children to write inside the image the different characteristics or personality traits of a troll. If they are too young to write, get them to draw inside the image.
Group work: Divide the class in to smaller groups of 5 or 6 children. Each group works together to create the troll with their bodies.
Suggestions: One of them could be the head; the others could be the body or the legs. The troll could have two heads, 10 legs, four hands, etc. Each group's troll should be different.
Then ask each group to move around the room as the troll. The group should stay connected as they walk. Once they have mastered the movement they can make sound.
Still Image: Get each group to make a still image of the troll. He should look as fierce and as scary as possible.
Teacher in Role: The teacher assumes the role of the troll. She can do this by changing her voice, using a prop, or putting on a costume. She sits on a seat, which becomes the hot seat.
Hot Seating: Each child in the class asks the troll a question.
Suggestions: Why does the troll live by himself?
Where is his family?
Why does he not like the Billy Goats?
Does he not have any friends?
Why does he live under a bridge?
Voice Production (Pitch and Power): Divide the class into groups of three. They each must assume the role of one of the Billy Goats. They should experiment with the pitch and power of each of the billy goats.
The smallest goat should have a soft and high-pitched voice.
The middle size goat should have a medium volume and medium-pitched voice.
The biggest goat should have a loud and low-pitched voice.
Give each group time to find their voices.
Choral speaking: Get each group to practice saying the following together:
- “Please, Mr Troll, may we cross the bridge so we can graze on the green grassy ridge.”
Get them to say it first as the smallest goat, then the middle sized goat and then finally the biggest goat.
Thought tracking: The teacher tells each group they are going to cross the bridge. She taps each goat on the shoulder and they must say how they feel about crossing the bridge and confronting the troll. The teacher can extend this by asking each goat what they will say to the troll.
Conscience Alley: The class forms two lines facing each other. The line on the left must think of reasons why the troll should eat the Billy Goats. The line on the right should think of reasons why the troll shouldn’t eat the Billy Goats.
TIR – the teacher walks down the centre of the line as the troll and listens to each reason carefully.
Improvisation: Divide the class into pairs. One child is the biggest billy goat and the other is the troll. They must come up with alternative ending. The goat doesn’t throw the troll into the river. They can act out an alternative and most positive ending. | http://drama-in-ecce.com/category/drama-activities-for-children/ |
4 | Edmond Halley, the English astronomer who computed the orbit and named the famous Halley’s Comet, was born on this date in 1656. Today, in honor of Halley’s birthday, a special logo appears on Google’s UK home page.
Halley was the second Astronomer Royal and contributed to the scientific knowledge of the day in a number of ways.
He convinced Sir Isaac Newton to publish Philosophiæ Naturalis Principia Mathematica (1687), which was published at Halley's expense. He also created one of the first working models of a magnetic compass, and his mathematics skills helped develop actuarial tables for life insurance.
He also commanded the first scientific voyage by a British vessel, which tested the compass and advanced his study of terrestrial magnetism.
“In 1705, applying historical astronomy methods, he published Synopsis Astronomia Cometicae, which stated his belief that the comet sightings of 1456, 1531, 1607, and 1682 related to the same comet, which he predicted would return in 1758. Halley did not live to witness the comet’s return, but when it did, the comet became generally known as Halley’s Comet,” Wikipedia stated.
Halley’s comet is expected back in 2061 and was last seen in 1986. | https://searchenginewatch.com/sew/news/2123603/halleys-comet-appears-uk-google-logo |
4.3125 | Lonnie Bunch, museum director, historian, lecturer, and author, is proud to present A Page from Our American Story, a regular on-line series for Museum supporters. It will showcase individuals and events in the African American experience, placing these stories in the context of a larger story — our American story.
A Page From Our American Story
On March 6, 1857, in the case of Dred Scott v. John Sanford, United States Supreme Court Chief Justice Roger B. Taney ruled that African Americans were not and could not be citizens. Taney wrote that the Founders’ words in the Declaration of Independence, “all men were created equal,” were never intended to apply to blacks. Blacks could not vote, travel, or even fall in love and marry of their own free will — rights granted, according to the Declaration, by God to all. It was the culmination of ten years of court battles — Dred Scott’s fight to live and be recognized as a free man.
The High Court’s decision went even further, declaring laws that restricted slavery in new states or sought to keep a balance between free and slave states, such as the Missouri Compromise, were unconstitutional. In essence, Black Americans, regardless of where they lived, were believed to be nothing more than commodities.
The Taney court was dominated by pro-slavery judges from the South. Of the nine, seven judges had been appointed by pro-slavery Presidents — five, in fact, came from slave-holding families. The decision was viewed by many as a victory for the Southern “Slavocracy,” and a symbol of the power the South had over the highest court.
The dramatic ripple effect of Dred Scott — a ruling historians widely agree was one of the worst racially-based decisions ever handed down by the United States Supreme Court — reached across the states and territories. It sent shivers through the North and the free African-American community. Technically, no black was free of re-enslavement.
Free Blacks, many of whom had been in Northern states for years, once again lived in fear of being hunted down and taken back to the South in servitude. Southern slave laws allowed marshals to travel north in search of escaped slaves. The ruling was such a concern to Free Blacks, that many seriously considered leaving the United States for Canada or Liberia.
The decision played a role in propelling Abraham Lincoln — an outspoken anti-slavery voice — into the White House. The slavery issue had already created a turbulent, volatile atmosphere throughout the nation. Dred Scott, like kerosene tossed onto a simmering fire, played a significant role in igniting the Civil War. The North became ready to combat what it viewed as the South’s disproportionate influence in government.
The court case lives in infamy today, but few people know much about the actual people involved. I suspect Scott and Taney never imagined they would play such powerful roles in our great American story.
Taney was from Maryland, a slave state, but had long before emancipated his slaves and reportedly paid pensions to his older slaves, as well. As a young lawyer he called slavery a “blot on our national character.” What turned Taney into a pro-slavery advocate is not clear, but by 1857, Taney had hardened, going as far as to declare the abolitionist movement “northern aggression.”
It is reported that Dred Scott was originally named “Sam” but took the name of an older brother when that brother died at a young age. Scott was born into slavery in Virginia around 1800 (birth dates for slaves were often unrecorded), and made his way westward with his master, Peter Blow. By 1830, Scott was living in St. Louis, still a slave to Blow. He was sold to Army doctor John Emerson in 1831 and accompanied him to his various postings — including stations in Illinois and the Wisconsin Territory (what is now Minnesota).
In 1836, Scott married Harriett Robinson. Reports vary on whether she was a slave of Emerson’s prior to the marriage or Emerson purchased her from another military officer after she and Scott had fallen in love. The series of events underscored the painful and difficult lives slaves led. Love, like everything else, was subject to the vagaries of their owners’ dispositions.
Emerson died in 1843, leaving the Scott family to his wife, Irene. Three years later, Scott tried to buy his freedom, but to no avail. Scott’s only recourse was to file suit against Mrs. Emerson. He did so on April 6, 1846, and the case went to a Missouri court the following year. He would lose this case, but win on appeal in 1850. Emerson won her appeal in 1852, and shortly afterward gave the Scotts to her son, John Sanford, a legal resident of New York. Because two states were now involved, Scott’s appeal was filed in federal court in 1854 under the case name of Dred Scott v. John Sanford, the name that came before Taney in 1857.
History is filled with dramatic and strange twists of irony and fate. Those factors can be found throughout Scott’s battle for freedom. Peter Blow’s sons, childhood friends of Scott’s, paid his legal fees. Irene Emerson had remarried in 1850. Her new husband, Massachusetts Congressman Calvin Chaffee, was anti-slavery. Following Taney’s ruling, the now-Mrs. Calvin Chaffee, took possession of Dred, Harriett and their two daughters and either sold or simply returned the family to the Blows. In turn, the Blows freed the Scotts in May, 1857.
Dred Scott, a man whose name is so deeply-rooted in our history, so linked to the war that would end slavery, would die just five months later of tuberculosis. However, he died a free man.
All the best,
Today, 750 million people around the world live without access to clean water. This crisis disproportionately affects women, who walk a combined 200 million hours a day to collect water for their families. Stella Artois is supporting Water.org to help solve the global water crisis. Learn how you can help at http://BuyALadyADrink.com
“Who controls the past controls the future. Who controls the present controls the past.”
oh yeah, it’s a rant …
a repost `2013
So, we are in the 21st Century, Women have a constitutional right to have an abortion, yet folks like Rep. Trent Franks act as if they know what is best for all Women. Just think about that. Why is he throwing ALL Women into one basket? The fact is Women lead very different lives and make individual decisions every minute … duh, and an abortion is just one of several health care issues Women may have to encounter. The best solutions are Birth Control in all its forms as well as a safe, affordable, legal, constitutional right to abortion. I find it beyond offensive to hear Republicans imply that an abortion is chosen carelessly, and those who seem to think birth control in all its forms is a federal or states' rights issue actually use it as a Republican political football. The fact is that Republicans with Women in their lives forget that their position pushes up against the 98% of those who use birth control. They need to stop and focus on Jobs, Immigration, ending any idea of Shutdowns, and Climate Change, among just a few. I say until republicans come to their senses, which honestly doesn't look at all possible, vote for the Democratic Party that supports upward mobility as well as the middle and lower classes and the poor = equality
I came across an article by Dave Thompson from The Denver Post. It appears as though the mission to demean, control, and/or shutter a Woman's right to choose is alive and well, as North Dakota's Senate successfully passed what is called the "heartbeat" bill. My first response was … say Wha? We have just entered the twilight zone, or maybe daylight savings has caused some chaos in North Dakota, but then I read on and found that…
“Senators also approved a second bill that bans abortions based solely on genetic abnormalities, the first state ban of its kind if signed into law. The bill would also ban abortions based on the gender of the fetus, which would make North Dakota the fourth state to ban sex-selection based abortions.” All on Friday!
Call N.D. Republican Governor Jack Dalrymple at 701-328-2200 and ask him why. Ask him why he assumes Women are ill-equipped, silly, naïve, or would put up with abortion bans without a fight.
for the complete article click on the link below
Hey, whatever happened to taking "liberty" under the Bill of Rights and "freedom" under Civil Liberties seriously – and both could be at risk
On Sunday, January 22, 2012, President Obama released a statement letting Women know that he is reaffirming his promise to protect a woman’s right to choose. Announcing that “After evaluating comments, we have decided to add an additional element to the final rule Nonprofit employers who, based on religious beliefs, do not currently provide contraceptive coverage in their insurance plan, will be provided an additional year, until August 1, 2013, to comply with the new law. Employers wishing to take advantage of the additional year must certify that they qualify for the delayed implementation. This additional year will allow these organizations more time and flexibility to adapt to this new rule. We intend to require employers that do not offer coverage of contraceptive services to provide notice to employees, which will also state that contraceptive services are available at sites such as community health centers, public clinics, and hospitals with income-based support. We will continue to work closely with religious groups during this transitional period to discuss their concerns.”
There have been changes to the announcement above as well as big changes to health care for women … in a good way and more to come. If you don't know, please know that because of the new health care law women can now look forward to less discrimination and you do not have to be poor to benefit … can I just say that again, women will NOT be discriminated against anymore. We know some in the insurance field, doctors and/or hospitals will try to beat the system, but the law is there to refer to now and covers All Americans not just some. It is hard for me to believe pro-lifers do not understand that every part of a woman's health is subject to being penalized and that includes reproductive health care, which covers a wide range of health care issues. It is bad enough that lawmakers actually would subject women to demeaning practices like undergoing a transvaginal scope or making them wait 72 hours, but then to make doctors liable for jail time too. I have to say that among other ridiculous laws that need a vote in Congress, the Hyde Amendment requires a vote every year … the Hyde Amendment is a legislative provision barring the use of certain federal funds to pay for abortions, with exceptions for incest and rape. It is not a permanent law; rather it is a "rider" that, in various forms, has been routinely attached to annual appropriations bills since 1976. The Hyde Amendment applies only to funds allocated by the annual appropriations bill for the Department of Health and Human Services. It primarily affects Medicaid. (wiki)
I also admit that it pisses me off that the latest group of people in office are still getting away with saying one thing and doing another, which includes forcing their "family values" platform/ideology on what I thought were free Americans. What year is it again? If the Republican Tea Party truly wants smaller government, they should stop trying to control women and their bodies and/or change laws for the sake of that "family values" platform; that is definitely the epitome of big government and an invasion of privacy.
The right seems to be aligning their demands for stricter abortion laws one state at a time. I cannot be the only one tired of the "Do as we say, Not as we do" Political Party of NO. It has my blood boiling. Now, Tea publicans running for President and some media folks are saying it is time to move on from nasty politics. I say if you want to become President of the US of A, give Americans full disclosure. Women need to know if you support unnecessary procedures like a transvaginal scope … Yet, the same people who accuse President Obama of withholding information from the public or being un-American get offended when asked to provide personal information. We are their constituents; we all deserve to know how these people will vote on issues of religion, race, gender, and/or abortion. The beliefs of members of Congress dictate how their votes will affect our constitutional rights. If you were listening, for three years conservative politicians and some conservadems have been ramping up vitriolic "family values" rhetoric, pushing the discussion of women's rights, religion, race and gender preference up to the surface to rile their base. It is obvious now that Republican Governors had a plan to take the rhetoric a step further by passing anti-abortion legislation all over the country; in fact, as stated by NWLC – "Ninety-two. That's the number of anti-abortion measures passed into law across the U.S. in 2011. In addition, in case you are wondering, yes, that is a record — in fact, it is over 2.5 times the previous record."
It is bad enough that in the year 2014, Women must continue to fight for our rights, let alone safe affordable access to reproductive health care. Now, as we move into 2015, the 114th Congress controlled by Republicans decided to vote on reproductive rights on the anniversary of Roe v. Wade. I can't write what I am thinking about this right now. This is incredible since there are more female members of Congress, approximately 101, yes, mostly tea party members who say they are fiscally conservative and want less government in their (our) lives. Yet, topics like abortion, stem cell research/experiments and religious freedom have them not just flustered but have their undies in a bunch about abortion funding, and they seem to be moving to have abortion outlawed altogether if possible. I could not vote for a woman who feels I am not qualified, mature enough or have no right to choose, no matter what side of the political aisle they sit on. The fact is, women who choose to have an abortion do so with trepidation, not because they are heartless, but because they are out of viable options, or the fetus is not viable or at risk, or both mom and fetus are at risk, and FYI the decision is discussed with a counselor/doctor. The choice to have an abortion is not an easy one, and offering a safe procedure is better than leaving a woman desperate enough to take actions that could put her life at risk – it is moral … it's the right thing to do. The idea that any member of Congress would want to control a woman's body is ludicrous at best and, again, the epitome of BIG Government; they should accept the Hyde Amendment as the law. It makes me want to scream hey, stay out of our lives, we are in the 21st Century. The Tea publican ideology is clearly barbaric; it spews old school dogma and not only crosses the line, it has solidified a need, a call for an unprecedented grassroots movement to keep our Democracy safe
If you live in a Republican-controlled State and need or know someone in need of safe affordable healthcare with limited funds, your life has got to be beyond difficult. Now, imagine the impact that repealing, replacing and eliminating access would have on ALL our families, friends or co-workers. Let alone the idea that some Republicans want to go back to a time when women and people of colour had no rights; seen but not heard, and yes, it sounds silly, but before you laugh, take some time and listen closely to Tea publicans running for office.
Just when I thought we were all moving into the 21st century … sigh
Resource: wiki
So, why did I go to urban dictionary for the definition of Feminism?
I got my Cosmo in the mail and while the fashions are fun, some gaudy, others worthy of a second look or two, most are out of my price and age range, but when I see hair and beauty products, well now that is a whole different response entirely. As I was thumbing through one of many magazines, which is another bad habit, an article about feminism popped up and yes, folks are questioning Beyoncé among others with headlines such as … "Can you be Sexy and a Feminist" or, as Cosmo asks, "Can you be a Sexy Feminist?" It was a quick read and in all honesty I don't spend a whole lot of my time dissecting labels, but I will say that being a feminist used to be defined as a woman who didn't appreciate men; some said they despised them. We were advised to always question the roles of men & women, demanded equal access to education, suggested being a companion, forget about being happily married, lest we acquiesce simply because we are women. I don't subscribe to hating on men; I like men on several levels, and that includes my dad, my kids' father, my son and a couple of bosses who happened to be male. As a side note, on a political level, Republican men are the bane of our existence in my opinion. When it comes to being a participant, I have to admit, I too, have danced to fabulous music with misogynistic and/or chauvinistic words. It's definitely not something I used to think about while dancing, but I have gotten upset when it became clear what is being said; generally this kind of talk would get a whole different response if these words were being exchanged through a conversation. However, it does appear that the word feminism and/or being a feminist in this 21st-century society is ever changing, ever evolving to being about a belief in equality and the rights of everyone in all its forms and genders. I see Urban Dictionary as a place not only run by a younger group of folks but also used by folks who research the "stuff" they post. As you read on, Cosmo asked stars like Lady Gaga, Lana Del Rey and Taylor Swift just to name a few, but when Pharrell was asked he stated, "I don't think it's possible for me to be (a feminist). I'm a man, but I do support feminists."
Anyway, an article worth reading in Cosmo September 2014 ~~ Nativegrl77
What do you think? Is being a feminist gender specific? | http://beaseedforchange.org/tag/united-states/ |
4.0625 | Little House in the Big Woods Teacher Resources
Find Little House in the Big Woods educational ideas and activities
Showing 1 - 20 of 47 resources
Little House in the Big Woods
Strengthen your learners' relationship with Laura Ingalls Wilder's classic novel of a pioneer family with these materials. Multiple choice comprehension questions and a set of reflection prompts are provided for each pair of chapters in...
3rd - 4th English Language Arts
Little House In the Big Woods
Students explore economics by reading classic literature with their classmates. In this farm production activity, students read the famous story Little House in the Big Woods by Laura Ingalls Wilder. Students complete handouts based upon...
5th - 7th English Language Arts
Little House in the Big Woods Reading Activity
Students use text and technology to access information about history as seen by Laura Ingalls Wilder in her book Little House in the Big Woods. They read book, research facts related to the book, and give their interpretation of the...
5th - 6th English Language Arts
Is Laura "Sitting" in the "Setting" of "Little House?"
Third graders relate the vowel sounds of short "e" and short "i" to their most common spellings using the book "Little House in the Big Woods" by Laura Ingalls Wilder. They write dictated sentences, watch a PowerPoint presentation,...
3rd English Language Arts
Spelling and Punctuation: Dictation From Little House in the Big Woods
In this dictation worksheet, students listen to a teacher read a short passage from Little House In The Big Woods by Laura Ingalls Wilder. Students write the passage as it is dictated, paying attention to spelling and punctuation....
4th - 5th English Language Arts
Little House in the Big Woods Vocabulary: Chapter 7
In this Little House in the Big Woods instructional activity, students write the number of the correct vocabulary word on the line before its provided definition. All five words can be found in Chapter 7 of Little House in the Big Woods.
2nd - 5th English Language Arts
Author Study: Laura Ingalls Wilder
Students read novel, Little House in the Big Woods, explore web sites and other resources devoted to author, Laura Ingalls Wilder, complete Venn Diagram showing ways they and author are alike and different, and create diorama, read...
7th - 8th English Language Arts | http://www.lessonplanet.com/lesson-plans/little-house-in-the-big-woods |
4.125 | Standards in this strand:
Demonstrate understanding of the organization and basic features of print.
Follow words from left to right, top to bottom, and page by page.
Recognize that spoken words are represented in written language by specific sequences of letters.
Understand that words are separated by spaces in print.
Recognize and name all upper- and lowercase letters of the alphabet.
Demonstrate understanding of spoken words, syllables, and sounds (phonemes).
Recognize and produce rhyming words.
Count, pronounce, blend, and segment syllables in spoken words.
Blend and segment onsets and rimes of single-syllable spoken words.
Isolate and pronounce the initial, medial vowel, and final sounds (phonemes) in three-phoneme (consonant-vowel-consonant, or CVC) words.1 (This does not include CVCs ending with /l/, /r/, or /x/.)
Add or substitute individual sounds (phonemes) in simple, one-syllable words to make new words.
Phonics and Word Recognition:
Know and apply grade-level phonics and word analysis skills in decoding words.
Demonstrate basic knowledge of one-to-one letter-sound correspondences by producing the primary sound or many of the most frequent sounds for each consonant.
Associate the long and short sounds with the common spellings (graphemes) for the five major vowels.
Read common high-frequency words by sight (e.g., the, of, to, you, she, my, is, are, do, does).
Distinguish between similarly spelled words by identifying the sounds of the letters that differ.
Read emergent-reader texts with purpose and understanding.
1 Words, syllables, or phonemes written in /slashes/ refer to their pronunciations or phonology. Thus, /CVC/ is a word with three phonemes regardless of the number of letters in the spelling of the word. | http://www.corestandards.org/ELA-Literacy/RF/K/
4 | The answer to any math problem depends upon the question being asked. In most math problems, one needs to determine a missing variable. For instance, if a problem reads as 2 + 3 = ?, one needs to figure out what the number after the equals sign should be.
In other cases, one may see a number after the equals sign but a letter or two letters in the middle of the problem. That looks like the following: 2+a=5. In questions like this one, the answer is the number that accurately replaces the letter.
Many math problems are story problems. In order to solve a story problem, one must create an equation based on the information presented in the story, and then solve for the variable in that equation.
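As a quick illustration (not part of the original answer), the two examples above can be checked in a few lines of Python:

```python
# Minimal sketch: finding the missing value in the two examples above.

# "2 + 3 = ?"  : the missing number after the equals sign
answer = 2 + 3
print(answer)   # prints 5

# "2 + a = 5"  : the value that accurately replaces the letter
a = 5 - 2       # subtract the known addend from the sum
print(a)        # prints 3
```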
The numbers to add in an addition problem are called addends, summands or terms, while the answer to the problem is the sum. In the number sentence a+b=c, a and b are addends, while c is the sum.
A net change in math is the total of all of the changes completed throughout the solving of a problem. The net change is reflected in a numerical amount and can be positive, negative or at zero.
One funny math problem is: "I am an odd number. Take away one letter and I become even. What number am I?" The answer is seven. The humor in this problem becomes apparent when the solver recognizes that the mathematics of odd and even numbers has been mixed with wordplay.
The math answers to all problems can not be contained in a succinct answer. Each math problem or equation has its own answer and must be individually solved. There are numerous online resources to assist with solving any given problem. | http://www.ask.com/math/answer-math-problem-4a4b7fc070e47712
4.03125 | How to define circumscribed and inscribed circles and polygons relative to each other.
How to calculate the measure of an inscribed angle.
How to define a polygon, how to distinguish between concave and convex polygons, how to name polygons.
How to define the apothem and center of a polygon; how to divide a regular polygon into congruent triangles.
Naming polygons, classifying triangles, and classifying quadrilaterals
How to determine the number of diagonals in a polygon.
How to find the sum of the exterior angles in a polygon and find the measure of one exterior angle in an equiangular polygon.
How to derive the formula to find the sum of angles in any polygon.
How to derive the formula to calculate the area of a regular polygon.
How to find the measure of one angle in any equiangular or regular polygon.
How to convert between length and area ratios of similar polygons
How to calculate the surface area of any pyramid, emphasizing regular polygons as bases.
How to prove that an angle inscribed in a semicircle is a right angle; how to solve for arcs and angles formed by a chord drawn to a point of tangency.
How to identify if two figures are similar.
How to find the length of tangent segments drawn to a circle from the same point.
How to calculate the area between a square and an inscribed circle.
How to find the volume of any pyramid.
How to prove that opposite angles in a cyclic quadrilateral are congruent; how to prove that parallel lines create congruent arcs in a circle.
How to determine if two triangles in a circle are similar and how to prove that three similar triangles exist in a right triangle with an altitude.
How the vocabulary word vertex applies to different objects in Geometry. | https://www.brightstorm.com/tag/inscribed-polygon/ |
4 | This is a series of 10 short videos, hosted by National Science Foundation, each featuring scientists, research, and green technologies. The overall goal of this series is to encourage people to ask questions and look beyond fossil fuels for innovative solutions to our ever-growing energy needs.
This narrated slide show gives a brief overview of coral biology and how coral reefs are in danger from pollution, ocean temperature change, ocean acidification, and climate change. In addition, scientists discuss how taking cores from corals yields information on past changes in ocean temperature.
This video shows where and how ice cores are extracted from the West Antarctic Ice Sheet (WAIS), cut, packaged, flown to the ice core storage facility in Denver, further sliced into samples, and shipped to labs all over the world where scientists use them to study indicators of climate change from the past.
C-Learn is a simplified version of the C-ROADS simulator. Its primary purpose is to help users understand the long-term climate effects (CO2 concentrations, global temperature, sea level rise) of various customized actions to reduce fossil fuel CO2 emissions, reduce deforestation, and grow more trees. Students can ask multiple, customized what-if questions and understand why the system reacts as it does.
In this activity, students use Google Earth and information from several websites to investigate some of the consequences of climate change in polar regions, including the shrinking of the ice cap at the North Pole, disintegration of ice shelves, melting of Greenland, opening of shipping routes, effects on polar bears, and possible secondary effects on climate in other regions due to changes in ocean currents. Students learn to use satellite and aerial imagery, maps, graphs, and statistics to interpret trends accompanying changes in the Earth system. | https://www.climate.gov/teaching/resources/education/high-school-9-12?keywords=&page=8 |
4.125 | Findings by Scripps scientists cast new light on undersea volcanoes
Study in Science may help change the broad understanding of how they are formed
Researchers at Scripps Institution of Oceanography at the University of California, San Diego, have produced new findings that may help alter commonly held beliefs about how chains of undersea mountains formed by volcanoes, or "seamounts," are created. Such mountains can rise thousands of feet off the ocean floor in chains that span thousands of miles across the ocean.
Since the mid-20th century, the belief that the earth's surface is covered by large, shifting plates (a concept known as plate tectonics) has shaped conventional thinking on how seamount chains develop. Textbooks have taught students that seamount patterns are shaped by changes in the direction and motion of the plates. As a plate moves, stationary "hot spots" below the plate produce magma that forms a series of volcanoes in the direction of the plate motion.
Now, Anthony Koppers and Hubert Staudigel of Scripps have published a study that counters the idea that hot spots exist in fixed positions. The paper in the Feb. 11 issue of Science shows that hot spot chains can change direction as a result of processes unrelated to plate motion. The new research adds further to current scientific debates on hot spots and provides information for a better understanding of the dynamics of the earth's interior.
To investigate this phenomenon, Staudigel led a research cruise in 1999 aboard the Scripps research vessel Melville to the Pacific Ocean's Gilbert Ridge and Tokelau Seamounts near the international date line, a few hundred miles north of American Samoa and just south of the Marshall Islands.
Gilbert and Tokelau are the only seamount trails in the Pacific that bend in sharp, 60-degree angles--comparable in appearance to hockey sticks--similar to the bending pattern of the Hawaii-Emperor seamount chain (which includes the Hawaiian Islands).
Assuming that these three chains were created by fixed hot spots, the bends in the Gilbert Ridge and Tokelau Seamounts should have been created at roughly the same time period as the bend in the Hawaii-Emperor chain, the conventional theory holds.
Koppers, Staudigel and a team of student researchers aboard the Melville spent six weeks exploring the ocean floor at Gilbert and Tokelau. They used deep-sea dredges to collect volcanic rock samples from the area.
For the next several years, Koppers used laboratory instruments to analyze the composition of the rock samples and calculate their ages.
"It was quite a surprise that we found the Gilbert and Tokelau seamount bends to have completely different ages than we expected," said Koppers, a researcher at the Cecil H. and Ida M. Green Institute of Geophysics and Planetary Physics at Scripps. "We certainly didn't expect that they were 10 and 20 million years older than previously thought."
Instead of forming 47 million years ago, as did the Hawaiian-Emperor bend, the Gilbert chain was found to be 67 million years old and the Tokelau 57 million years old.
"I think this really hammers it in that the origin of the alignment of these seamount chains may be much more complicated than we previously believed, or the alignment may not have anything to do with plate motion changes," said Staudigel.
Although they do not have positive proof as yet, Koppers and Staudigel speculate that local stretching of the plate may allow magma to rise to the surface or that hot spots themselves might move. Together with plate motion, these alternate processes may be responsible for the resulting pattern of seamounts.
Koppers and Staudigel will go to sea again next year to seek additional clues to the hot spot and seamount mysteries.
"Seamount trails are thousands of kilometers long and even if we are out collecting for several weeks, we still only cover a limited area," said Koppers. "One of the things holding us back in developing a new theory is that the oceans are humongous and our database is currently very small we are trying to understand a very big concept."
Source: Eurekalert & others. Last reviewed by John M. Grohol, Psy.D. on 21 Feb 2009
Published on PsychCentral.com. All rights reserved. | http://psychcentral.com/news/archives/2005-02/uoc--fbs021005.html |
4.25 | Orbit of the Moon
- Not to be confused with Lunar orbit (the orbit of an object around the Moon).
Diagram of the Earth–Moon system (figure)
The Moon orbits Earth in the prograde direction and completes one revolution relative to the stars in approximately 27.322 days (a sidereal month). Earth and the Moon orbit about their barycentre (common center of mass), which lies about 4600 km from Earth's center (about three quarters of the radius of Earth). On average, the Moon is at a distance of about 385000 km from Earth's center, which corresponds to about 60 Earth radii. With a mean orbital velocity of 1.022 km/s, the Moon moves relative to the stars each hour by an amount roughly equal to its angular diameter, or by about 0.5°. The Moon differs from most satellites of other planets in that its orbit is close to the plane of the ecliptic, and not to Earth's equatorial plane. The plane of the lunar orbit is inclined to the ecliptic by about 5.1°, whereas the Moon's spin axis is inclined by only 1.5°.
The orbit of the Moon is distinctly elliptical, with an average eccentricity of 0.0549. The non-circular form of the lunar orbit causes variations in the Moon's angular speed and apparent size as it moves towards and away from an observer on Earth. The mean angular movement relative to an imaginary observer at the barycentre is 13.176° per day to the east (Julian Day 2000.0 rate).
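That rate follows directly from the sidereal period quoted above; a minimal sketch of the arithmetic (Python is used purely for illustration):

```python
# Sketch: the Moon's mean angular motion from its sidereal period (~27.322 days).
sidereal_month_days = 27.322

deg_per_day = 360.0 / sidereal_month_days
deg_per_hour = deg_per_day / 24.0

print(f"{deg_per_day:.3f} degrees per day")    # about 13.18 deg/day
print(f"{deg_per_hour:.3f} degrees per hour")  # about 0.55 deg/hour, roughly the Moon's angular diameter
```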
The Moon's elongation is its angular distance east of the Sun at any time. At new moon, it is zero and the Moon is said to be in conjunction. At full moon, the elongation is 180° and it is said to be in opposition. In both cases, the Moon is in syzygy, that is, the Sun, Moon and Earth are nearly aligned. When elongation is either 90° or 270°, the Moon is said to be in quadrature.
The orientation of the orbit is not fixed in space, but rotates over time. This orbital precession is also called apsidal precession and is the rotation of the Moon's orbit within the orbital plane, i.e. the axes of the ellipse change direction. The Moon's major axis – the longest diameter of the orbit, joining its nearest and farthest points, the perigee and apogee, respectively – makes one complete revolution every 8.85 Earth years, or 3,232.6054 days, as it rotates slowly in the same direction as the Moon itself (direct motion). The Moon's apsidal precession is distinct from, and should not be confused with its axial precession.
The mean inclination of the lunar orbit to the ecliptic plane is 5.145°. The rotational axis of the Moon is also not perpendicular to its orbital plane, so the lunar equator is not in the plane of its orbit, but is inclined to it by a constant value of 6.688° (this is the obliquity). This does not mean that, as a result of the precession of the Moon's orbital plane, the angle between the lunar equator and the ecliptic would vary between the sum (11.833°) and difference (1.543°) of these two angles, but, as was discovered by Jacques Cassini in 1722, the rotational axis of the Moon precesses with the same rate as its orbital plane, but is 180° out of phase (see Cassini's Laws). Therefore, the angle between the ecliptic and the lunar equator is always 1.543°, even though the rotational axis of the Moon is not fixed with respect to the stars.
Because of the inclination of the moon's orbit, the moon is above the horizon at the North and South Pole for almost two weeks every month, even though the sun is below the horizon for six months at a time. The period from moonrise to moonrise at the poles is quite close to the sidereal period, or 27.3 days. When the sun is the furthest below the horizon (mid winter), the moon will be full when it is at its highest point.
The moon's light is used by zooplankton in the Arctic when the sun is below the horizon for months and must have been helpful to the animals that lived in Arctic and Antarctic regions when the climate was warmer.
The nodes are points at which the Moon's orbit crosses the ecliptic. The Moon crosses the same node every 27.2122 days, an interval called the draconic or draconitic month. The line of nodes, the intersection between the two respective planes, has a retrograde motion: for an observer on Earth it rotates westward along the ecliptic with a period of 18.60 years, or 19.3549° per year. When viewed from celestial north, the nodes move clockwise around Earth, opposite to Earth's own spin and its revolution around the Sun. Lunar and solar eclipses can occur when the nodes align with the Sun, roughly every 173.3 days. Lunar orbit inclination also determines eclipses; shadows cross when nodes coincide with full and new moon, when the Sun, Earth, and Moon align in three dimensions.
Every 18.6 years, the angle between the moon's orbit and the earth's equator reaches a maximum of 28°36′ (the sum of the Earth's inclination 23°27′ and the Moon's inclination 5°09′). This is called major lunar standstill. Around this time, the moon's latitude will vary from −28°36′ to +28°36′. 9.3 years later, the angle between the moon's orbit and the earth's equator reaches its minimum, 18°20′. This is called a minor lunar standstill.
When the inclination of the moon's orbit to the earth's equator is at its minimum of 18°20′, the centre of the moon's disk will be above the horizon every day as far north and as far south as 90°−18°20' or 71°40' latitude, whereas when the inclination is at its maximum of 28°36' the centre of the moon's disk will only be above the horizon every day for latitudes less than 90°−28°36' or 61°24'. At higher latitudes there will be a period of at least a day each month when the moon does not rise, but there will also be a period of at least a day each month during which the moon does not set. This is similar to the behavior of the sun, but with a period of 27.3 days instead of 365 days. Note that a point on the moon can actually be visible when it is below the horizon by about 34 arc minutes, due to refraction (see Sunrise).
Scale model of the Earth–Moon system: Each pixel represents 500 km (310 mi). Sizes and distances are to scale.
History of observations and measurements
About 3,000 years ago, the Babylonians were the first human civilization to keep a consistent record of lunar observations. Clay tablets from that period, which have been found over the territory of present-day Iraq, are inscribed with cuneiform writing recording the times and dates of moonrises and moonsets, the stars that the Moon passed close by, and the time differences between rising and setting of both the Sun and the Moon around the time of the full moon. Babylonian astronomy discovered the three main periods of the Moon's motion and used data analysis to build lunar calendars that extended well into the future. This use of detailed, systematic observations to make predictions based on experimental data may be classified as the first scientific study in human history. However, the Babylonians seem to have lacked any geometrical or physical interpretation of their data, and they could not predict future lunar eclipses (although "warnings" were issued before likely eclipse times).
Ancient Greek astronomers were the first to introduce and analyze mathematical models of the motion of objects in the sky. Ptolemy described lunar motion by using a well-defined geometric model of epicycles and evection.
|Name||Value (days)||Definition|
|Sidereal month||27.321662||with respect to the distant stars (13.36874634 passes per solar orbit)|
|Synodic month||29.530589||with respect to the Sun (phases of the Moon, 12.36874634 passes per solar orbit)|
|Tropical month||27.321582||with respect to the vernal point (precesses in ~26,000 years)|
|Anomalistic month||27.554550||with respect to the perigee (precesses in 3232.6054 days = 8.850578 years)|
|Draconic month||27.212221||with respect to the ascending node (precesses in 6793.4765 days = 18.5996 years)|
There are several different periods associated with the lunar orbit. The sidereal month is the time it takes to make one complete orbit around Earth with respect to the fixed stars. It is about 27.32 days. The synodic month is the time it takes the Moon to reach the same visual phase. This varies notably throughout the year, but averages around 29.53 days. The synodic period is longer than the sidereal period because the Earth–Moon system moves in its orbit around the Sun during each sidereal month, hence a longer period is required to achieve a similar alignment of Earth, the Sun, and the Moon. The anomalistic month is the time between perigees and is about 27.55 days. The Earth–Moon separation determines the strength of the lunar tide raising force.
The draconic month is the time from ascending node to ascending node. The time between two successive passes of the same ecliptic longitude is called the tropical month. The latter three periods are slightly different from the sidereal month.
The average length of a calendar month (a twelfth of a year) is about 30.4 days. This is not a lunar period, though the calendar month is historically related to the visible lunar phase.
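The difference between the sidereal and synodic months described above can be checked numerically. A small sketch, assuming a sidereal year of about 365.256 days (a standard value not stated in this article):

```python
# Sketch: deriving the synodic month from the sidereal month and Earth's orbital period.
# 1/synodic = 1/sidereal_month - 1/sidereal_year, because the Sun's apparent position
# drifts eastward while the Moon completes one sidereal revolution.

sidereal_month = 27.321662   # days (from the table above)
sidereal_year = 365.256363   # days (assumed standard value)

synodic_month = 1.0 / (1.0 / sidereal_month - 1.0 / sidereal_year)
print(f"synodic month ~ {synodic_month:.5f} days")   # about 29.53 days
```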
The gravitational attraction that the Moon exerts on Earth is the cause of tides in the sea; the Sun has a lesser tidal influence. If Earth had a global ocean of uniform depth, the Moon would act to deform both the solid Earth (by a small amount) and the ocean in the shape of an ellipsoid with the high points roughly beneath the Moon and on the opposite side of Earth. However, because of the presence of the continents, Earth's much faster rotation and varying ocean depths, this simplistic visualisation does not happen. Although the tidal flow period is generally synchronized to the Moon's orbit around Earth, its relative timing varies greatly. In some places on Earth, there is only one high tide per day, whereas others have four, though this is somewhat rare.
The notional tidal bulges are carried ahead of the Earth–Moon axis by the continents as a result of Earth's rotation. The eccentric mass of each bulge exerts a small amount of gravitational attraction on the Moon, with the bulge on the side of Earth closest to the Moon pulling in a direction slightly forward along the Moon's orbit (because Earth's rotation has carried the bulge forward). The bulge on the side furthest from the Moon has the opposite effect, but because the gravitational attraction varies inversely with the square of distance, the effect is stronger for the near-side bulge. As a result, some of Earth's angular (or rotational) momentum is gradually being transferred to the rotation of the Earth–Moon pair around their mutual centre of mass, called the barycentre. This slightly faster rotation causes the Earth–Moon distance to increase at approximately 38 millimetres per year. Conservation of angular momentum means that Earth's axial rotation is gradually slowing, and because of this its day lengthens by approximately 23 microseconds every year (excluding glacial rebound). Both figures are valid only for the current configuration of the continents. Tidal rhythmites from 620 million years ago show that, over hundreds of millions of years, the Moon receded at an average rate of 22 millimetres per year and the day lengthened at an average rate of 12 microseconds per year, both about half of their current values. See tidal acceleration for a more detailed description and references.
The Moon is gradually receding from Earth into a higher orbit, and calculations suggest that this would continue for about fifty billion years. By that time, Earth and the Moon would be in a mutual spin–orbit resonance or tidal locking, in which the Moon will orbit Earth in about 47 days (currently 27 days), and both the Moon and Earth would rotate around their axes in the same time, always facing each other with the same side. This has already happened to the Moon (the same side always faces Earth), and it is slowly happening to Earth as well. However, the slowdown of Earth's rotation is not occurring fast enough for the rotation to lengthen to a month before other effects change the situation: approximately 2.3 billion years from now, the increase of the Sun's radiation will have caused Earth's oceans to evaporate, removing the bulk of the tidal friction and acceleration.
The Moon is in synchronous rotation, meaning that it keeps the same face toward Earth at all times. This synchronous rotation is only true on average, because the Moon's orbit has a definite eccentricity. As a result, the angular velocity of the Moon varies as it orbits Earth and hence is not always equal to the Moon's rotational velocity. When the Moon is at its perigee, its rotation is slower than its orbital motion, and this allows us to see up to eight degrees of longitude of its eastern (right) far side. Conversely, when the Moon reaches its apogee, its rotation is faster than its orbital motion and this reveals eight degrees of longitude of its western (left) far side. This is referred to as longitudinal libration.
Because the lunar orbit is also inclined to Earth's ecliptic plane by 5.1°, the rotational axis of the Moon seems to rotate towards and away from Earth during one complete orbit. This is referred to as latitudinal libration, which allows one to see almost 7° of latitude beyond the pole on the far side. Finally, because the Moon is only about 60 Earth radii away from Earth's centre of mass, an observer at the equator who observes the Moon throughout the night moves laterally by one Earth diameter. This gives rise to a diurnal libration, which allows one to view an additional one degree's worth of lunar longitude. For the same reason, observers at both of Earth's geographical poles would be able to see one additional degree's worth of libration in latitude.
Path of Earth and Moon around Sun
When viewed from the north celestial pole, i.e. from the star Polaris, the Moon orbits Earth anticlockwise and Earth orbits the Sun anticlockwise, and the Moon and Earth rotate on their own axes anticlockwise.
The right-hand rule can be used to indicate the direction of the angular velocity. If the thumb of the right hand points to the north celestial pole, its fingers curl in the direction that the Moon orbits Earth, Earth orbits the Sun, and the Moon and Earth rotate on their own axes.
In representations of the Solar System, it is common to draw the trajectory of Earth from the point of view of the Sun, and the trajectory of the Moon from the point of view of Earth. This could give the impression that the Moon orbits Earth in such a way that sometimes it goes backwards when viewed from the Sun's perspective. Because the orbital velocity of the Moon around Earth (1 km/s) is small compared to the orbital velocity of Earth about the Sun (30 km/s), this never happens. There are no rearward loops in the Moon's solar orbit.
Considering the Earth–Moon system as a binary planet, its centre of gravity is within Earth, about 4,624 km from its centre or 72.6% of its radius. This centre of gravity remains in-line towards the Moon as Earth completes its diurnal rotation. It is this mutual centre of gravity that defines the path of the Earth–Moon system in the solar orbit. Consequently, Earth's centre veers inside and outside the orbital path during each synodic month as the Moon moves in the opposite direction.
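The quoted barycentre distance can be reproduced from the Earth/Moon mass ratio and the mean separation. A rough sketch using commonly quoted values (the exact result depends on which mean distance and masses are chosen, which is why it does not land on 4,624 km exactly):

```python
# Sketch: distance of the Earth-Moon barycentre from Earth's centre.
# r_barycentre = a * M_moon / (M_earth + M_moon)

a_km = 384_400.0        # mean Earth-Moon separation, km (commonly quoted value)
m_earth = 5.972e24      # kg
m_moon = 7.342e22       # kg
r_earth_km = 6_371.0    # mean Earth radius, km

r_bary_km = a_km * m_moon / (m_earth + m_moon)
print(f"barycentre ~ {r_bary_km:.0f} km from Earth's centre")             # roughly 4,700 km
print(f"that is ~ {100 * r_bary_km / r_earth_km:.0f}% of Earth's radius") # roughly 73%
```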
Unlike most moons in the Solar System, the trajectory of the Moon around the Sun is very similar to that of Earth. The Sun's gravitational effect on the Moon is more than twice that of Earth's on the Moon; consequently, the Moon's trajectory is always convex (as seen when looking Sunward at the entire Sun–Earth–Moon system from a great distance outside Earth–Moon solar orbit), and is nowhere concave (from the same perspective) or looped. That is, the region enclosed by the Moon's orbit of the Sun is a convex set.
- The geometric mean distance in the orbit (of ELP)
- M. Chapront-Touzé; J. Chapront (1983). "The lunar ephemeris ELP-2000". Astronomy & Astrophysics 124: 54. Bibcode:1983A&A...124...50C.
- The constant in the ELP expressions for the distance, which is the mean distance averaged over time
- M. Chapront-Touzé; J. Chapront (1988). "ELP2000-85: a semi-analytical lunar ephemeris adequate for historical times". Astronomy & Astrophysics 190: 351. Bibcode:1988A&A...190..342C.
- This often quoted value for the mean distance is actually the inverse of the mean of the inverse of the distance, which is not the same as the mean distance itself.
- Jean Meeus, Mathematical astronomy morsels (Richmond, VA: Willmann-Bell, 1997) 11–12.
- Lang, Kenneth R. (2011), The Cambridge Guide to the Solar System, 2nd ed., Cambridge University Press.
- "Moon Fact Sheet". NASA. Retrieved 2014-01-08.
- Martin C. Gutzwiller (1998). "Moon-Earth-Sun: The oldest three-body problem". Reviews of Modern Physics 70 (2): 589–639. Bibcode:1998RvMP...70..589G. doi:10.1103/RevModPhys.70.589.
- "Moonlight helps plankton escape predators during Arctic winters". New Scientist. Jan 16, 2016.
- The periods are calculated from orbital elements, using the rate of change of quantities at the instant J2000. The J2000 rate of change equals the coefficient of the first-degree term of VSOP polynomials. In the original VSOP87 elements, the units are arcseconds(”) and Julian centuries. There are 1,296,000” in a circle, 36525 days in a Julian century. The sidereal month is the time of a revolution of longitude λ with respect to the fixed J2000 equinox. VSOP87 gives 1732559343.7306” or 1336.8513455 revolutions in 36525 days–27.321661547 days per revolution. The tropical month is similar, but the longitude for the equinox of date is used. For the anomalistic year, the mean anomaly (λ-ω) is used (equinox does not matter). For the draconic month, (λ-Ω) is used. For the synodic month, the sidereal period of the mean Sun (or Earth) and the Moon. The period would be 1/(1/m-1/e). VSOP elements from Simon, J.L.; Bretagnon, P.; Chapront, J.; Chapront-Touzé, M.; Francou, G.; Laskar, J. (February 1994). "Numerical expressions for precession formulae and mean elements for the Moon and planets". Astronomy and Astrophysics 282 (2): 669. Bibcode:1994A&A...282..663S.
- Jean Meeus, Astronomical Algorithms (Richmond, VA: Willmann-Bell, 1998) p 354. From 1900–2100, the shortest time from one new moon to the next is 29 days, 6 hours, and 35 min, and the longest 29 days, 19 hours, and 55 min.
- C.D. Murray; S.F. Dermott (1999). Solar System Dynamics. Cambridge University Press. p. 184.
- Dickinson, Terence (1993). From the Big Bang to Planet X. Camden East, Ontario: Camden House. pp. 79–81. ISBN 0-921820-71-2.
- Caltech Scientists Predict Greater Longevity for Planets with Life
- The reference by H. L. Vacher (2001) (details separately cited in this list) describes this as 'convex outward', whereas older references such as "The Moon's Orbit Around the Sun, Turner, A. B. Journal of the Royal Astronomical Society of Canada, Vol. 6, p. 117, 1912JRASC...6..117T"; and "H Godfray, Elementary Treatise on the Lunar Theory" describe the same geometry by the words concave to the sun.
- Aslaksen, Helmer (2010). "The Orbit of the Moon around the Sun is Convex!". Retrieved 2006-04-21.
- The Moon Always Veers Toward the Sun at MathPages
- Vacher, H.L. (November 2001). "Computational Geology 18 – Definition and the Concept of Set" (PDF). Journal of Geoscience Education 49 (5): 470–479. Retrieved 2006-04-21. | https://en.wikipedia.org/wiki/Moon_orbit |
4.09375 | Nuclear fusion and nuclear fission are different types of reactions that release energy due to the presence of high-powered atomic bonds between particles found within a nucleus. In fission, an atom is split into two or more smaller, lighter atoms. Fusion, in contrast, occurs when two or more smaller atoms fuse together, creating a larger, heavier atom.
|Nuclear Fission||Nuclear Fusion|
|Definition||Fission is the splitting of a large atom into two or more smaller ones.||Fusion is the fusing of two or more lighter atoms into a larger one.|
|Natural occurrence of the process||Fission reaction does not normally occur in nature.||Fusion occurs in stars, such as the sun.|
|Byproducts of the reaction||Fission produces many highly radioactive particles.||Few radioactive particles are produced by fusion reaction, but if a fission "trigger" is used, radioactive particles will result from that.|
|Conditions||Critical mass of the substance and high-speed neutrons are required.||High density, high temperature environment is required.|
|Energy Requirement||Takes little energy to split two atoms in a fission reaction.||Extremely high energy is required to bring two or more protons close enough that nuclear forces overcome their electrostatic repulsion.|
|Energy Released||The energy released by fission is a million times greater than that released in chemical reactions, but lower than the energy released by nuclear fusion.||The energy released by fusion is three to four times greater than the energy released by fission.|
|Nuclear weapon||One class of nuclear weapon is a fission bomb, also known as an atomic bomb or atom bomb.||One class of nuclear weapon is the hydrogen bomb, which uses a fission reaction to "trigger" a fusion reaction.|
|Energy production||Fission is used in nuclear power plants.||Fusion is an experimental technology for producing power.|
|Fuel||Uranium is the primary fuel used in power plants.||Hydrogen isotopes (Deuterium and Tritium) are the primary fuel used in experimental fusion power plants.|
Nuclear fusion is the reaction in which two or more nuclei combine, forming a new element with a higher atomic number (more protons in the nucleus). The energy released in fusion is related to E = mc² (Einstein's famous energy-mass equation). On Earth, the most likely fusion reaction is the Deuterium–Tritium reaction. Deuterium and Tritium are isotopes of hydrogen.
²₁D (Deuterium) + ³₁T (Tritium) → ⁴₂He + ¹₀n + 17.6 MeV
Nuclear fission is the splitting of a massive nucleus into photons in the form of gamma rays, free neutrons, and other subatomic particles. In a typical nuclear reaction involving 235U and a neutron:
²³⁵₉₂U + n → ²³⁶₉₂U
²³⁶₉₂U → ¹⁴⁴₅₆Ba + ⁸⁹₃₆Kr + 3n + 177 MeV
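Both quoted energy releases follow from E = mc² applied to the mass lost in each reaction. A rough check, using tabulated atomic masses (the mass values below are standard reference numbers assumed for illustration, not taken from this article; the fission-fragment masses in particular are approximate):

```python
# Sketch: energy released by the reactions above, from the mass defect (E = mc^2).
# Masses are in unified atomic mass units (u); 1 u corresponds to about 931.494 MeV.

U_TO_MEV = 931.494

# D-T fusion: 2H + 3H -> 4He + n
m_d, m_t = 2.014102, 3.016049
m_he4, m_n = 4.002602, 1.008665
fusion_defect_u = (m_d + m_t) - (m_he4 + m_n)
print(f"D-T fusion:    ~{fusion_defect_u * U_TO_MEV:.1f} MeV")   # ~17.6 MeV

# One U-235 fission channel: 235U + n -> 144Ba + 89Kr + 3n
m_u235 = 235.043930
m_ba144, m_kr89 = 143.922955, 88.917636   # approximate tabulated masses
fission_defect_u = (m_u235 + m_n) - (m_ba144 + m_kr89 + 3 * m_n)
print(f"U-235 fission: ~{fission_defect_u * U_TO_MEV:.0f} MeV")  # ~173 MeV, close to the 177 MeV quoted above
```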
Fission vs. Fusion Physics
Atomic nuclei are held together by two of the four fundamental forces of nature: the weak and strong nuclear forces. The total amount of energy held within the bonds of the nucleus is called binding energy. The more binding energy held within the bonds, the more stable the nucleus. Moreover, nuclei tend to become more stable by increasing their binding energy per nucleon.
The nucleus of an iron atom is among the most tightly bound found in nature, and it neither fuses nor splits readily. This is why iron is at the top of the binding energy curve. For atomic nuclei lighter than iron and nickel, energy can be extracted by combining them into heavier nuclei through nuclear fusion. In contrast, for atomic nuclei heavier than iron or nickel, energy can be released by splitting the heavy nuclei through nuclear fission.
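The binding energy curve mentioned here can be sketched with the semi-empirical (Weizsäcker) mass formula. The coefficients below are one common textbook fit, assumed purely for illustration; the formula is only a rough guide and is notoriously poor for very light nuclei:

```python
# Sketch: binding energy per nucleon from the semi-empirical mass formula,
# illustrating why the curve peaks near iron/nickel. Coefficients in MeV (one common fit).
A_V, A_S, A_C, A_A, A_P = 15.75, 17.8, 0.711, 23.7, 11.18

def binding_energy(Z, A):
    N = A - Z
    pairing = 0.0
    if Z % 2 == 0 and N % 2 == 0:
        pairing = +A_P / A ** 0.5   # even-even nuclei are extra bound
    elif Z % 2 == 1 and N % 2 == 1:
        pairing = -A_P / A ** 0.5   # odd-odd nuclei are less bound
    return (A_V * A
            - A_S * A ** (2 / 3)
            - A_C * Z * (Z - 1) / A ** (1 / 3)
            - A_A * (A - 2 * Z) ** 2 / A
            + pairing)

for name, Z, A in [("He-4", 2, 4), ("Fe-56", 26, 56), ("U-238", 92, 238)]:
    print(f"{name}: ~{binding_energy(Z, A) / A:.2f} MeV per nucleon")
# Iron comes out near the top of the curve (~8.8 MeV per nucleon here), while a heavy
# nucleus like uranium sits lower (~7.6), which is why splitting uranium releases energy.
```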
The notion of splitting the atom arose from New Zealand-born British physicist Ernest Rutherford's work, which also led to the discovery of the proton.
Conditions for Fission and Fusion
Fission can only occur in large isotopes that contain more neutrons than protons in their nuclei, which leaves the nucleus only marginally stable. Although scientists don't yet fully understand why this instability is so helpful for fission, the general theory is that the large number of protons creates a strong repulsive force between them and that too few or too many neutrons create "gaps" that weaken the nuclear bond, leading to decay (radiation). These large nuclei with more "gaps" can be "split" by the impact of thermal neutrons, so-called "slow" neutrons.
Conditions must be right for a fission reaction to occur. For fission to be self-sustaining, the substance must reach critical mass, the minimum amount of mass required; falling short of critical mass limits reaction length to mere microseconds. If critical mass is reached too quickly, meaning too many neutrons are released in nanoseconds, the reaction becomes purely explosive, and no powerful release of energy will occur.
Nuclear reactors are controlled fission systems designed to keep the neutron population in balance (neutrons carry no charge, so they cannot be confined by magnetic fields); the aim is a roughly 1:1 ratio of neutron release, meaning one new fission-inducing neutron emerges for each neutron consumed. Because this number fluctuates from one generation to the next, moderators and control rods must be used to slow down or speed up neutron activity.
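The importance of that neutron ratio can be illustrated with a toy multiplication-factor model (a deliberate simplification for illustration, not how reactor physics is actually computed):

```python
# Toy sketch of a chain reaction: each generation the neutron population is multiplied
# by k, the effective multiplication factor. k < 1 dies out, k = 1 holds steady
# (a controlled reactor), and k > 1 grows rapidly.

def neutron_population(k, generations, start=1000.0):
    n = start
    for _ in range(generations):
        n *= k
    return n

for k in (0.95, 1.00, 1.05):
    final = neutron_population(k, generations=100)
    print(f"k = {k:.2f}: ~{final:,.0f} neutrons after 100 generations (started with 1,000)")
# With k = 0.95 the population collapses toward zero, with k = 1.00 it stays at 1,000,
# and with k = 1.05 it has grown by a factor of more than a hundred.
```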
Fusion happens when two lighter elements are forced together by enormous energy (pressure and heat) until they fuse into another isotope and release energy. The energy needed to start a fusion reaction is so large that it takes an atomic explosion to produce this reaction. Still, once fusion begins, it can theoretically continue to produce energy as long as it is controlled and the basic fusing isotopes are supplied.
The most common form of fusion, which occurs in stars, is called "D-T fusion," referring to two hydrogen isotopes: deuterium and tritium. Deuterium has 2 neutrons and tritium has 3, more than the one proton of hydrogen. This makes the fusion process easier as only the charge between two protons needs to be overcome, because fusing the neutrons and the proton requires overcoming the natural repellent force of like-charged particles (protons have a positive charge, compared to neutrons' lack of charge) and a temperature — for an instant — of close to 81 million degrees Fahrenheit for D-T fusion (45 million Kelvin or slightly less in Celsius). For comparison, the sun's core temperature is roughly 27 million F (15 million C).
Once this temperature is reached, the resulting fusion has to be contained long enough to generate plasma, one of the four states of matter. The result of such containment is a release of energy from the D-T reaction, producing helium (a noble gas, inert to every reaction) and spare neutrons than can "seed" hydrogen for more fusion reactions. At present, there are no secure ways to induce the initial fusion temperature or contain the fusing reaction to achieve a steady plasma state, but efforts are ongoing.
A third type of reactor is called a breeder reactor. It works by using fission to create plutonium that can seed or serve as fuel for other reactors. Breeder reactors are used extensively in France, but are prohibitively expensive and require significant security measures, as the output of these reactors can be used for making nuclear weapons as well.
Fission and fusion nuclear reactions are chain reactions, meaning that one nuclear event causes at least one other nuclear reaction, and typically more. The result is an increasing cycle of reactions that can quickly become uncontrolled. This type of nuclear reaction can be multiple splits of heavy isotopes (e.g. ²³⁵U) or the merging of light isotopes (e.g. ²H and ³H).
Fission chain reactions happen when neutrons bombard unstable isotopes. This type of "impact and scatter" process is difficult to control, but the initial conditions are relatively simple to achieve. A fusion chain reaction develops only under extreme pressure and temperature conditions that remain stable by the energy released in the fusion process. Both the initial conditions and stabilizing fields are very difficult to carry out with current technology.
Fusion reactions release 3-4 times more energy than fission reactions. Although there are no Earth-based fusion systems, the sun's output is typical of fusion energy production in that it constantly converts hydrogen isotopes into helium, emitting spectra of light and heat. Fission generates its energy by breaking down one nuclear force (the strong one) and releasing tremendous amounts of heat than are used to heat water (in a reactor) to then generate energy (electricity). Fusion overcomes 2 nuclear forces (strong and weak), and the energy released can be used directly to power a generator; so not only is more energy released, it can also be harnessed for more direct application.
Nuclear Energy Use
The first experimental nuclear reactor for energy production began operating in Chalk River, Ontario, in 1947. The first nuclear energy facility in the U.S., the Experimental Breeder Reactor-1, was launched shortly thereafter, in 1951; it could light 4 bulbs. Three years later, in 1954, the U.S. launched its first nuclear submarine, the U.S.S. Nautilus, while the U.S.S.R. launched the world's first nuclear reactor for large-scale power generation, in Obninsk. The U.S. inaugurated its nuclear power production facility a year later, lighting up Arco, Idaho (pop. 1,000).
The first commercial facility for energy production using nuclear reactors was the Calder Hall Plant, in Windscale (now Sellafield), Great Britain. It was also the site of the first nuclear-related accident in 1957, when a fire broke out due to radiation leaks.
The first large-scale U.S. nuclear plant opened in Shippingport, Pennsylvania, in 1957. Between 1956 and 1973, nearly 40 power production nuclear reactors were launched in the U.S., the largest being Unit One of the Zion Nuclear Power Station in Illinois, with a capacity of 1,155 megawatts. No other reactors ordered since have come online, though others were launched after 1973.
The French launched their first nuclear reactor, the Phénix, capable of producing 250 megawatts of power, in 1973. The most powerful energy-producing reactor in the U.S. (1,315 MW) opened in 1976, at Trojan Power Plant in Oregon. By 1977, the U.S. had 63 nuclear plants in operation, providing 3% of the nation's energy needs. Another 70 were scheduled to come online by 1990.
Unit Two at Three Mile Island suffered a partial meltdown in 1979, releasing inert gases (xenon and krypton) into the environment. The anti-nuclear movement gained strength from the fears the incident caused. Fears were fueled even more in 1986, when Unit 4 at the Chernobyl plant in Ukraine suffered a runaway nuclear reaction that exploded the facility, spreading radioactive material throughout the area and a large part of Europe. During the 1990s, Germany and especially France expanded their nuclear plants, focusing on smaller and thus more controllable reactors. China launched its first 2 nuclear facilities in 2007, producing a total of 1,866 MW.
Although nuclear energy ranks third behind coal and hydropower in global wattage produced, the push to close nuclear plants, coupled with the increasing costs to build and operate such facilities, has created a pull-back on the use of nuclear energy for power. France leads the world in percentage of electricity produced by nuclear reactors, but in Germany, solar has overtaken nuclear as an energy producer.
The U.S. still has over 60 nuclear facilities in operation, but ballot initiatives and reactor ages have closed plants in Oregon and Washington, while dozens more are targeted by protesters and environmental protection groups. At present, only China appears to be expanding its number of nuclear plants, as it seeks to reduce its heavy dependence on coal (the major factor in its extremely high pollution rate) and seek an alternative to importing oil.
The fear of nuclear energy comes from its extremes, as both a weapon and power source. Fission from a reactor creates waste material that is inherently dangerous (see more below) and could be suitable for dirty bombs. Though several countries, such as Germany and France, have excellent track records with their nuclear facilities, other less positive examples, such as those seen in Three Mile Island, Chernobyl, and Fukushima, have made many reluctant to accept nuclear energy, even though it is much safer than fossil fuel. Fusion reactors could one day be the affordable, plentiful energy source that is needed, but only if the extreme conditions needed for creating fusion and managing it can be solved.
The byproduct of fission is radioactive waste that takes thousands of years to lose its dangerous levels of radiation. This means that nuclear fission reactors must also have safeguards for this waste and its transport to uninhabited storage or dump sites. For more information on this, read about the management of radioactive waste.
In nature, fusion occurs in stars, such as the sun. On Earth, nuclear fusion was first achieved in the creation of the hydrogen bomb. Fusion has also been used in different experimental devices, often with the hope of producing energy in a controlled fashion.
On the other hand, fission is a nuclear process that does not normally occur in nature, as it requires a large mass and an incident neutron. Even so, there have been examples of nuclear fission in natural reactors. This was discovered in 1972 when uranium deposits from an Oklo, Gabon, mine were found to have once sustained a natural fission reaction some 2 billion years ago.
In brief, if a fission reaction gets out of control, either it explodes or the reactor generating it melts down into a large pile of radioactive slag. Such explosions or meltdowns release tons of radioactive particles into the air and any neighboring surface (land or water), contaminating it every minute the reaction continues. In contrast, a fusion reaction that loses control (becomes unbalanced) slows down and drops temperature until it stops. This is what happens to stars as they burn their hydrogen into helium and lose these elements over thousands of centuries of expulsion. Fusion produces little radioactive waste. If there is any damage, it will happen to the immediate surroundings of the fusion reactor and little else.
It is far safer to use fusion to produce power, but fission is used because it takes less energy to split two atoms than it does to fuse two atoms. Also, the technical challenges involved in controlling fusion reactions have not been overcome yet.
Use of Nuclear Weapons
All nuclear weapons require a nuclear fission reaction to work, but "pure" fission bombs, those that use a fission reaction alone, are known as atomic, or atom, bombs. Atom bombs were first tested in New Mexico in 1945, during the height of World War II. In the same year, the United States used them as a weapon in Hiroshima and Nagasaki, Japan.
Since the atom bomb, most of the nuclear weapons that have been proposed and/or engineered have enhanced the fission reaction in one way or another (e.g., see boosted fission weapons, radiological bombs, and neutron bombs). Thermonuclear weaponry — which uses both fission and hydrogen-based fusion — is one of the better-known advancements. Though the notion of a thermonuclear weapon was proposed as early as 1941, it was not until the early 1950s that the hydrogen bomb (H-bomb) was first tested. Unlike atom bombs, hydrogen bombs have not been used in warfare, only tested (e.g., see Tsar Bomba).
To date, no nuclear weapon makes use of nuclear fusion alone, though governmental defense programs have put considerable research into such a possibility.
Fission is a powerful form of energy production, but it comes with built-in inefficiencies. The nuclear fuel, usually Uranium-235, is expensive to mine and purify. The fission reaction creates heat that is used to boil water for steam to turn a turbine that generates electricity. This transformation from heat energy to electrical energy is cumbersome and expensive. A third source of inefficiency is that clean-up and storage of nuclear waste is very expensive. Waste is radioactive, requiring proper disposal, and security must be tight to ensure public safety.
For fusion to occur, the atoms must be confined in a magnetic field and raised to a temperature of 100 million kelvin or more. Initiating fusion takes an enormous amount of energy (atom bombs and lasers are thought to be able to provide that "spark"), and the plasma must also be properly contained for long-term energy production. Researchers are still trying to overcome these challenges because fusion is a safer and more powerful energy production system than fission, meaning it would ultimately cost less than fission. | http://www.diffen.com/difference/Nuclear_Fission_vs_Nuclear_Fusion |
4.09375 | 1. From birth to 19 years of age, children and young people tend to follow a broad developmental plan. Although children and young people are different, the way they grow and develop is often quite similar. This means we can work out a pattern for development, and from this we can pinpoint particular skills or milestones that most children can do at different age ranges. Milestones describe when particular skills are achieved, such as walking, usually achieved by 18 months. These milestones have been drawn up by researchers looking at children's development and working out an average from their recordings. However, as children grow older the variations between individuals grow larger. This is especially true when it comes to learning skills such as reading or mathematics, but it is also true in terms of their emotional maturity, which makes it harder to draw up a pattern of development.
Babies at Birth
Most babies are born around the 40th week of pregnancy. Only 3% of babies arrive exactly on time. Some are a week early or a week late. Babies who are born earlier than the 37th week are known as premature. Premature babies are likely to need more time to reach the same developmental targets as babies born around the 40th week. Many people think that babies are helpless when they are born, but in reality they are born with the ability to do quite a few things. They can recognise their mother's voice and smell. They are able to cry to let everyone know they need help. They also actively learn about their new world through their senses, particularly touch, taste and sound.
These are things you may expect to observe in a new born:-
Reflexes
Babies are born with many reflexes, which are actions they do without thinking. Many reflexes are linked to survival. Here are some examples of these reflexes:-
• Swallowing and sucking reflexes: these ensure that the baby can feed and swallow milk.
• Rooting reflex: the baby will move its head to look for a nipple or teat.
• Grasp reflex: the baby will automatically put its fingers around an object that has touched the palm of its hand.
• Startle reflex: when babies hear a sudden sound or see a bright light, they will react by moving their arms outwards and clenching their fists.
• Walking and standing reflex: when babies are held upright with their feet on a firm surface, they usually make stepping movements.
• Falling reflex: this is known as the 'Moro reflex'. Babies will stretch out their arms suddenly and then clasp them inwards in any situation in which they feel as if they are falling.
Communication and Intellectual Development
Babies at birth cry in order to communicate their needs. They also begin to look around and react to sounds.
Social, Emotional and Behavioural Development
Babies and their primary carers, usually their mothers, begin to develop a strong, close bond from very early on. You might see that the baby at times stares at the mother, and that the mother is very aware of her baby.
Babies at one month
In just one month, babies have already changed. They may appear less curled up and more relaxed. Babies at one month have usually settled into a pattern. They sleep quite a lot of the time, but will gradually start to spend longer periods awake. They cry to communicate their needs, and their parents may be starting to understand the different types of cries. Babies too are learning about their parents or carers. They may stop crying when they hear soothing voices. They also try hard to focus on the face of whoever is holding them.
These are things you may expect to observe in a baby at 1 month:-
Some reflexes are not as strong as at birth.
Communication and Intellectual Development... | http://www.studymode.com/essays/Main-Stages-Of-Child-Development-From-796019.html |
4.3125 | Registers could potentially be the most important part of a computer. A register temporarily stores a value during the operation of a computer. The 8-bit computer described in this Instructable has two registers attached to its ALU, a register to store the current instruction and a register for the output of the computer.
Depending on the chip, a register will have 2 or 3 control pins. The registers that we will be using have two control pins: output enable and input enable (both active when low). When the output enable pin is connected to ground, the currently stored binary word is sent out across the output pins. When the input enable pin is connected to ground, the binary word present on the input pins is loaded into the register.
An example of the use of a register in a computer is the accumulator on the ALU (the arithmetic logic unit, which performs mathematical operations). The accumulator is like the computer's scratchpad: it stores the output of the ALU. The accumulator is also the first input for the ALU. The B register is the second input. For an addition operation, the first value is loaded into the accumulator. After that, the second value to be added is loaded into the B register. The outputs of the accumulator and B register are permanently enabled and feed into the ALU at all times. The final step for addition is to transfer the output of the operation into the accumulator.
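To make this load-add-store sequence concrete, here is a minimal software sketch of an 8-bit register with active-low enables driving an add operation. It is purely illustrative: the class and function names are my own invention, not taken from this build or from any chip's datasheet, and a real 74LS173-style register would also be clocked.

```python
# Minimal sketch of 8-bit registers feeding an ALU add (illustrative names only).

class Register:
    """8-bit register with active-low input enable and output enable."""
    def __init__(self):
        self.value = 0

    def load(self, bus_value, input_enable):
        # Input enable is active low: 0 means "capture the word on the bus".
        if input_enable == 0:
            self.value = bus_value & 0xFF

    def output(self, output_enable):
        # Output enable is active low: 0 drives the stored word out,
        # anything else leaves the outputs disconnected (returned as None).
        return self.value if output_enable == 0 else None


def alu_add(a, b):
    """8-bit addition that wraps around, like a real 8-bit adder."""
    return (a + b) & 0xFF


accumulator = Register()
b_register = Register()

accumulator.load(0x2A, input_enable=0)    # step 1: first value into the accumulator
b_register.load(0x07, input_enable=0)     # step 2: second value into the B register
result = alu_add(accumulator.value, b_register.value)  # both outputs feed the ALU
accumulator.load(result, input_enable=0)  # step 3: ALU output back into the accumulator

print(hex(accumulator.value))             # 0x31 (42 + 7)
```

The three load calls mirror the three hardware steps described above; in the real computer the same effect is produced by pulling the appropriate input-enable lines low in sequence.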
Registers all operate on a shared set of data lines called the bus. The bus is a group of wires equal in number to the CPU's architecture. This is really putting the cart before the horse, considering that bus width is the defining measurement of CPU architecture. Since a digital 1 means positive voltage and a 0 means grounding, it would be impossible to have all registers share the same bus without giving them the ability to selectively connect to and disconnect from the bus. Luckily for us, there is a third, high-impedance state that effectively disconnects a chip from the line, and it works great for this. Enter the tri-state buffer: a chip that allows you to selectively connect groups of wires to a bus. Using some of these tri-state buffers, you can have every register and chip in the entire computer that needs to communicate share the same wires as a bus. In the case of my computer, it was an 8-wire-wide band of breadboard slots that spanned the bottom pins of the breadboard. Experiment carefully with buses: since they carry all of the information from piece to piece in the computer, a faulty bus could mean erroneous data that ripples down the line.
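As a rough software analogy for the tri-state bus (again a sketch with made-up names, not code from this project), every device either drives a value onto the shared wires or sits in the high-impedance state, and the control logic must ensure that only one driver is active at a time:

```python
# Sketch of a shared 8-bit bus with tri-state devices (illustrative only).

def resolve_bus(drivers):
    """drivers: list of device outputs, where None represents high impedance."""
    active = [value for value in drivers if value is not None]
    if len(active) > 1:
        # Two enabled outputs on one bus is bus contention; on real hardware
        # the chips fight electrically and the data becomes garbage.
        raise RuntimeError("bus contention: more than one driver enabled")
    return active[0] if active else 0  # a floating bus is read here as 0

# Register A drives the bus; register B and the RAM stay high impedance.
bus_value = resolve_bus([0x5C, None, None])
print(hex(bus_value))  # 0x5c
```

On real hardware there is no referee raising an exception; contention simply corrupts the data (and can damage chips), which is why only one output enable should ever be pulled low at a given moment.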
The great thing about building an 8-bit computer is that most parts will cost you less than a dollar a piece if you buy them from the correct place. I purchased 90% of my parts from Jameco Electronics and I have been completely satisfied with their services. The only parts I have really bought from anywhere else are the breadboards and breadboard wires (and the Numitron tubes). These can be found considerably cheaper on sites like Amazon. Always be sure to make sure the parts that you are ordering are the correct ones. Every part that you buy should have a datasheet available online that explains all of the functions and limitations of the item that you are buying. Make sure to keep these organized as you will be using many datasheets in the construction of your computer. To help you with your computer I will list the parts that I used for mine:
4-Bit Counter:
74161 - http://www.jameco.com/webapp/wcs/stores/servlet/ProductDisplay?freeText=74161&langId=-1&storeId=10001&productId=49664&search_type=jamecoall&catalogId=10001&ddkey=http:StoreCatalogDrillDownView
4-Bit Register (I use two for each 8-bit register):
74LS173 - http://www.jameco.com/webapp/wcs/stores/servlet/ProductDisplay?freeText=74LS173&langId=-1&storeId=10001&productId=46922&search_type=jamecoall&catalogId=10001&ddkey=http:StoreCatalogDrillDownView
Quad 2-to-1 Multiplexer:
74LS157 - http://www.jameco.com/webapp/wcs/stores/servlet/Product_10001_10001_46771_-1
16x8 RAM (output needs to be inverted):
74189 - http://www.jameco.com/webapp/wcs/stores/servlet/ProductDisplay?freeText=74189&langId=-1&storeId=10001&productId=49883&search_type=jamecoall&catalogId=10001&ddkey=http:StoreCatalogDrillDownView
4-Bit Adder:
74LS283 - http://www.jameco.com/webapp/wcs/stores/servlet/ProductDisplay?freeText=74LS283&langId=-1&storeId=10001&productId=47423&search_type=all&catalogId=10001&ddkey=http:StoreCatalogDrillDownView
Octal Tri-State Buffer:
74S244 - http://www.jameco.com/webapp/wcs/stores/servlet/Product_10001_10001_910750_-1
Quad XOR Gate:
74LS86 - http://www.jameco.com/webapp/wcs/stores/servlet/Product_10001_10001_295751_-1
Quad AND Gate:
74LS08 - http://www.jameco.com/webapp/wcs/stores/servlet/Product_10001_10001_295401_-1
Quad NOR Gate:
74LS02 - http://www.jameco.com/webapp/wcs/stores/servlet/Product_10001_10001_283741_-1
Hex Inverter:
74LS04 - http://www.jameco.com/webapp/wcs/stores/servlet/Product_10001_10001_283792_-1
Presettable Up/Down Counter:
CD4029 - http://www.jameco.com/webapp/wcs/stores/servlet/ProductDisplay?freeText=4029&langId=-1&storeId=10001&productId=12925&search_type=jamecoall&catalogId=10001&ddkey=http:StoreCatalogDrillDownView
Triple 3-Input NAND Gate:
74LS10 - http://www.jameco.com/webapp/wcs/stores/servlet/Product_10001_10001_295427_-1 | http://www.instructables.com/id/How-to-Build-an-8-Bit-Computer/?ALLSTEPS |
4.46875 | Reptiles originated approximately 300 million years ago during the Carboniferous period. One of the oldest-known amniotes is Casineria, which had both amphibian and reptilian characteristics. One of the earliest undisputed reptiles was Hylonomus. Soon after the first amniotes appeared, they diverged into three groups (synapsids, anapsids, and diapsids) during the Permian period. The Permian period also saw a second major divergence of diapsid reptiles into archosaurs (predecessors of crocodilians and dinosaurs) and lepidosaurs (predecessors of snakes and lizards). These groups remained inconspicuous until the Triassic period when the archosaurs became the dominant terrestrial group due to the extinction of large-bodied anapsids and synapsids during the Permian-Triassic extinction. About 250 million years ago, archosaurs radiated into the dinosaurs and the pterosaurs.
Although they are sometimes mistakenly called dinosaurs, the pterosaurs were distinct from true dinosaurs. Pterosaurs had a number of adaptations that allowed for flight, including hollow bones (birds also exhibit hollow bones, a case of convergent evolution). Their wings were formed by membranes of skin that attached to the long, fourth finger of each arm and extended along the body to the legs.
The dinosaurs were a diverse group of terrestrial reptiles with more than 1,000 species identified to date. Paleontologists continue to discover new species of dinosaurs. Some dinosaurs were quadrupeds; others were bipeds. Some were carnivorous, whereas others were herbivorous. Dinosaurs laid eggs; a number of nests containing fossilized eggs have been found. It is not known whether dinosaurs were endotherms or ectotherms. However, given that modern birds are endothermic, the dinosaurs that served as ancestors to birds were probably endothermic as well. Some fossil evidence exists for dinosaurian parental care. Comparative biology supports this hypothesis since the archosaur birds and crocodilians display parental care.
Dinosaurs dominated the Mesozoic Era, which was known as the "Age of Reptiles." The dominance of dinosaurs lasted until the end of the Cretaceous period, the end of the Mesozoic Era. The Cretaceous-Tertiary extinction resulted in the loss of most of the large-bodied animals of the Mesozoic Era. Birds are the only living descendants of one of the major clades of dinosaurs.
| https://www.boundless.com/biology/textbooks/boundless-biology-textbook/vertebrates-29/reptiles-174/evolution-of-reptiles-673-11895/ |
4.75 | After the sun spun to light, the planets of the solar system began to form. But it took another hundred million years for Earth's moon to spring into existence. There are three theories as to how our planet's satellite could have been created: the giant impact hypothesis, the co-formation theory and the capture theory.
Giant impact hypothesis
This is the prevailing theory supported by the scientific community. Like the other planets, the Earth formed from the leftover cloud of dust and gas orbiting the young sun. The early solar system was a violent place, and a number of bodies were created that never made it to full planetary status. According to the giant impact hypothesis, one of these crashed into Earth not long after the young planet was created.
Known as Theia, the Mars-size body collided with Earth, throwing vaporized chunks of the young planet's crust into space. Gravity bound the ejected particles together, creating a moon that is the largest in the solar system in relation to its host planet. This sort of formation would explain why the moon is made up predominantly of lighter elements, making it less dense than Earth — the material that formed it came from the crust, while leaving the planet's rocky core untouched. As the material drew together around what was left of Theia's core, it would have centered near Earth's ecliptic plane, the path the sun travels through the sky, which is where the moon orbits today.
Co-formation theory
Moons can also form at the same time as their parent planet. Under such an explanation, gravity would have caused material in the early solar system to draw together at the same time as gravity bound particles together to form Earth. Such a moon would have a very similar composition to the planet, and would explain the moon's present location. However, although Earth and the moon share much of the same material, the moon is much less dense than our planet, which would likely not be the case if both started with the same heavy elements at their core.
Capture theory
Perhaps Earth's gravity snagged a passing body, as happened with other moons in the solar system, such as the Martian moons of Phobos and Deimos. Under the capture theory, a rocky body formed elsewhere in the solar system could have been drawn into orbit around the Earth. The capture theory would explain the differences in the composition of the Earth and its moon. However, such orbiters are often oddly shaped, rather than being spherical bodies like the moon. Their paths don't tend to line up with the ecliptic of their parent planet, also unlike the moon.
Although the co-formation theory and the capture theory both explain some elements of the existence of the moon, they leave many questions unanswered. At present, the giant impact hypothesis seems to cover many of these questions, making it the best model to fit the scientific evidence for how the moon was created. | http://www.space.com/19275-moon-formation.html |
4.09375 | The Rankine cycle is the fundamental operating cycle of all power plants where an operating fluid is continuously evaporated and condensed. The selection of operating fluid depends mainly on the available temperature range. Figure 1 shows the idealized Rankine cycle.
The pressure-enthalpy (p-h) and temperature-entropy (T-s) diagrams of this cycle are given in Figure 2. The Rankine cycle operates in the following steps:
1-2-3 Isobaric Heat Transfer. High pressure liquid enters the boiler from the feed pump (1) and is heated to the saturation temperature (2). Further addition of energy causes evaporation of the liquid until it is fully converted to saturated steam (3).
3-4 Isentropic Expansion. The vapor is expanded in the turbine, thus producing work which may be converted to electricity. In practice, the expansion is limited by the temperature of the cooling medium and by the erosion of the turbine blades by liquid entrainment in the vapor stream as the process moves further into the two-phase region. Exit vapor qualities should be greater than 90%.
4-5 Isobaric Heat Rejection. The vapor-liquid mixture leaving the turbine (4) is condensed at low pressure, usually in a surface condenser using cooling water. In well designed and maintained condensers, the pressure of the vapor is well below atmospheric pressure, approaching the saturation pressure of the operating fluid at the cooling water temperature.
5-1 Isentropic Compression. The pressure of the condensate is raised in the feed pump. Because of the low specific volume of liquids, the pump work is relatively small and often neglected in thermodynamic calculations.
The efficiency of power cycles is defined as the net work output divided by the heat added in the boiler,
η = (|Wt| − |Wp|) / Qin = [(h3 − h4) − (h1 − h5)] / (h3 − h1).
Values of heat and work can be determined by applying the First Law of Thermodynamics to each step. The steam quality x at the turbine outlet is determined from the assumption of isentropic expansion, i.e., the entropy at the turbine outlet equals that at the turbine inlet:
s4 = s3 = x·Sv + (1 − x)·Sl*, so that x = (s3 − Sl*) / (Sv − Sl*),
where Sv is the entropy of vapor and Sl* the entropy of liquid, both evaluated at the condenser pressure.
The efficiency of the ideal Rankine cycle as described in the previous section is close to the Carnot efficiency (see Carnot Cycle). In real plants, each stage of the Rankine cycle is associated with irreversible processes, reducing the overall efficiency. Turbine and pump irreversibilities can be included in the calculation of the overall cycle efficiency by defining a turbine efficiency according to Figure 3,
ηt = (h3 − h4,act) / (h3 − h4,is),
where subscript act indicates actual values and subscript is indicates isentropic values, and a pump efficiency
ηp = (h1,is − h5) / (h1,act − h5).
If ηt and ηp are known, the actual enthalpy after the compression and expansion steps can be determined from the values for the isentropic processes. The turbine efficiency directly reduces the work produced in the turbine and, therefore, the overall efficiency. The inefficiency of the pump increases the enthalpy of the liquid leaving the pump and, therefore, reduces the amount of energy required to evaporate the liquid. However, the energy to drive the pump is usually more expensive than the energy to feed the boiler.
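To illustrate how these definitions fit together, here is a small numerical sketch of the cycle. The boiler and condenser pressures (8 MPa and 10 kPa), the rounded steam-table property values, and the 85% turbine and pump efficiencies are all assumptions chosen for illustration; they are not figures taken from this article.

```python
# Illustrative Rankine cycle calculation with rounded steam-table values.
# States follow the article: 1 = pump outlet, 3 = saturated steam at the
# turbine inlet, 4 = turbine outlet, 5 = saturated liquid leaving the condenser.
# Assumed pressures: boiler 8 MPa, condenser 10 kPa.

h3, s3 = 2758.0, 5.743            # saturated vapor at 8 MPa (kJ/kg, kJ/kg.K)
h5, v5 = 191.8, 0.00101           # saturated liquid at 10 kPa (kJ/kg, m^3/kg)
s_liq, s_fg, h_fg = 0.649, 7.501, 2392.8   # liquid/evaporation values at 10 kPa

# Pump (5 -> 1): for a nearly incompressible liquid, w_pump ~ v * dP.
w_pump = v5 * (8000.0 - 10.0)     # kPa * m^3/kg = kJ/kg
h1 = h5 + w_pump

# Turbine (3 -> 4): isentropic expansion, s4 = s3, fixes the exit quality x.
x4 = (s3 - s_liq) / s_fg
h4 = h5 + x4 * h_fg
w_turbine = h3 - h4

# Heat added in the boiler and ideal cycle efficiency.
q_in = h3 - h1
eta_ideal = (w_turbine - w_pump) / q_in

# Repeat with assumed turbine and pump efficiencies of 85%.
eta_t, eta_p = 0.85, 0.85
w_pump_act = w_pump / eta_p
h1_act = h5 + w_pump_act
eta_actual = (eta_t * w_turbine - w_pump_act) / (h3 - h1_act)

print(f"turbine exit quality x = {x4:.2f}")        # ~0.68
print(f"ideal efficiency       = {eta_ideal:.2f}")  # ~0.36
print(f"actual efficiency      = {eta_actual:.2f}") # ~0.31
```

The low exit quality (about 0.68) in this sketch is exactly the blade-erosion problem mentioned above, and it is one reason real plants superheat the steam, as discussed in the following paragraphs.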
Even the most sophisticated boilers transform only 40% of the fuel energy into useable steam energy. There are two main reasons for this wastage:
The combustion gas temperatures are between 1000°C and 2000°C, which is considerably higher than the highest vapor temperatures. The transfer of heat across a large temperature difference increases the entropy.
Combustion (oxidation) at technically feasible temperatures is highly irreversible.
Since the heat transfer surface in the condenser has a finite value, the condensation will occur at a temperature higher than the temperature of the cooling medium. Again, heat transfer occurs across a temperature difference, causing the generation of entropy. The deposition of dirt in condensers during operation with cooling water reduces the efficiency.
The net work produced in the Rankine cycle is represented by the area of the cycle process in Figure 2. Obviously, this area can be increased by increasing the pressure in the boiler and reducing the pressure in the condenser.
The irreversibility of any process is reduced if it is performed as close as possible to the temperatures of the high temperature and low temperature reservoirs. This is achieved by operating the condenser at subatmospheric pressure. The temperature in the boiler is limited by the saturation pressure. Further increase in temperature is possible by superheating the saturated vapor, see Figure 4.
This has the additional advantage that the vapor quality after the turbine is increased and, therefore the erosion of the turbine blades is reduced. It is quite common to reheat the vapor after expansion in the high pressure turbine and expand the reheated vapor in a second, low pressure turbine.
The cold liquid leaving the feed pump is mixed with the saturated liquid in the boiler and/or re-heated to the boiling temperature. The resulting irreversibility reduces the efficiency of the boiler. According to the Carnot process, the highest efficiency is reached if heat transfer occurs isothermally. To preheat the feed liquid to its saturation temperature, bleed vapor from various positions of the turbine is passed through external heat exchangers (regenerators), as shown in Figure 5.
Ideally, the temperature of the bleed steam should be as close as possible to the temperature of the feed liquid.
The high combustion temperature of the fuel is better utilized if a gas turbine or Brayton engine is used as "topping cycle" in conjunction with a Rankine cycle. In this case, the hot gas leaving the turbine is used to provide the energy input to the boiler. In co-generation systems, the energy rejected by the Rankine cycle is used for space heating, process steam or other low temperature applications. | http://www.thermopedia.com/content/1072/ |
4.09375 | A black American writer, J. Saunders Redding, describes the arrival of a ship in North America in the year 1619:
Sails furled, flag drooping at her rounded stern, she rode the tide in from the sea. She was a strange ship, indeed, by all accounts, a frightening ship, a ship of mystery. Whether she was trader, privateer, or man-of-war no one knows. Through her bulwarks black-mouthed cannon yawned. The flag she flew was Dutch; her crew a motley. Her port of call, an English settlement, Jamestown, in the colony of Virginia. She came, she traded, and shortly afterwards was gone. Probably no ship in modern history has carried a more portentous freight. Her cargo? Twenty slaves.
There is not a country in world history in which racism has been more important, for so long a time, as the United States. And the problem of "the color line," as W. E. B. Du Bois put it, is still with us. So it is more than a purely historical question to ask: How does it start?—and an even more urgent question: How might it end? Or, to put it differently: Is it possible for whites and blacks to live together without hatred?
If history can help answer these questions, then the beginnings of slavery in North America—a continent where we can trace the coming of the first whites and the first blacks—might supply at least a few clues.
Some historians think those first blacks in Virginia were considered as servants, like the white indentured servants brought from Europe. But the strong probability is that, even if they were listed as "servants" (a more familiar category to the English), they were viewed as being different from white servants, were treated differently, and in fact were slaves. In any case, slavery developed quickly into a regular institution, into the normal labor relation of blacks to whites in the New World. With it developed that special racial feeling—whether hatred, or contempt, or pity, or patronization—that accompanied the inferior position of blacks in America for the next 350 years —that combination of inferior status and derogatory thought we call racism.
Everything in the experience of the first white settlers acted as a pressure for the enslavement of blacks.
The Virginians of 1619 were desperate for labor, to grow enough food to stay alive. Among them were survivors from the winter of 1609-1610, the "starving time," when, crazed for want of food, they roamed the woods for nuts and berries, dug up graves to eat the corpses, and died in batches until five hundred colonists were reduced to sixty.
In the Journals of the House of Burgesses of Virginia is a document of 1619 which tells of the first twelve years of the Jamestown colony. The first settlement had a hundred persons, who had one small ladle of barley per meal. When more people arrived, there was even less food. Many of the people lived in cavelike holes dug into the ground, and in the winter of 1609-1610, they were
...driven through insufferable hunger to eat those things which nature most abhorred, the flesh and excrements of man as well of our own nation as of an Indian, digged by some out of his grave after he had laid buried three days and wholly devoured him; others, envying the better state of body of any whom hunger has not yet so much wasted as their own, lay wait and threatened to kill and eat them; one among them slew his wife as she slept in his bosom, cut her in pieces, salted her and fed upon her till he had clean devoured all parts saving her head...
A petition by thirty colonists to the House of Burgesses, complaining against the twelve-year governorship of Sir Thomas Smith, said:
In those 12 years of Sir Thomas Smith, his government, we aver that the colony for the most part remained in great want and misery under most severe and cruel laws... The allowance in those times for a man was only eight ounces of meale and half a pint of peas for a day... mouldy, rotten, full of cobwebs and maggots, loathsome to man and not fit for beasts, which forced many to flee for relief to the savage enemy, who being taken again were put to sundry deaths as by hanging, shooting and breaking upon the wheel... of whom one for stealing two or three pints of oatmeal had a bodkin thrust through his tongue and was tied with a chain to a tree until he starved...
The Virginians needed labor, to grow corn for subsistence, to grow tobacco for export. They had just figured out how to grow tobacco, and in 1617 they sent off the first cargo to England. Finding that, like all pleasurable drugs tainted with moral disapproval, it brought a high price, the planters, despite their high religious talk, were not going to ask questions about something so profitable.
They couldn't force the Indians to work for them, as Columbus had done. They were outnumbered, and while, with superior firearms, they could massacre Indians, they would face massacre in return. They could not capture them and keep them enslaved; the Indians were tough, resourceful, defiant, and at home in these woods, as the transplanted Englishmen were not.
White servants had not yet been brought over in sufficient quantity. Besides, they did not come out of slavery, and did not have to do more than contract their labor for a few years to get their passage and a start in the New World. As for the free white settlers, many of them were skilled craftsmen, or even men of leisure back in England, who were so little inclined to work the land that John Smith, in those early years, had to declare a kind of martial law, organize them into work gangs, and force them into the fields for survival.
There may have been a kind of frustrated rage at their own ineptitude, at the Indian superiority at taking care of themselves, that made the Virginians especially ready to become the masters of slaves. Edmund Morgan imagines their mood as he writes in his book American Slavery, American Freedom:
If you were a colonist, you knew that your technology was superior to the Indians'. You knew that you were civilized, and they were savages... But your superior technology had proved insufficient to extract anything. The Indians, keeping to themselves, laughed at your superior methods and lived from the land more abundantly and with less labor than you did... And when your own people started deserting in order to live with them, it was too much... So you killed the Indians, tortured them, burned their villages, burned their cornfields. It proved your superiority, in spite of your failures. And you gave similar treatment to any of your own people who succumbed to their savage ways of life. But you still did not grow much corn...
Black slaves were the answer. And it was natural to consider imported blacks as slaves, even if the institution of slavery would not be regularized and legalized for several decades. Because, by 1619, a million blacks had already been brought from Africa to South America and the Caribbean, to the Portuguese and Spanish colonies, to work as slaves. Fifty years before Columbus, the Portuguese took ten African blacks to Lisbon—this was the start of a regular trade in slaves. African blacks had been stamped as slave labor for a hundred years. So it would have been strange if those twenty blacks, forcibly transported to Jamestown, and sold as objects to settlers anxious for a steadfast source of labor, were considered as anything but slaves.
Their helplessness made enslavement easier. The Indians were on their own land. The whites were in their own European culture. The blacks had been torn from their land and culture, forced into a situation where the heritage of language, dress, custom, family relations, was bit by bit obliterated except for remnants that blacks could hold on to by sheer, extraordinary persistence.
Was their culture inferior—and so subject to easy destruction? Inferior in military capability, yes —vulnerable to whites with guns and ships. But in no other way—except that cultures that are different are often taken as inferior, especially when such a judgment is practical and profitable. Even militarily, while the Westerners could secure forts on the African coast, they were unable to subdue the interior and had to come to terms with its chiefs.
The African civilization was as advanced in its own way as that of Europe. In certain ways, it was more admirable; but it also included cruelties, hierarchical privilege, and the readiness to sacrifice human lives for religion or profit. It was a civilization of 100 million people, using iron implements and skilled in farming. It had large urban centers and remarkable achievements in weaving, ceramics, sculpture.
European travelers in the sixteenth century were impressed with the African kingdoms of Timbuktu and Mali, already stable and organized at a time when European states were just beginning to develop into the modern nation. In 1563, Ramusio, secretary to the rulers in Venice, wrote to the Italian merchants: "Let them go and do business with the King of Timbuktu and Mali and there is no doubt that they will be well-received there with their ships and their goods and treated well, and granted the favours that they ask..."
A Dutch report, around 1602, on the West African kingdom of Benin, said: "The Towne seemeth to be very great, when you enter it. You go into a great broad street, not paved, which seemeth to be seven or eight times broader than the Warmoes Street in Amsterdam. ...The Houses in this Towne stand in good order, one close and even with the other, as the Houses in Holland stand."
The inhabitants of the Guinea Coast were described by one traveler around 1680 as "very civil and good-natured people, easy to be dealt with, condescending to what Europeans require of them in a civil way, and very ready to return double the presents we make them."
Africa had a kind of feudalism, like Europe based on agriculture, and with hierarchies of lords and vassals. But African feudalism did not come, as did Europe's, out of the slave societies of Greece and Rome, which had destroyed ancient tribal life. In Africa, tribal life was still powerful, and some of its better features—a communal spirit, more kindness in law and punishment—still existed. And because the lords did not have the weapons that European lords had, they could not command obedience as easily.
In his book The African Slave Trade, Basil Davidson contrasts law in the Congo in the early sixteenth century with law in Portugal and England. In those European countries, where the idea of private property was becoming powerful, theft was punished brutally. In England, even as late as 1740, a child could be hanged for stealing a rag of cotton. But in the Congo, communal life persisted, the idea of private property was a strange one, and thefts were punished with fines or various degrees of servitude. A Congolese leader, told of the Portuguese legal codes, asked a Portuguese once, teasingly: "What is the penalty in Portugal for anyone who puts his feet on the ground?"
Slavery existed in the African states, and it was sometimes used by Europeans to justify their own slave trade. But, as Davidson points out, the "slaves" of Africa were more like the serfs of Europe —in other words, like most of the population of Europe. It was a harsh servitude, but they had rights which slaves brought to America did not have, and they were "altogether different from the human cattle of the slave ships and the American plantations." In the Ashanti Kingdom of West Africa, one observer noted that "a slave might marry; own property; himself own a slave; swear an oath; be a competent witness and ultimately become heir to his master... An Ashanti slave, nine cases out of ten, possibly became an adopted member of the family, and in time his descendants so merged and intermarried with the owner's kinsmen that only a few would know their origin."
One slave trader, John Newton (who later became an antislavery leader), wrote about the people of what is now Sierra Leone:
The state of slavery, among these wild barbarous people, as we esteem them, is much milder than in our colonies. For as, on the one hand, they have no land in high cultivation, like our West India plantations, and therefore no call for that excessive, unintermitted labour, which exhausts our slaves: so, on the other hand, no man is permitted to draw blood even from a slave.
African slavery is hardly to be praised. But it was far different from plantation or mining slavery in the Americas, which was lifelong, morally crippling, destructive of family ties, without hope of any future. African slavery lacked two elements that made American slavery the most cruel form of slavery in history: the frenzy for limitless profit that comes from capitalistic agriculture; the reduction of the slave to less than human status by the use of racial hatred, with that relentless clarity based on color, where white was master, black was slave.
In fact, it was because they came from a settled culture, of tribal customs and family ties, of communal life and traditional ritual, that African blacks found themselves especially helpless when removed from this. They were captured in the interior (frequently by blacks caught up in the slave trade themselves), sold on the coast, then shoved into pens with blacks of other tribes, often speaking different languages.
The conditions of capture and sale were crushing affirmations to the black African of his helplessness in the face of superior force. The marches to the coast, sometimes for 1,000 miles, with people shackled around the neck, under whip and gun, were death marches, in which two of every five blacks died. On the coast, they were kept in cages until they were picked and sold. One John Barbot, at the end of the seventeenth century, described these cages on the Gold Coast:
As the slaves come down to Fida from the inland country, they are put into a booth or prison... near the beach, and when the Europeans are to receive them, they are brought out onto a large plain, where the ship's surgeons examine every part of everyone of them, to the smallest member, men and women being stark naked... Such as are allowed good and sound are set on one side... marked on the breast with a red- hot iron, imprinting the mark of the French, English or Dutch companies... The branded slaves after this are returned to their former booths where they await shipment, sometimes 10-15 days...
Then they were packed aboard the slave ships, in spaces not much bigger than coffins, chained together in the dark, wet slime of the ship's bottom, choking in the stench of their own excrement. Documents of the time describe the conditions:
The height, sometimes, between decks, was only eighteen inches; so that the unfortunate human beings could not turn around, or even on their sides, the elevation being less than the breadth of their shoulders; and here they are usually chained to the decks by the neck and legs. In such a place the sense of misery and suffocation is so great, that the Negroes... are driven to frenzy.
On one occasion, hearing a great noise from belowdecks where the blacks were chained together, the sailors opened the hatches and found the slaves in different stages of suffocation, many dead, some having killed others in desperate attempts to breathe. Slaves often jumped overboard to drown rather than continue their suffering. To one observer a slave-deck was "so covered with blood and mucus that it resembled a slaughter house."
Under these conditions, perhaps one of every three blacks transported overseas died, but the huge profits (often double the investment on one trip) made it worthwhile for the slave trader, and so the blacks were packed into the holds like fish.
First the Dutch, then the English, dominated the slave trade. (By 1795 Liverpool had more than a hundred ships carrying slaves and accounted for half of all the European slave trade.) Some Americans in New England entered the business, and in 1637 the first American slave ship, the Desire, sailed from Marblehead. Its holds were partitioned into racks, 2 feet by 6 feet, with leg irons and bars.
By 1800, 10 to 15 million blacks had been transported as slaves to the Americas, representing perhaps one-third of those originally seized in Africa. It is roughly estimated that Africa lost 50 million human beings to death and slavery in those centuries we call the beginnings of modern Western civilization, at the hands of slave traders and plantation owners in Western Europe and America, the countries deemed the most advanced in the world.
In the year 1610, a Catholic priest in the Americas named Father Sandoval wrote back to a church functionary in Europe to ask if the capture, transport, and enslavement of African blacks was legal by church doctrine. A letter dated March 12, 1610, from Brother Luis Brandaon to Father Sandoval gives the answer:
Your Reverence writes me that you would like to know whether the Negroes who are sent to your parts have been legally captured. To this I reply that I think your Reverence should have no scruples on this point, because this is a matter which has been questioned by the Board of Conscience in Lisbon, and all its members are learned and conscientious men. Nor did the bishops who were in Sao Thome, Cape Verde, and here in Loando—all learned and virtuous men—find fault with it. We have been here ourselves for forty years and there have been among us very learned Fathers... never did they consider the trade as illicit. Therefore we and the Fathers of Brazil buy these slaves for our service without any scruple...
With all of this—the desperation of the Jamestown settlers for labor, the impossibility of using Indians and the difficulty of using whites, the availability of blacks offered in greater and greater numbers by profit-seeking dealers in human flesh, and with such blacks possible to control because they had just gone through an ordeal which if it did not kill them must have left them in a state of psychic and physical helplessness—is it any wonder that such blacks were ripe for enslavement?
And under these conditions, even if some blacks might have been considered servants, would blacks be treated the same as white servants?
The evidence, from the court records of colonial Virginia, shows that in 1630 a white man named Hugh Davis was ordered "to be soundly whipt... for abusing himself... by defiling his body in lying with a Negro." Ten years later, six servants and "a negro of Mr. Reynolds" started to run away. While the whites received lighter sentences, "Emanuel the Negro to receive thirty stripes and to be burnt in the cheek with the letter R, and to work in shackle one year or more as his master shall see cause."
Although slavery was not yet regularized or legalized in those first years, the lists of servants show blacks listed separately. A law passed in 1639 decreed that "all persons except Negroes" were to get arms and ammunition—probably to fight off Indians. When in 1640 three servants tried to run away, the two whites were punished with a lengthening of their service. But, as the court put it, "the third being a negro named John Punch shall serve his master or his assigns for the time of his natural life." Also in 1640, we have the case of a Negro woman servant who begot a child by Robert Sweat, a white man. The court ruled "that the said negro woman shall be whipt at the whipping post and the said Sweat shall tomorrow in the forenoon do public penance for his offense at James city church..."
This unequal treatment, this developing combination of contempt and oppression, feeling and action, which we call "racism"—was this the result of a "natural" antipathy of white against black? The question is important, not just as a matter of historical accuracy, but because any emphasis on "natural" racism lightens the responsibility of the social system. If racism can't be shown to be natural, then it is the result of certain conditions, and we are impelled to eliminate those conditions.
We have no way of testing the behavior of whites and blacks toward one another under favorable conditions—with no history of subordination, no money incentive for exploitation and enslavement, no desperation for survival requiring forced labor. All the conditions for black and white in seventeenth-century America were the opposite of that, all powerfully directed toward antagonism and mistreatment. Under such conditions even the slightest display of humanity between the races might be considered evidence of a basic human drive toward community.
Sometimes it is noted that, even before 1600, when the slave trade had just begun, before Africans were stamped by it—literally and symbolically—the color black was distasteful. In England, before 1600, it meant, according to the Oxford English Dictionary: "Deeply stained with dirt; soiled, dirty, foul. Having dark or deadly purposes, malignant; pertaining to or involving death, deadly; baneful, disastrous, sinister. Foul, iniquitous, atrocious, horribly wicked. Indicating disgrace, censure, liability to punishment, etc." And Elizabethan poetry often used the color white in connection with beauty.
It may be that, in the absence of any other overriding factor, darkness and blackness, associated with night and unknown, would take on those meanings. But the presence of another human being is a powerful fact, and the conditions of that presence are crucial in determining whether an initial prejudice, against a mere color, divorced from humankind, is turned into brutality and hatred.
In spite of such preconceptions about blackness, in spite of special subordination of blacks in the Americas in the seventeenth century, there is evidence that where whites and blacks found themselves with common problems, common work, common enemy in their master, they behaved toward one another as equals. As one scholar of slavery, Kenneth Stampp, has put it, Negro and white servants of the seventeenth century were "remarkably unconcerned about the visible physical differences."
Black and white worked together, fraternized together. The very fact that laws had to be passed after a while to forbid such relations indicates the strength of that tendency. In 1661 a law was passed in Virginia that "in case any English servant shall run away in company of any Negroes" he would have to give special service for extra years to the master of the runaway Negro. In 1691, Virginia provided for the banishment of any "white man or woman being free who shall intermarry with a negro, mulatto, or Indian man or woman bond or free."
There is an enormous difference between a feeling of racial strangeness, perhaps fear, and the mass enslavement of millions of black people that took place in the Americas. The transition from one to the other cannot be explained easily by "natural" tendencies. It is not hard to understand as the outcome of historical conditions.
Slavery grew as the plantation system grew. The reason is easily traceable to something other than natural racial repugnance: the number of arriving whites, whether free or indentured servants (under four to seven years contract), was not enough to meet the need of the plantations. By 1700, in Virginia, there were 6,000 slaves, one-twelfth of the population. By 1763, there were 170,000 slaves, about half the population.
Blacks were easier to enslave than whites or Indians. But they were still not easy to enslave. From the beginning, the imported black men and women resisted their enslavement. Ultimately their resistance was controlled, and slavery was established for 3 million blacks in the South. Still, under the most difficult conditions, under pain of mutilation and death, throughout their two hundred years of enslavement in North America, these Afro-Americans continued to rebel. Only occasionally was there an organized insurrection. More often they showed their refusal to submit by running away. Even more often, they engaged in sabotage, slowdowns, and subtle forms of resistance which asserted, if only to themselves and their brothers and sisters, their dignity as human beings.
The refusal began in Africa. One slave trader reported that Negroes were "so wilful and loth to leave their own country, that they have often leap'd out of the canoes, boat and ship into the sea, and kept under water til they were drowned."
When the very first black slaves were brought into Hispaniola in 1503, the Spanish governor of Hispaniola complained to the Spanish court that fugitive Negro slaves were teaching disobedience to the Indians. In the 1520s and 1530s, there were slave revolts in Hispaniola, Puerto Rico, Santa Marta, and what is now Panama. Shortly after those rebellions, the Spanish established a special police for chasing fugitive slaves.
A Virginia statute of 1669 referred to "the obstinacy of many of them," and in 1680 the Assembly took note of slave meetings "under the pretense of feasts and brawls" which they considered of "dangerous consequence." In 1687, in the colony's Northern Neck, a plot was discovered in which slaves planned to kill all the whites in the area and escape during a mass funeral.
Gerald Mullin, who studied slave resistance in eighteenth-century Virginia in his work Flight and Rebellion, reports:
The available sources on slavery in 18th-century Virginia—plantation and county records, the newspaper advertisements for runaways—describe rebellious slaves and few others. The slaves described were lazy and thieving; they feigned illnesses, destroyed crops, stores, tools, and sometimes attacked or killed overseers. They operated blackmarkets in stolen goods. Runaways were defined as various types, they were truants (who usually returned voluntarily), "outlaws"... and slaves who were actually fugitives: men who visited relatives, went to town to pass as free, or tried to escape slavery completely, either by boarding ships and leaving the colony, or banding together in cooperative efforts to establish villages or hide-outs in the frontier. The commitment of another type of rebellious slave was total; these men became killers, arsonists, and insurrectionists.
Slaves recently from Africa, still holding on to the heritage of their communal society, would run away in groups and try to establish villages of runaways out in the wilderness, on the frontier. Slaves born in America, on the other hand, were more likely to run off alone, and, with the skills they had learned on the plantation, try to pass as free men.
In the colonial papers of England, a 1729 report from the lieutenant governor of Virginia to the British Board of Trade tells how "a number of Negroes, about fifteen... formed a design to withdraw from their Master and to fix themselves in the fastnesses of the neighboring Mountains. They had found means to get into their possession some Arms and Ammunition, and they took along with them some Provisions, their Cloths, bedding and working Tools... Tho' this attempt has happily been defeated, it ought nevertheless to awaken us into some effectual measures..."
Slavery was immensely profitable to some masters. James Madison told a British visitor shortly after the American Revolution that he could make $257 on every Negro in a year, and spend only $12 or $13 on his keep. Another viewpoint was of slaveowner Landon Carter, writing about fifty years earlier, complaining that his slaves so neglected their work and were so uncooperative ("either cannot or will not work") that he began to wonder if keeping them was worthwhile.
Some historians have painted a picture—based on the infrequency of organized rebellions and the ability of the South to maintain slavery for two hundred years—of a slave population made submissive by their condition; with their African heritage destroyed, they were, as Stanley Elkins said, made into "Sambos," "a society of helpless dependents." Or as another historian, Ulrich Phillips, said, "by racial quality submissive." But looking at the totality of slave behavior, at the resistance of everyday life, from quiet noncooperation in work to running away, the picture becomes different.
In 1710, warning the Virginia Assembly, Governor Alexander Spotswood said:
...freedom wears a cap which can without a tongue, call together all those who long to shake off the fetters of slavery and as such an Insurrection would surely be attended with most dreadful consequences so I think we cannot be too early in providing against it, both by putting our selves in a better posture of defence and by making a law to prevent the consultations of those Negroes.
Indeed, considering the harshness of punishment for running away, that so many blacks did run away must be a sign of a powerful rebelliousness. All through the 1700s, the Virginia slave code read:
Whereas many times slaves run away and lie hid and lurking in swamps, woods, and other obscure places, killing hogs, and committing other injuries to the inhabitants... if the slave does not immediately return, anyone whatsoever may kill or destroy such slaves by such ways and means as he... shall think fit... If the slave is apprehended... it shall... be lawful for the county court, to order such punishment for the said slave, either by dismembering, or in any other way... as they in their discretion shall think fit, for the reclaiming any such incorrigible slave, and terrifying others from the like practices...
Mullin found newspaper advertisements between 1736 and 1801 for 1,138 men runaways, and 141 women. One consistent reason for running away was to find members of one's family—showing that despite the attempts of the slave system to destroy family ties by not allowing marriages and by separating families, slaves would face death and mutilation to get together.
In Maryland, where slaves were about one-third of the population in 1750, slavery had been written into law since the 1660s, and statutes for controlling rebellious slaves were passed. There were cases where slave women killed their masters, sometimes by poisoning them, sometimes by burning tobacco houses and homes. Punishment ranged from whipping and branding to execution, but the trouble continued. In 1742, seven slaves were put to death for murdering their master.
Fear of slave revolt seems to have been a permanent fact of plantation life. William Byrd, a wealthy Virginia slaveowner, wrote in 1736:
We have already at least 10,000 men of these descendants of Ham, fit to bear arms, and these numbers increase every day, as well by birth as by importation. And in case there should arise a man of desperate fortune, he might with more advantage than Cataline kindle a servile war... and tinge our rivers wide as they are with blood.
It was an intricate and powerful system of control that the slaveowners developed to maintain their labor supply and their way of life, a system both subtle and crude, involving every device that social orders employ for keeping power and wealth where it is. As Kenneth Stampp puts it:
A wise master did not take seriously the belief that Negroes were natural-born slaves. He knew better. He knew that Negroes freshly imported from Africa had to be broken into bondage; that each succeeding generation had to be carefully trained. This was no easy task, for the bondsman rarely submitted willingly. Moreover, he rarely submitted completely. In most cases there was no end to the need for control—at least not until old age reduced the slave to a condition of helplessness.
The system was psychological and physical at the same time. The slaves were taught discipline, were impressed again and again with the idea of their own inferiority to "know their place," to see blackness as a sign of subordination, to be awed by the power of the master, to merge their interest with the master's, destroying their own individual needs. To accomplish this there was the discipline of hard labor, the breakup of the slave family, the lulling effects of religion (which sometimes led to "great mischief," as one slaveholder reported), the creation of disunity among slaves by separating them into field slaves and more privileged house slaves, and finally the power of law and the immediate power of the overseer to invoke whipping, burning, mutilation, and death. Dismemberment was provided for in the Virginia Code of 1705. Maryland passed a law in 1723 providing for cutting off the ears of blacks who struck whites, and that for certain serious crimes, slaves should be hanged and the body quartered and exposed.
Still, rebellions took place—not many, but enough to create constant fear among white planters. The first large-scale revolt in the North American colonies took place in New York in 1712. In New York, slaves were 10 percent of the population, the highest proportion in the northern states, where economic conditions usually did not require large numbers of field slaves. About twenty- five blacks and two Indians set fire to a building, then killed nine whites who came on the scene. They were captured by soldiers, put on trial, and twenty-one were executed. The governor's report to England said: "Some were burnt, others were hanged, one broke on the wheel, and one hung alive in chains in the town..." One had been burned over a slow fire for eight to ten hours—all this to serve notice to other slaves.
A letter to London from South Carolina in 1720 reports:
I am now to acquaint you that very lately we have had a very wicked and barbarous plot of the designe of the negroes rising with a designe to destroy all the white people in the country and then to take Charles Town in full body but it pleased God it was discovered and many of them taken prisoners and some burnt and some hang'd and some banish'd.
Around this time there were a number of fires in Boston and New Haven, suspected to be the work of Negro slaves. As a result, one Negro was executed in Boston, and the Boston Council ruled that any slaves who on their own gathered in groups of two or more were to be punished by whipping.
At Stono, South Carolina, in 1739, about twenty slaves rebelled, killed two warehouse guards, stole guns and gunpowder, and headed south, killing people in their way, and burning buildings. They were joined by others, until there were perhaps eighty slaves in all and, according to one account of the time, "they called out Liberty, marched on with Colours displayed, and two Drums beating." The militia found and attacked them. In the ensuing battle perhaps fifty slaves and twenty-five whites were killed before the uprising was crushed.
Herbert Aptheker, who did detailed research on slave resistance in North America for his book American Negro Slave Revolts, found about 250 instances where a minimum of ten slaves joined in a revolt or conspiracy.
From time to time, whites were involved in the slave resistance. As early as 1663, indentured white servants and black slaves in Gloucester County, Virginia, formed a conspiracy to rebel and gain their freedom. The plot was betrayed, and ended with executions. Mullin reports that the newspaper notices of runaways in Virginia often warned "ill-disposed" whites about harboring fugitives. Sometimes slaves and free men ran off together, or cooperated in crimes together. Sometimes, black male slaves ran off and joined white women. From time to time, white ship captains and watermen dealt with runaways, perhaps making the slave a part of the crew.
In New York in 1741, there were ten thousand whites in the city and two thousand black slaves. It had been a hard winter and the poor—slave and free—had suffered greatly. When mysterious fires broke out, blacks and whites were accused of conspiring together. Mass hysteria developed against the accused. After a trial full of lurid accusations by informers, and forced confessions, two white men and two white women were executed, eighteen slaves were hanged, and thirteen slaves were burned alive.
Only one fear was greater than the fear of black rebellion in the new American colonies. That was the fear that discontented whites would join black slaves to overthrow the existing order. In the early years of slavery, especially, before racism as a way of thinking was firmly ingrained, while white indentured servants were often treated as badly as black slaves, there was a possibility of cooperation. As Edmund Morgan sees it:
There are hints that the two despised groups initially saw each other as sharing the same predicament. It was common, for example, for servants and slaves to run away together, steal hogs together, get drunk together. It was not uncommon for them to make love together. In Bacon's Rebellion, one of the last groups to surrender was a mixed band of eighty negroes and twenty English servants.
As Morgan says, masters, "initially at least, perceived slaves in much the same way they had always perceived servants... shiftless, irresponsible, unfaithful, ungrateful, dishonest..." And "if freemen with disappointed hopes should make common cause with slaves of desperate hope, the results might be worse than anything Bacon had done."
And so, measures were taken. About the same time that slave codes, involving discipline and punishment, were passed by the Virginia Assembly,
Virginia's ruling class, having proclaimed that all white men were superior to black, went on to offer their social (but white) inferiors a number of benefits previously denied them. In 1705 a law was passed requiring masters to provide white servants whose indenture time was up with ten bushels of corn, thirty shillings, and a gun, while women servants were to get fifteen bushels of corn and forty shillings. Also, the newly freed servants were to get fifty acres of land.
Morgan concludes: "Once the small planter felt less exploited by taxation and began to prosper a little, he became less turbulent, less dangerous, more respectable. He could begin to see his big neighbor not as an extortionist but as a powerful protector of their common interests."
We see now a complex web of historical threads to ensnare blacks for slavery in America: the desperation of starving settlers, the special helplessness of the displaced African, the powerful incentive of profit for slave trader and planter, the temptation of superior status for poor whites, the elaborate controls against escape and rebellion, the legal and social punishment of black and white collaboration.
The point is that the elements of this web are historical, not "natural." This does not mean that they are easily disentangled, dismantled. It means only that there is a possibility for something else, under historical conditions not yet realized. And one of these conditions would be the elimination of that class exploitation which has made poor whites desperate for small gifts of status, and has prevented that unity of black and white necessary for joint rebellion and reconstruction.
Around 1700, the Virginia House of Burgesses declared:
The Christian Servants in this country for the most part consists of the Worser Sort of the people of Europe. And since... such numbers of Irish and other Nations have been brought in of which a great many have been soldiers in the late warrs that according to our present Circumstances we can hardly governe them and if they were fitted with Armes and had the Opertunity of meeting together by Musters we have just reason to fears they may rise upon us.
It was a kind of class consciousness, a class fear. There were things happening in early Virginia, and in the other colonies, to warrant it.
West Nile virus
West Nile virus (WNV) is a mosquito-borne zoonotic arbovirus belonging to the genus Flavivirus in the family Flaviviridae. It is found in temperate and tropical regions of the world. It was first identified in the West Nile subregion in the East African nation of Uganda in 1937. Prior to the mid-1990s, WNV disease occurred only sporadically and was considered a minor risk for humans, until an outbreak in Algeria in 1994, with cases of WNV-caused encephalitis, and the first large outbreak in Romania in 1996, with a high number of neuroinvasive cases. WNV has now spread globally, with the first case in the Western Hemisphere being identified in New York City in 1999; over the next five years, the virus spread across the continental United States, north into Canada, and southward into the Caribbean islands and Latin America. WNV also spread to Europe, beyond the Mediterranean Basin, and a new strain of the virus was identified in Italy in 2012. WNV now spreads on an ongoing basis in Africa, Asia, Australia, the Middle East, Europe, Canada, and the United States. In 2012, the US experienced one of its worst epidemics, in which 286 people died; the state of Texas was hit especially hard.
The main mode of WNV transmission is via various species of mosquitoes, which are the prime vector, with birds being the most commonly infected animal and serving as the prime reservoir host—especially passerines (order Passeriformes), the largest order of birds. WNV has been found in various species of ticks, but current research suggests they are not important vectors of the virus. WNV also infects various mammal species, including humans, and has been identified in reptilian species, including alligators and crocodiles, and in amphibians. Many susceptible animal species, including humans, and some bird species do not develop viral levels sufficient to infect biting mosquitoes, and they are thus not considered major factors in WNV transmission.
Approximately 80% of West Nile virus infections in humans are subclinical, causing no symptoms. In the cases where symptoms do occur—termed West Nile fever in cases without neurological disease—the time from infection to the appearance of symptoms (incubation period) is typically between 2 and 15 days. Symptoms may include fever, headaches, fatigue, muscle pain or aches (myalgias), malaise, nausea, anorexia, vomiting, and rash. Less than 1% of cases are severe and result in neurological disease when the central nervous system is affected. People of advanced age, the very young, and those with immunosuppression (whether medically induced, as in those taking immunosuppressive drugs, or due to a pre-existing condition such as HIV infection) are most susceptible. The specific neurological diseases that may occur are West Nile encephalitis (inflammation of the brain), West Nile meningitis (inflammation of the meninges, the protective membranes that cover the brain and spinal cord), West Nile meningoencephalitis (inflammation of both the brain and the surrounding meninges), and West Nile poliomyelitis (inflammation of the spinal cord, resulting in a polio-like syndrome that may cause acute flaccid paralysis).
Currently, no vaccine against WNV infection is available. The best method to reduce the rate of WNV infection is mosquito control by municipalities, businesses, and individual citizens, reducing mosquito breeding populations in public, commercial, and private areas by various means, including eliminating standing pools of water where mosquitoes breed, such as old tires, buckets, and unused swimming pools. On an individual basis, personal protective measures offer the best defense against being bitten by an infected mosquito: using mosquito repellent, installing window screens, avoiding places where mosquitoes congregate, such as marshes and areas with heavy vegetation, and being especially vigilant from dusk to dawn, when mosquitoes are most active. If a person is bitten by an infected mosquito, familiarity with the symptoms of WNV on the part of laypersons, physicians, and allied health professionals affords the best chance of receiving timely medical treatment, which may reduce possible complications and allow appropriate palliative care.
Signs and symptoms
The incubation period for WNV—the amount of time from infection to symptom onset—is typically between 2 and 15 days. Headache can be a prominent symptom of WNV fever, meningitis, encephalitis, and meningoencephalitis, and it may or may not be present in the poliomyelitis-like syndrome. Thus, headache is not a useful indicator of neuroinvasive disease.
- West Nile fever (WNF), which occurs in 20 percent of cases, is a febrile syndrome that causes flu-like symptoms. Most characterizations of WNF describe it as a mild, acute syndrome lasting 3 to 6 days after symptom onset. Systematic follow-up studies of patients with WNF have not been done, so this information is largely anecdotal. Reported symptoms include high fever, headache, chills, excessive sweating, weakness, fatigue, swollen lymph nodes, drowsiness, and pain in the joints. Gastrointestinal symptoms that may occur include nausea, vomiting, loss of appetite, and diarrhea. Fewer than one-third of patients develop a rash.
- West Nile neuroinvasive disease (WNND), which occurs in less than 1 percent of cases, is when the virus infects the central nervous system resulting in meningitis, encephalitis, meningoencephalitis or a poliomyelitis-like syndrome. Many patients with WNND have normal neuroimaging studies, although abnormalities may be present in various cerebral areas including the basal ganglia, thalamus, cerebellum, and brainstem.
- West Nile virus encephalitis (WNE) is the most common neuroinvasive manifestation of WNND. WNE presents with similar symptoms to other viral encephalitis with fever, headaches, and altered mental status. A prominent finding in WNE is muscular weakness (30 to 50 percent of patients with encephalitis), often with lower motor neuron symptoms, flaccid paralysis, and hyporeflexia with no sensory abnormalities.
- West Nile meningitis (WNM) usually involves fever, headache, and stiff neck. Pleocytosis, an increase of white blood cells in cerebrospinal fluid, is also present. Changes in consciousness are not usually seen and are mild when present.
- West Nile meningoencephalitis is inflammation of both the brain (encephalitis) and meninges (meningitis).
- West Nile poliomyelitis (WNP), an acute flaccid paralysis syndrome associated with WNV infection, is less common than WNM or WNE. This syndrome is generally characterized by the acute onset of asymmetric limb weakness or paralysis in the absence of sensory loss. Pain sometimes precedes the paralysis. The paralysis can occur in the absence of fever, headache, or other common symptoms associated with WNV infection. Involvement of respiratory muscles, leading to acute respiratory failure, can sometimes occur.
- West Nile reversible paralysis. Like WNP, the weakness or paralysis is asymmetric. Reported cases have been noted to have an initial preservation of deep tendon reflexes, which is not expected for pure anterior horn involvement. Disconnection of upper motor neuron influences on the anterior horn cells, possibly by myelitis or glutamate excitotoxicity, has been suggested as a mechanism. The prognosis for recovery is excellent.
- Nonneurologic complications of WNV infection that may rarely occur include fulminant hepatitis, pancreatitis, myocarditis, rhabdomyolysis, orchitis, nephritis, optic neuritis, cardiac dysrhythmias, and hemorrhagic fever with coagulopathy. Chorioretinitis may also be more common than previously thought.
- Cutaneous manifestations, specifically rashes, are not uncommon in WNV-infected patients; however, detailed descriptions in case reports are scarce and few clinical images are widely available. Punctate erythematous, macular, and papular eruptions, most pronounced on the extremities, have been observed in WNV cases, and in some cases histopathologic findings have shown a sparse superficial perivascular lymphocytic infiltrate, a pattern commonly seen in viral exanthems. A literature review supports the view that this punctate rash is a common cutaneous presentation of WNV infection.
Virology
Virus classification: Group IV ((+)ssRNA); species: West Nile virus.
WNV is one of the Japanese encephalitis antigenic serocomplex of viruses. Image reconstructions and cryoelectron microscopy reveal a 45–50 nm virion covered with a relatively smooth protein surface. This structure is similar to the dengue fever virus; both belong to the genus Flavivirus within the family Flaviviridae. The genetic material of WNV is a positive-sense, single strand of RNA, which is between 11,000 and 12,000 nucleotides long; these genes encode seven nonstructural proteins and three structural proteins. The RNA strand is held within a nucleocapsid formed from 12-kDa protein blocks; the capsid is contained within a host-derived membrane altered by two viral glycoproteins.
Studies of phylogenetic lineages have determined that WNV emerged as a distinct virus around 1000 years ago. This initial virus developed into two distinct lineages. Lineage 1, with its multiple profiles, is the source of epidemic transmission in Africa and throughout the world, while lineage 2 was considered an African zoonosis. However, in 2008, lineage 2, previously seen only in horses in sub-Saharan Africa and Madagascar, began to appear in horses in Europe, where the first known outbreak affected 18 animals in Hungary. Lineage 1 West Nile virus was detected in South Africa in 2010 in a mare and her aborted fetus; previously, only lineage 2 West Nile virus had been detected in horses and humans in South Africa. A 2007 fatal case in a killer whale in Texas broadened the known host range of West Nile virus to include cetaceans.
The United States virus was very closely related to a lineage 1 strain found in Israel in 1998. Since the first North American cases in 1999, the virus has been reported throughout the United States, Canada, Mexico, the Caribbean, and Central America. There have been human cases and equine cases, and many birds are infected. The Barbary macaque, Macaca sylvanus, was the first nonhuman primate to contract WNV. Both the United States and Israeli strains are marked by high mortality rates in infected avian populations; the presence of dead birds—especially Corvidae—can be an early indicator of the arrival of the virus.
Transmission
West Nile virus (WNV) is transmitted through female mosquitoes, which are the prime vectors of the virus. Only females feed on blood, and different species take blood meals from different types of vertebrate hosts. The important mosquito vectors vary according to geographical area; in the United States, Culex pipiens (Eastern United States, and urban and residential areas of the United States north of 36–39°N), Culex tarsalis (Midwest and West), and Culex quinquefasciatus (Southeast) are the main vector species.
The mosquito species that are most frequently infected with WNV feed primarily on birds. Mosquitoes show further selectivity, exhibiting preference for different species of birds. In the United States, WNV mosquito vectors feed on members of the Corvidae and thrush families more often than would be expected from their abundance. Among the preferred species within these families are the American crow, a corvid, and the American robin (Turdus migratorius), a thrush.
Some species of birds develop sufficient viral levels (greater than roughly 10^4.2 PFU/ml) after being infected to transmit the infection to biting mosquitoes, which in turn go on to infect other birds. In birds that die from WNV, death usually occurs after 4 to 6 days. In mammals and several species of birds, the virus does not multiply as readily (i.e., does not develop high viremia during infection), and mosquitoes biting these infected hosts are not believed to ingest sufficient virus to become infected, making them so-called dead-end hosts. As a result of the differential infectiousness of hosts, the feeding patterns of mosquitoes play an important role in WNV transmission, and they are partly genetically controlled, even within a species.
Direct human-to-human transmission initially was believed to be caused only by occupational exposure, such as in a laboratory setting, or conjunctival exposure to infected blood. The US outbreak identified additional transmission methods through blood transfusion, organ transplant, intrauterine exposure, and breastfeeding. Since 2003, blood banks in the United States routinely screen for the virus among their donors. As a precautionary measure, the UK's National Blood Service initially ran a test for this disease in donors who donate within 28 days of a visit to the United States, Canada, or the northeastern provinces of Italy, and the Scottish National Blood Transfusion Service asks prospective donors to wait 28 days after returning from North America or the northeastern provinces of Italy before donating.
Recently, the potential for mosquito saliva to affect the course of WNV disease was demonstrated. Mosquitoes inoculate their saliva into the skin while obtaining blood. Mosquito saliva is a pharmacological cocktail of secreted molecules, principally proteins, that can affect vascular constriction, blood coagulation, platelet aggregation, inflammation, and immunity. It clearly alters the immune response in a manner that may be advantageous to a virus. Studies have shown it can specifically modulate the immune response during early virus infection, and mosquito feeding can exacerbate WNV infection, leading to higher viremia and more severe forms of disease.
Vertical transmission, the transmission of a viral or bacterial disease from the female of the species to her offspring, has been observed in various West Nile virus studies, among different species of mosquitoes both in the laboratory and in nature. Mosquito progeny infected vertically in autumn may potentially serve as a mechanism for WNV to overwinter and initiate enzootic horizontal transmission the following spring, although this likely plays little role in transmission in the summer and fall.
Risk factors independently associated with developing a clinical infection with WNV include a suppressed immune system and a patient history of organ transplantation. For neuroinvasive disease, additional risk factors include older age (over 50), male sex, hypertension, and diabetes mellitus.
A genetic factor also appears to increase susceptibility to West Nile disease. A mutation of the gene CCR5 gives some protection against HIV but leads to more serious complications of WNV infection. Carriers of two mutated copies of CCR5 made up 4.0 to 4.5% of a sample of West Nile disease sufferers, while the incidence of the gene in the general population is only 1.0%.
Diagnosis
Preliminary diagnosis is often based on the patient's clinical symptoms, places and dates of travel (if the patient is from a nonendemic country or area), activities, and epidemiologic history of the location where infection occurred. A recent history of mosquito bites and an acute febrile illness associated with neurologic signs and symptoms should raise clinical suspicion of WNV.
Diagnosis of West Nile virus infections is generally accomplished by serologic testing of blood serum or cerebrospinal fluid (CSF), which is obtained via a lumbar puncture. Typical CSF findings in WNV infection include lymphocytic pleocytosis, an elevated protein level, glucose and lactic acid levels within reference ranges, and no erythrocytes.
Definitive diagnosis of WNV is obtained through detection of virus-specific antibody IgM and neutralizing antibodies. Cases of West Nile virus meningitis and encephalitis that have been serologically confirmed produce similar degrees of CSF pleocytosis and are often associated with substantial CSF neutrophilia. Specimens collected within eight days following onset of illness may not test positive for West Nile IgM, and testing should be repeated. A positive test for West Nile IgG in the absence of a positive West Nile IgM is indicative of a previous flavivirus infection and is not by itself evidence of an acute West Nile virus infection.
In cases of suspected West Nile virus infection, sera should be collected during both the acute and convalescent phases of the illness. Convalescent specimens should be collected 2–3 weeks after acute specimens.
It is common in serologic testing for cross-reactions to occur among flaviviruses such as dengue virus (DENV) and tick-borne encephalitis virus; this necessitates caution when evaluating serologic results of flaviviral infections.
Four FDA-cleared WNV IgM ELISA kits are commercially available from different manufacturers in the U.S.; each is indicated for use on serum to aid in the presumptive laboratory diagnosis of WNV infection in patients with clinical symptoms of meningitis or encephalitis. Positive results obtained with these kits should be confirmed by additional testing at a state health department laboratory or CDC.
In fatal cases, nucleic acid amplification, histopathology with immunohistochemistry, and virus culture of autopsy tissues can also be useful. Only a few state laboratories or other specialized laboratories, including those at CDC, are capable of doing this specialized testing.
A number of other diseases may present with symptoms similar to those caused by clinical West Nile virus infection. Those causing neuroinvasive disease symptoms include enterovirus infection and bacterial meningitis. Accounting for differential diagnoses is a crucial step in the definitive diagnosis of WNV infection. Consideration of a differential diagnosis is required when a patient presents with unexplained febrile illness, extreme headache, encephalitis, or meningitis. Diagnostic and serologic laboratory testing, using polymerase chain reaction (PCR) testing and viral culture of CSF to identify the specific pathogen causing the symptoms, is the only currently available means of differentiating between causes of encephalitis and meningitis.
Prevention
Personal protective measures can be taken to greatly reduce the risk of being bitten by an infected mosquito:
- Using insect repellent on exposed skin to repel mosquitoes. EPA-registered repellents include products containing DEET (N,N-diethylmetatoluamide) and picaridin (KBR 3023). DEET concentrations of 30% to 50% are effective for several hours. Picaridin, available at 7% and 15% concentrations, needs more frequent application. DEET formulations as high as 50% are recommended for both adults and children over two months of age. Protect infants less than two months of age by using a carrier draped with mosquito netting with an elastic edge for a tight fit.
- When using sunscreen, apply sunscreen first and then repellent. Repellent should be washed off at the end of the day before going to bed.
- Wear long-sleeve shirts, which should be tucked in, long pants, socks, and hats to cover exposed skin. Insect repellents should be applied over top of protective clothing for greater protection. Do not apply insect repellents underneath clothing.
- The application of permethrin-containing (e.g., Permanone) or other insect repellents to clothing, shoes, tents, mosquito nets, and other gear for greater protection. Permethrin is not labeled for use directly on skin. Most repellent is generally removed from clothing and gear by a single washing, but permethrin-treated clothing is effective for up to five washings.
- Be aware that most mosquitoes that transmit disease are most active during twilight periods (dawn and dusk or in the evening). A notable exception is the Asian tiger mosquito, which is a daytime feeder and is more apt to be found in, or on the periphery of, shaded areas with heavy vegetation. They are now widespread in the United States, and in Florida they have been found in all 67 counties.
- Staying in air-conditioned or well-screened housing, and/or sleeping under an insecticide-treated bed net. Bed nets should be tucked under mattresses and can be sprayed with a repellent if not already treated with an insecticide.
Monitoring and control
West Nile virus can be sampled from the environment by the pooling of trapped mosquitoes via ovitraps, carbon dioxide-baited light traps, and gravid traps, testing blood samples drawn from wild birds, dogs, and sentinel monkeys, as well as testing brains of dead birds found by various animal control agencies and the public.
Testing of the mosquito samples requires the use of reverse-transcriptase PCR (RT-PCR) to directly amplify and show the presence of virus in the submitted samples. When using the blood sera of wild birds and sentinel chickens, samples must be tested for the presence of WNV antibodies by use of immunohistochemistry (IHC) or enzyme-linked immunosorbent assay (ELISA).
Dead birds, after necropsy, or their oral swab samples collected on specific RNA-preserving filter paper card, can have their virus presence tested by either RT-PCR or IHC, where virus shows up as brown-stained tissue because of a substrate-enzyme reaction.
West Nile control is achieved through mosquito control, by elimination of mosquito breeding sites such as abandoned pools, applying larvicide to active breeding areas, and targeting the adult population via lethal ovitraps and aerial spraying of pesticides.
Environmentalists have condemned attempts to control the transmitting mosquitoes by spraying pesticide, saying the detrimental health effects of spraying outweigh the relatively few lives that may be saved, and more environmentally friendly ways of controlling mosquitoes are available. They also question the effectiveness of insecticide spraying, as they believe mosquitoes that are resting or flying above the level of spraying will not be killed; the most common vector in the northeastern United States, Culex pipiens, is a canopy feeder.
Eggs of permanent-water mosquitoes can hatch, and the larvae survive, in only a few ounces of water, less than half the amount that may collect in a discarded coffee cup. Floodwater species lay their eggs on wet soil or other moist surfaces. Hatch time is variable for both types; under favorable circumstances, i.e., warm weather, the eggs of some species may hatch in as few as 1–3 days after being laid.
Used tires often hold stagnant water and are a breeding ground for many species of mosquitoes. Some species such as the Asian tiger mosquito prefer manmade containers, such as tires, in which to lay their eggs. The rapid spread of this aggressive daytime feeding species beyond their native range has been attributed to the used tire trade.
Treatment
No specific treatment is available for WNV infection. In severe cases treatment consists of supportive care that often involves hospitalization, intravenous fluids, respiratory support, and prevention of secondary infections.
Prognosis
While the general prognosis is favorable, current studies indicate that West Nile fever can often be more severe than previously recognized, with studies of various recent outbreaks indicating that it may take as long as 60–90 days to recover. People with milder WNF are just as likely as those with more severe neuroinvasive disease to experience adverse outcomes, including multiple long-term (more than one year) somatic complaints such as tremor and dysfunction in motor skills and executive functions. Recovery is marked by a long convalescence with fatigue. One study found that neuroinvasive WNV infection was associated with an increased risk for subsequent kidney disease.
History
WNV was first isolated from a feverish 37-year-old woman at Omogo in the West Nile District of Uganda in 1937, during research on yellow fever virus. A series of serosurveys in 1939 in central Africa found anti-WNV positive results ranging from 1.4% (Congo) to 46.4% (White Nile region, Sudan). The virus was subsequently identified in Egypt (1942) and India (1953); a 1950 serosurvey in Egypt found that 90% of those over 40 years of age had WNV antibodies. The ecology was characterized in 1953 with studies in Egypt and Israel. The virus became recognized as a cause of severe human meningoencephalitis in elderly patients during an outbreak in Israel in 1957. The disease was first noted in horses in Egypt and France in the early 1960s and found to be widespread in southern Europe, southwest Asia, and Australia.
The first appearance of WNV in the Western Hemisphere was in 1999, with encephalitis reported in humans, dogs, cats, and horses, and the subsequent spread in the United States may be an important milestone in the evolving history of this virus. The American outbreak began in College Point, Queens, in New York City and later spread to the neighboring states of New Jersey and Connecticut. The virus is believed to have arrived in an infected bird or mosquito, although there is no clear evidence. West Nile virus is now endemic in Africa, Europe, the Middle East, west and central Asia, Oceania (subtype Kunjin), and most recently, North America, and is spreading into Central and South America.
Recent outbreaks of West Nile virus encephalitis in humans have occurred in Algeria (1994), Romania (1996 to 1997), the Czech Republic (1997), Congo (1998), Russia (1999), the United States (1999 to 2009), Canada (1999–2007), Israel (2000) and Greece (2010).
Outdoor workers (including biological fieldworkers, construction workers, farmers, landscapers, and painters), healthcare personnel, and laboratory personnel who perform necropsies on animals are at risk of contracting WNV.
A vaccine for horses (ATCvet code: QI05) based on killed viruses exists; some zoos have given this vaccine to their birds, although its effectiveness is unknown. Dogs and cats show few if any signs of infection. There have been no known cases of direct canine-human or feline-human transmission; although these pets can become infected, it is unlikely they are, in turn, capable of infecting native mosquitoes and thus continuing the disease cycle. AMD3100, which had been proposed as an antiretroviral drug for HIV, has shown promise against West Nile encephalitis. Morpholino antisense oligos conjugated to cell penetrating peptides have been shown to partially protect mice from WNV disease. There have also been attempts to treat infections using ribavirin, intravenous immunoglobulin, or alpha interferon. GenoMed, a U.S. biotech company, has found that blocking angiotensin II can treat the "cytokine storm" of West Nile virus encephalitis as well as other viruses.
References
- Nash D, Mostashari F, Fine A, et al. (June 2001). "The outbreak of West Nile virus infection in the New York City area in 1999". N. Engl. J. Med. 344 (24): 1807–14. doi:10.1056/NEJM200106143442401. PMID 11407341.
- Barzon L, Pacenti M, Franchin E, Lavezzo E, Martello T, Squarzon L, Toppo S, Fiorin F, Marchiori G, Russo F, Cattai M, Cusinato R, Palu G (2012). "New endemic West Nile virus lineage 1a in northern Italy, July 2012". Euro Surveillance : Bulletin Européen Sur Les Maladies Transmissibles = European Communicable Disease Bulletin 17 (31). PMID 22874456. Retrieved 2014-12-08.
- Chen, Chen C.; Jenkins, Emily; Epp, Tasha; Waldner, Cheryl; Curry, Philip S.; Soos, Catherine (2013-07-22). "Climate Change and West Nile Virus in a Highly Endemic Region of North America". International Journal of Environmental Research and Public Health 10 (7): 3052–3071. doi:10.3390/ijerph10073052. PMC 3734476. PMID 23880729.
- Murray KO, Ruktanonchai D, Hesalroad D, Fonken E, Nolan MS (November 2013). "West Nile virus, Texas, USA, 2012". Emerging Infectious Diseases 19 (11): 1836–8. doi:10.3201/eid1911.130768. PMC 3837649. PMID 24210089. Retrieved 2014-12-08.
- Fox, M. (May 13, 2013). "2012 was deadliest year for West Nile in US, CDC says". NBC News. Retrieved May 13, 2013.
- Steinman, A.; Banet-Noach, C.; Tal, S.; Levi, O.; Simanov, L.; Perk, S.; Malkinson, M.; Shpigel, N. (2003). "West Nile Virus Infection in Crocodiles". Emerging Infectious Diseases 9 (7): 887–889. doi:10.3201/eid0907.020816. PMC 3023443. PMID 12899140.
- Klenk, K.; Snow, J.; Morgan, K.; Bowen, R.; Stephens, M.; Foster, F.; Gordy, P.; Beckett, S.; Komar, N.; Gubler, D.; Bunning, M. (2004). "Alligators as West Nile Virus Amplifiers". Emerging Infectious Diseases 10 (12): 2150–2155. doi:10.3201/eid1012.040264. PMC 3323409. PMID 15663852.
- "West Nile Virus: What You Need to Know CDC Fact Sheet". www.CDC.gov. Retrieved 2012-04-09.
- Olejnik E (1952). "Infectious adenitis transmitted by Culex molestus". Bull Res Counc Isr 2: 210–1.
- Davis LE, DeBiasi R, Goade DE, et al. (Sep 2006). "West Nile virus neuroinvasive disease". Ann Neurol 60 (3): 286–300. doi:10.1002/ana.20959. PMID 16983682.
- Flores Anticona EM, Zainah H, Ouellette DR, Johnson LE (2012). "Two case reports of neuroinvasive west nile virus infection in the critical care unit". Case Rep Infect Dis 2012: 839458. doi:10.1155/2012/839458. PMC 3433121. PMID 22966470.
- Carson PJ, Konewko P, Wold KS, et al. (2006). "Long-term clinical and neuropsychological outcomes of West Nile virus infection". Clin. Infect. Dis. 43 (6): 723–30. doi:10.1086/506939. PMID 16912946.
- Mojumder, D. K., Agosto, M., Wilms, H.; et al. (March 2014). "Is initial preservation of deep tendon reflexes in West Nile Virus paralysis a good prognostic sign?". Neurology Asia 19 (1): 93–97. PMC 4229851. PMID 25400704.
- Asnis DS, Conetta R, Teixeira AA, Waldman G, Sampson BA (March 2000). "The West Nile Virus outbreak of 1999 in New York: the Flushing Hospital experience". Clin. Infect. Dis. 30 (3): 413–8. doi:10.1086/313737. PMID 10722421.
- Montgomery SP, Chow CC, Smith SW, Marfin AA, O'Leary DR, Campbell GL (2005). "Rhabdomyolysis in patients with west nile encephalitis and meningitis". Vector-Borne and Zoonotic Diseases 5 (3): 252–7. doi:10.1089/vbz.2005.5.252. PMID 16187894.
- Smith RD, Konoplev S, DeCourten-Myers G, Brown T (February 2004). "West Nile virus encephalitis with myositis and orchitis". Hum. Pathol. 35 (2): 254–8. doi:10.1016/j.humpath.2003.09.007. PMID 14991545.
- Anninger WV, Lomeo MD, Dingle J, Epstein AD, Lubow M (2003). "West Nile virus-associated optic neuritis and chorioretinitis". Am. J. Ophthalmol. 136 (6): 1183–5. doi:10.1016/S0002-9394(03)00738-4. PMID 14644244.
- Paddock CD, Nicholson WL, Bhatnagar J, et al. (June 2006). "Fatal hemorrhagic fever caused by West Nile virus in the United States". Clin. Infect. Dis. 42 (11): 1527–35. doi:10.1086/503841. PMID 16652309.
- Shaikh S, Trese MT (2004). "West Nile virus chorioretinitis". Br J Ophthalmol 88 (12): 1599–60. doi:10.1136/bjo.2004.049460. PMC 1772450. PMID 15548822.
- Anderson RC, Horn KB, Hoang MP, Gottlieb E, Bennin B (November 2004). "Punctate exanthem of West Nile Virus infection: report of 3 cases". J. Am. Acad. Dermatol. 51 (5): 820–3. doi:10.1016/j.jaad.2004.05.031. PMID 15523368.
- Lanciotti RS, Ebel GD, Deubel V, et al. (June 2002). "Complete genome sequences and phylogenetic analysis of West Nile virus strains isolated from the United States, Europe, and the Middle East". Virology 298 (1): 96–105. doi:10.1006/viro.2002.1449. PMID 12093177.
- Galli M, Bernini F, Zehender G (July 2004). "Alexander the Great and West Nile virus encephalitis". Emerging Infect. Dis. 10 (7): 1330–2; author reply 1332–3. doi:10.3201/eid1007.040396. PMID 15338540.
- West, Christy (2010-02-08). "Different West Nile Virus Genetic Lineage Evolving?". The Horse. Retrieved 2010-02-10. From statements by Orsolya Kutasi, DVM, of the Szent Istvan University, Hungary at the 2009 American Association of Equine Practitioners Convention, December 5–9, 2009.
- Venter M, Human S, van Niekerk S, Williams J, van Eeden C, Freeman F (August 2011). "Fatal neurologic disease and abortion in mare infected with lineage 1 West Nile virus, South Africa". Emerging Infect. Dis. 17 (8): 1534–6. doi:10.3201/eid1708.101794. PMC 3381566. PMID 21801644.
- St Leger J, Wu G, Anderson M, Dalton L, Nilson E, Wang D (2011). "West Nile virus infection in killer whale, Texas, USA, 2007". Emerging Infect. Dis. 17 (8): 1531–3. doi:10.3201/eid1708.101979. PMC 3381582. PMID 21801643.
- Hogan, C. Michael (2008). Barbary Macaque: Macaca sylvanus, GlobalTwitcher.com
- Hayes EB, Komar N, Nasci RS, Montgomery SP, O'Leary DR, Campbell GL (2005). "Epidemiology and transmission dynamics of West Nile virus disease". Emerging Infect. Dis. 11 (8): 1167–73. doi:10.3201/eid1108.050289a. PMC 3320478. PMID 16102302.
- Kilpatrick, A.M. (2011). "Globalization, land use, and the invasion of West Nile virus". Science 334 (6054): 323–327. doi:10.1126/science.1201010. PMC 3346291. PMID 22021850.
- Kilpatrick, AM, P Daszak, MJ Jones, PP Marra, LD Kramer (2006). "Host heterogeneity dominates West Nile virus transmission". Proceedings of the Royal Society B-Biological Sciences 273 (1599): 2327–2333. doi:10.1098/rspb.2006.3575. PMC 1636093. PMID 16928635.
- Kilpatrick, AM, SL LaDeau, PP Marra (2007). "Ecology of West Nile virus transmission and its impact on birds in the western hemisphere". Auk 124 (4): 1121–1136. doi:10.1642/0004-8038(2007)124[1121:eownvt]2.0.co;2.
- Komar, N, S Langevin, S Hinten, N Nemeth, E Edwards, D Hettler, B Davis, R Bowen, M Bunning (2003). "Experimental infection of North American birds with the New York 1999 strain of West Nile virus". Emerging Infectious Diseases 9 (3): 311–322. doi:10.3201/eid0903.020628. PMC 2958552. PMID 12643825.
- Centers for Disease Control and Prevention (CDC) (2002). "Laboratory-acquired West Nile virus infections—United States, 2002". MMWR Morb. Mortal. Wkly. Rep. 51 (50): 1133–5. PMID 12537288.
- Fonseca K, Prince GD, Bratvold J, et al. (2005). "West Nile virus infection and conjunctive exposure". Emerging Infect. Dis. 11 (10): 1648–9. doi:10.3201/eid1110.040212. PMC 3366727. PMID 16355512.
- Centers for Disease Control and Prevention (CDC) (2002). "Investigation of blood transfusion recipients with West Nile virus infections". MMWR Morb. Mortal. Wkly. Rep. 51 (36): 823. PMID 12269472.
- Centers for Disease Control and Prevention (CDC) (2002). "West Nile virus infection in organ donor and transplant recipients—Georgia and Florida, 2002". MMWR Morb. Mortal. Wkly. Rep. 51 (35): 790. PMID 12227442.
- Centers for Disease Control and Prevention (CDC) (2002). "Intrauterine West Nile virus infection—New York, 2002". MMWR Morb. Mortal. Wkly. Rep. 51 (50): 1135–6. PMID 12537289.
- Centers for Disease Control and Prevention (CDC) (2002). "Possible West Nile virus transmission to an infant through breast-feeding—Michigan, 2002". MMWR Morb. Mortal. Wkly. Rep. 51 (39): 877–8. PMID 12375687.
- Centers for Disease Control and Prevention (CDC) (2003). "Detection of West Nile virus in blood donations—United States, 2003". MMWR Morb. Mortal. Wkly. Rep. 52 (32): 769–72. PMID 12917583.
- West Nile Virus. Scottish National Blood Transfusion Service.
- Schneider BS, McGee CE, Jordan JM, Stevenson HL, Soong L, Higgs S (2007). Baylis, Matthew, ed. "Prior exposure to uninfected mosquitoes enhances mortality in naturally-transmitted West Nile virus infection". PLoS ONE 2 (11): e1171. doi:10.1371/journal.pone.0001171. PMC 2048662. PMID 18000543.
- Styer LM, Bernard KA, Kramer LD (2006). "Enhanced early West Nile virus infection in young chickens infected by mosquito bite: effect of viral dose". Am. J. Trop. Med. Hyg. 75 (2): 337–45. PMID 16896145.
- Schneider BS, Soong L, Girard YA, Campbell G, Mason P, Higgs S (2006). "Potentiation of West Nile encephalitis by mosquito feeding". Viral Immunol. 19 (1): 74–82. doi:10.1089/vim.2006.19.74. PMID 16553552.
- Wasserman HA, Singh S, Champagne DE (2004). "Saliva of the Yellow Fever mosquito, Aedes aegypti, modulates murine lymphocyte function". Parasite Immunol. 26 (6–7): 295–306. doi:10.1111/j.0141-9838.2004.00712.x. PMID 15541033.
- Limesand KH, Higgs S, Pearson LD, Beaty BJ (2003). "Effect of mosquito salivary gland treatment on vesicular stomatitis New Jersey virus replication and interferon alpha/beta expression in vitro". J. Med. Entomol. 40 (2): 199–205. doi:10.1603/0022-2585-40.2.199. PMID 12693849.
- Wanasen N, Nussenzveig RH, Champagne DE, Soong L, Higgs S (2004). "Differential modulation of murine host immune response by salivary gland extracts from the mosquitoes Aedes aegypti and Culex quinquefasciatus". Med. Vet. Entomol. 18 (2): 191–9. doi:10.1111/j.1365-2915.2004.00498.x. PMID 15189245.
- Zeidner NS, Higgs S, Happ CM, Beaty BJ, Miller BR (1999). "Mosquito feeding modulates Th1 and Th2 cytokines in flavivirus susceptible mice: an effect mimicked by injection of sialokinins, but not demonstrated in flavivirus resistant mice". Parasite Immunol. 21 (1): 35–44. doi:10.1046/j.1365-3024.1999.00199.x. PMID 10081770.
- Schneider BS, Soong L, Zeidner NS, Higgs S (2004). "Aedes aegypti salivary gland extracts modulate anti-viral and TH1/TH2 cytokine responses to sindbis virus infection". Viral Immunol. 17 (4): 565–73. doi:10.1089/vim.2004.17.565. PMID 15671753.
- Bugbee, LM; Forte LR (September 2004). "The discovery of West Nile virus in overwintering Culex pipiens (Diptera: Culicidae) mosquitoes in Lehigh County, Pennsylvania". Journal of the American Mosquito Control Association 20 (3): 326–7. PMID 15532939.
- Goddard LB, Roth AE, Reisen WK, Scott TW (November 2003). "Vertical transmission of West Nile Virus by three California Culex (Diptera: Culicidae) species". J. Med. Entomol. 40 (6): 743–6. doi:10.1603/0022-2585-40.6.743. PMID 14765647.
- Kumar D, Drebot MA, Wong SJ, et al. (2004). "A seroprevalence study of West Nile virus infection in solid organ transplant recipients". Am. J. Transplant. 4 (11): 1883–8. doi:10.1111/j.1600-6143.2004.00592.x. PMID 15476490.
- Jean CM, Honarmand S, Louie JK, Glaser CA (December 2007). "Risk factors for West Nile virus neuroinvasive disease, California, 2005". Emerging Infect. Dis. 13 (12): 1918–20. doi:10.3201/eid1312.061265. PMC 2876738. PMID 18258047.
- Kumar D, Drebot MA, Wong SJ, et al. (2004). "A seroprevalence study of west nile virus infection in solid organ transplant recipients". Am. J. Transplant. 4 (11): 1883–8. doi:10.1111/j.1600-6143.2004.00592.x. PMID 15476490.
- Glass, WG; Lim JK; Cholera R; Pletnev AG; Gao JL; Murphy PM (October 17, 2005). "Chemokine receptor CCR5 promotes leukocyte trafficking to the brain and survival in West Nile virus infection". Journal of Experimental Medicine 202 (8): 1087–98. doi:10.1084/jem.20042530. PMC 2213214. PMID 16230476.
- Glass, WG; McDermott DH; Lim JK; Lekhong S; Yu SF; Frank WA; Pape J; Cheshier RC; Murphy PM (January 23, 2006). "CCR5 deficiency increases risk of symptomatic West Nile virus infection". Journal of Experimental Medicine 203 (1): 35–40. doi:10.1084/jem.20051970. PMC 2118086. PMID 16418398.
- Tyler KL, Pape J, Goody RJ, Corkill M, Kleinschmidt-DeMasters BK (February 2006). "CSF findings in 250 patients with serologically confirmed West Nile virus meningitis and encephalitis". Neurology 66 (3): 361–5. doi:10.1212/01.wnl.0000195890.70898.1f. PMID 16382032.
- "2012 DOHMH Advisory #8: West Nile Virus" (PDF). New York City Department of Health and Mental Hygiene. June 28, 2012.
- Papa A, Karabaxoglou D, Kansouzidou A (October 2011). "Acute West Nile virus neuroinvasive infections: cross-reactivity with dengue virus and tick-borne encephalitis virus". J. Med. Virol. 83 (10): 1861–5. doi:10.1002/jmv.22180. PMID 21837806.
- Rios L, Maruniak JE (October 2011). "Asian Tiger Mosquito, Aedes albopictus (Skuse) (Insecta: Diptera: Culicidae)". Department of Entomology and Nematology, University of Florida. EENY-319.
- Jozan, M; Evans R; McLean R; Hall R; Tangredi B; Reed L; Scott J (Fall 2003). "Detection of West Nile virus infection in birds in the United States by blocking ELISA and immunohistochemistry". Vector-Borne and Zoonotic Diseases 3 (3): 99–110. doi:10.1089/153036603768395799. PMID 14511579.
- Hall, RA; Broom AK; Hartnett AC; Howard MJ; Mackenzie JS (February 1995). "Immunodominant epitopes on the NS1 protein of MVE and KUN viruses serve as targets for a blocking ELISA to detect virus-specific antibodies in sentinel animal serum". Journal of Virological Methods 51 (2–3): 201–10. doi:10.1016/0166-0934(94)00105-P. PMID 7738140.
- California Department of Public Health Tutorial for Local Agencies to Safely Collect Dead Birds Oral Swab Samples on RNAse Cards for West Nile Virus Testing
- RNA virus preserving filter paper card. fortiusbio.com
- "Mosquito Monitoring and Management". National Park Service.
- Oklahoma State University: Mosquitoes and West Nile virus
- Benedict MQ, Levine RS, Hawley WA, Lounibos LP (2007). "Spread of the tiger: global risk of invasion by the mosquito Aedes albopictus". Vector-Borne and Zoonotic Diseases 7 (1): 76–85. doi:10.1089/vbz.2006.0562. PMC 2212601. PMID 17417960.
- Watson JT, Pertel PE, Jones RC, et al. (September 2004). "Clinical characteristics and functional outcomes of West Nile Fever". Ann. Intern. Med. 141 (5): 360–5. doi:10.7326/0003-4819-141-5-200409070-00010. PMID 15353427.
- Klee AL, Maidin B, Edwin B, et al. (Aug 2004). "Long-term prognosis for clinical West Nile virus infection". Emerg Infect Dis 10 (8): 1405–11. doi:10.3201/eid1008.030879. PMC 3320418. PMID 15496241.
- Nolan MS, Podoll AS, Hause AM, Akers KM, Finkel KW, Murray KO (2012). Wang, Tian, ed. "Prevalence of chronic kidney disease and progression of disease over time among patients enrolled in the Houston West Nile virus cohort". PLoS ONE 7 (7): e40374. doi:10.1371/journal.pone.0040374. PMC 3391259. PMID 22792293.
- "New Study Reveals: West Nile virus is far more menacing & harms far more people". The Guardian Express. The Guardian Express. 26 August 2012. Retrieved 26 August 2012.
- Smithburn KC, Hughes TP, Burke AW, Paul JH (June 1940). "A Neurotropic Virus Isolated from the Blood of a Native of Uganda". Am. J. Trop. Med. 20 (1): 471–92.
- Work TH, Hurlbut HS, Taylor RM (1953). "Isolation of West Nile virus from hooded crow and rock pigeon in the Nile delta". Proc. Soc. Exp. Biol. Med. 84 (3): 719–22. doi:10.3181/00379727-84-20764. PMID 13134268.
- Bernkopf H, Levine S, Nerson R (1953). "Isolation of West Nile virus in Israel". J. Infect. Dis. 93 (3): 207–18. doi:10.1093/infdis/93.3.207. PMID 13109233.
- Calisher CH (2000). "West Nile virus in the New World: appearance, persistence, and adaptation to a new econiche—an opportunity taken". Viral Immunol. 13 (4): 411–4. doi:10.1089/vim.2000.13.411. PMID 11192287.
- "West Nile virus". NIOSH. August 27, 2012.
- "Vertebrate Ecology". West Nile Virus. Division of Vector-Borne Diseases, CDC. 30 April 2009.
- Deas, Tia S; Bennett CJ; Jones SA; Tilgner M; Ren P; Behr MJ; Stein DA; Iversen PL; Kramer LD; Bernard KA; Shi PY (May 2007). "In vitro resistance selection and in vivo efficacy of morpholino oligomers against West Nile virus". Antimicrob Agents Chemother 51 (7): 2470–82. doi:10.1128/AAC.00069-07. PMC 1913242. PMID 17485503.
- Hayes EB, Sejvar JJ, Zaki SR, Lanciotti RS, Bode AV, Campbell GL (2005). "Virology, pathology, and clinical manifestations of West Nile virus disease". Emerging Infect. Dis. 11 (8): 1174–9. doi:10.3201/eid1108.050289b. PMC 3320472. PMID 16102303.
- Moskowitz DW, Johnson FE (2004). "The central role of angiotensin I-converting enzyme in vertebrate pathophysiology". Curr Top Med Chem 4 (13): 1433–54. doi:10.2174/1568026043387818. PMID 15379656.
- Arroyo, J.; Miller, C.; Catalan, J.; Myers, G. A.; Ratterree, M. S.; Trent, D. W.; Monath, T. P. (2004). "ChimeriVax-West Nile Virus Live-Attenuated Vaccine: Preclinical Evaluation of Safety, Immunogenicity, and Efficacy". Journal of Virology 78 (22): 12497–12507. doi:10.1128/JVI.78.22.12497-12507.2004. PMC 525070. PMID 15507637.
- Biedenbender, R.; Bevilacqua, J.; Gregg, A. M.; Watson, M.; Dayan, G. (2011). "Phase II, Randomized, Double-Blind, Placebo-Controlled, Multicenter Study to Investigate the Immunogenicity and Safety of a West Nile Virus Vaccine in Healthy Adults". Journal of Infectious Diseases 203 (1): 75–84. doi:10.1093/infdis/jiq003. PMC 3086439. PMID 21148499.
- De Filette M, Ulbert S, Diamond M, Sanders NN (2012). "Recent progress in West Nile virus diagnosis and vaccination". Vet. Res. 43 (1): 16. doi:10.1186/1297-9716-43-16. PMC 3311072. PMID 22380523.
- "West Nile Virus". Division of Vector-Borne Diseases, U.S. Centers for Disease Control and Prevention (CDC).
- CDC—West Nile Virus—NIOSH Workplace Safety and Health Topic
- Recommendations for Protecting Laboratory, Field, and Clinical Workers from West Nile Virus Exposure
- West Nile Virus Resource Guide—National Pesticide Information Center
- Vaccine Research Center (VRC)—Information concerning WNV vaccine research studies
- Nature news article on West Nile paralysis
- CBC News Coverage of West Nile in Canada
- Gene mutation turned West Nile virus into killer disease among crows
- Virus Pathogen Database and Analysis Resource (ViPR): Flaviviridae
- Species Profile- West Nile Virus (Flavivirus), National Invasive Species Information Center, United States National Agricultural Library. Lists general information and resources for West Nile Virus.
- 3D macromolecular structures of the West Nile Virus archived in the EM Data Bank(EMDB)
- West Nile Virus—West Nile Encephalitis Brain Scans
The Earth-Moon System
The moon is the earth's nearest neighbor in space. In addition to its proximity, the moon is also exceptional in that it is quite massive compared to the earth itself, the ratio of their masses being far larger than the similar ratios of other natural satellites to the planets they orbit (though that of Charon and the dwarf planet Pluto exceeds that of the moon and earth). For this reason, the earth-moon system is sometimes considered a double planet. It is the center of the earth-moon system, rather than the center of the earth itself, that describes an elliptical orbit around the sun in accordance with Kepler's laws. It is also more accurate to say that the earth and moon together revolve about their common center of mass, rather than saying that the moon revolves about the earth. This common center of mass lies beneath the earth's surface, about 3,000 mi (4800 km) from the earth's center.
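As a rough check on that figure, the barycenter distance can be estimated from the two masses and the mean earth-moon separation. The sketch below is illustrative only: the mass values are assumed approximate constants rather than figures from this article, and the separation used matches the average distance quoted below.

```python
# Illustrative estimate of how far the earth-moon barycenter lies
# from the earth's center. All constants are approximate assumptions.

M_EARTH = 5.97e24            # mass of the earth, kg
M_MOON = 7.35e22             # mass of the moon, kg
MEAN_DISTANCE_KM = 385_000   # mean earth-moon separation, km

# The barycenter sits on the earth-moon line, offset from the earth's
# center by the moon's share of the total mass.
barycenter_km = MEAN_DISTANCE_KM * M_MOON / (M_EARTH + M_MOON)

print(f"Barycenter: about {barycenter_km:,.0f} km from the earth's center")
# -> roughly 4,700 km, i.e. well inside the earth (mean radius ~6,400 km)
```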
The Lunar Month
The moon was studied, and its apparent motions through the sky recorded, beginning in ancient times. The Babylonians and the Maya, for example, had remarkably precise calendars for eclipses and other astronomical events. Astronomers now recognize different kinds of months, such as the synodic month of 29 days, 12 hr, 44 min, the period of the lunar phases, and the sidereal month of 27 days, 7 hr, 43 min, the period of lunar revolution around the earth.
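The two month lengths differ because the earth itself moves along its orbit: after one sidereal month the sun's apparent position has shifted, so the moon needs roughly two extra days to return to the same phase. A small illustrative calculation follows; the length of the year used here is an assumed constant, not a figure from this article.

```python
# Derive the synodic month from the sidereal month (both in days).

SIDEREAL_MONTH = 27 + 7 / 24 + 43 / 1440   # 27 d 7 hr 43 min, from the text
YEAR = 365.2422                            # tropical year in days (assumed)

# While the moon completes one circuit against the stars, the earth-sun
# direction also advances, so the phase cycle takes longer:
#   1 / T_synodic = 1 / T_sidereal - 1 / T_year
synodic_month = 1 / (1 / SIDEREAL_MONTH - 1 / YEAR)

print(f"Synodic month: about {synodic_month:.2f} days")   # ~29.53 days
```

The result, about 29.53 days, matches the 29 days, 12 hours, 44 minutes quoted above.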
As seen from above the earth's north pole, the moon moves in a counterclockwise direction with an average orbital speed of about 0.6 mi/sec (1 km/sec). Because the lunar orbit is elliptical, the distance between the earth and the moon varies periodically as the moon revolves in its orbit. At perigee, when the moon is nearest the earth, the distance is about 227,000 mi (365,000 km); at apogee, when the moon is farthest from the earth, the distance is about 254,000 mi (409,000 km). The average distance is about 240,000 mi (385,000 km), or about 60 times the radius of the earth itself. The plane of the moon's orbit is tilted, or inclined, at an angle of about 5° with respect to the ecliptic. The line dividing the bright and dark portions of the moon is called the terminator.
Due to the earth's rotation, the moon appears to rise in the east and set in the west, like all other heavenly bodies; however, the moon's own orbital motion carries it eastward against the stars. This apparent motion is much more rapid than the similar motion of the sun. Hence the moon appears to overtake the sun and rises on an average of 50 minutes later each night. There are many variations in this retardation according to latitude and time of year. In much of the Northern Hemisphere, at the autumnal equinox, the harvest moon occurs; moonrise and sunset nearly coincide for several days around full moon. The next succeeding full moon, called the hunter's moon, also shows this coincidence.
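The 50-minute figure follows from the synodic month: each day the moon slips about 1/29.5 of a full circle eastward relative to the sun, and the earth must rotate through that extra angle before the next moonrise. A rough average is sketched below; as noted above, the actual delay varies considerably with latitude and season.

```python
# Rough average of how much later the moon rises each day.

SYNODIC_MONTH_DAYS = 29.53       # lunar phase period in days, from the text
MINUTES_PER_DAY = 24 * 60

# The moon falls behind the sun by 360/29.53 degrees per day, so moonrise
# is delayed by the corresponding fraction of a day, on average.
delay_minutes = MINUTES_PER_DAY / SYNODIC_MONTH_DAYS

print(f"Average moonrise delay: about {delay_minutes:.0f} minutes per day")  # ~49
```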
Although an optical illusion causes the moon to appear larger when it is near the horizon than when it is near the zenith, the true angular size of the moon's diameter is about 1/2°, which also happens to be the sun's apparent diameter. This coincidence makes possible total eclipses of the sun in which the solar disk is exactly covered by the disk of the moon. An eclipse of the moon occurs when the earth's shadow falls onto the moon, temporarily blocking the sunlight that causes the moon to shine. Eclipses can occur only when the moon, sun, and earth are arranged along a straight line—lunar eclipses at full moon and solar eclipses at new moon.
The gravitational influence of the moon is chiefly responsible for the tides of the earth's oceans, the twice-daily rise and fall of sea level. The ocean tides are caused by the flow of water toward the two points on the earth's surface that are instantaneously directly beneath the moon and directly opposite the moon. Because of frictional drag, the earth's rotation carries the two tidal bulges slightly forward of the line connecting earth and moon. The resulting torque slows the earth's rotation while increasing the moon's orbital velocity. As a result, the day is getting longer and the moon is moving farther away from the earth. The moon also raises much smaller tides in the solid crust of the earth, deforming its shape. The tidal influence of the earth on the moon was responsible for making the moon's periods of rotation and revolution equal, so that the same side of the moon always faces earth.
The Columbia Electronic Encyclopedia, 6th ed. Copyright © 2012, Columbia University Press. All rights reserved.
You Are Here
Activity 1: What Is Privilege?
Activity time: 5 minutes
Materials for Activity
- Newsprint, markers and tape
Preparation for Activity
- Post two sheets of newsprint. Label one "Privileges" and the other "Skills."
Description of Activity
Gather the children. Tell them, in your own words:
Our Unitarian Universalist faith challenges us to recognize our privileges and to share them with others. We are also called to discover our gifts and skills, and then share them, too, in order to live a full life while contributing to our society.
Invite the group to explore the differences between privileges and skills. Say something like:
The talents, education, or access to information, resources, money and/or power that we have by chance of birth or geography are called "privileges." These are different from the skills and talents that we develop through practice. For example, having access to a piano, a piano teacher, and the time to take lessons are each privileges; being able to play a classical sonata comes from regular practice and that is a skill you learn. Now we are going to list what we understand to be our privileges and our skills.
Invite volunteers to contribute to both lists. Accept all suggestions. If an item is suggested both as a privilege AND a skill, just write it down. If necessary, suggest some of these ideas:
- Being picky about food (people who are hungry aren't picky)
- Having a bed to sleep on at night
- Having a warm home in the winter
- Having a stable home where people do not act violent
- Going to school
- Not living in a war zone
- Extracurricular activities and lessons that cost money
- Access to the Internet
- Toys (electronic games, especially)
- Learning the same language from birth that is used in your school.
- Earning good grades
- Learning a new sport and staying on the team
- Playing an instrument well
- Being a neat writer
- Building a large vocabulary.
When the list looks full, engage the group with some of these questions:
- Does anything on this list surprise you?
- Is there something you did not think is a privilege that someone else believes is?
- Have you ever thought about being privileged?
- Do you think being privileged is the same as being "spoiled?" What is the difference?
Keep the newsprint posted for use in Activity 2, Window/Mirror Panel - My Privilege. | http://www.uua.org/re/tapestry/children/windows/session11/143758.shtml |
4.21875 | Radioactivity is the emission of high energy particles through the natural phenomenon of the decay of unstable isotopes of chemical elements into more stable forms, which are called daughter products. This type of emission is generally called nuclear radiation.
The most common types of nuclear radiation are alpha and beta radiation, and the processes for each are respectively alpha decay and beta decay. There can also be gamma radiation associated with a nuclear decay. Alpha particles are helium nuclei (two protons and two neutrons); beta particles are high-energy electrons; gamma rays are high-energy photons. Alpha particles can normally be stopped by a sheet of paper or healthy human skin. Beta particles and gamma rays can penetrate one's body and cause great harm. Gamma radiation is also a form of electromagnetic radiation, like X-rays or visible light. (Conventional jargon calls alpha and beta emissions "particles" but gamma emissions "rays", though quantum mechanics shows that particles and rays are really two descriptions of the same thing.)
All forms of radioactivity follow the fundamental rules of mass and energy balance.
- Main article: Alpha decay
As stated above, an alpha particle is the nucleus of a Helium atom, i.e., two protons and two neutrons. This arrangement means the alpha particle has a charge of +2 and a mass number of 4, the symbol for which is ⁴₂He.
For example, the most common isotope of Uranium is Uranium-238. The mass number, 238, is the sum of the number of protons and the number of neutrons. Since all Uranium atoms have 92 protons, there are 146 neutrons. The initial step in Uranium-238 decaying (eventually) into Lead-206 is an alpha decay:
²³⁸₉₂U → ²³⁴₉₀Th + ⁴₂He
Note that the proton count (92 = 90 + 2) is conserved, as are the mass numbers (238 = 234 + 4). So the neutron count (146) is also balanced. No particles are created or destroyed.
This decay releases about 4.3 MeV of kinetic energy, in the form of the motion of the alpha particle. In chemical terms we would say that this decay is an exothermic reaction. The energy comes from the potential nuclear energy of the Uranium atom—the Uranium atom has a higher potential energy (by 4.3 Mev) than the sum of the potential energies of the Thorium and Helium atoms. A careful accounting of the atomic masses (remember, atomic mass is only approximately equal to mass number) will show that mass was lost, in accordance with E=mc².
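As a rough check on that 4.3 MeV figure, the mass defect can be computed from tabulated atomic masses and converted to energy with E=mc². The short Python sketch below does this; the mass values are approximate textbook numbers supplied here for illustration and are not taken from the original article.

```python
# Estimate the energy released by U-238 alpha decay from the mass defect.
# Atomic masses in unified atomic mass units (u); approximate textbook values.
m_u238 = 238.050788    # Uranium-238
m_th234 = 234.043601   # Thorium-234
m_he4 = 4.002602       # Helium-4 (the alpha particle)

MEV_PER_U = 931.494    # energy equivalent of 1 u in MeV, from E = mc^2

mass_defect = m_u238 - (m_th234 + m_he4)   # mass lost in the decay, in u
q_value = mass_defect * MEV_PER_U          # energy released, in MeV

print(f"mass defect : {mass_defect:.6f} u")
print(f"energy out  : {q_value:.2f} MeV")  # close to the 4.3 MeV quoted above
```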
- Main article: Beta decay
As stated above, a beta particle is an electron. This arrangement means the beta particle has a charge of -1 and a mass number of 0, the symbol for which is ⁰₋₁e.
Strontium-90 undergoes beta decay to form Yttrium-90 in the following decay reaction:
⁹⁰₃₈Sr → ⁹⁰₃₉Y + ⁰₋₁e
As with the alpha decay, notice that the particle count is again conserved. The energy released by this decay is 0.55 MeV.
An interesting note about Strontium-90 is that it is a synthetic isotope (meaning it is not found naturally occurring, but must be manufactured) that is a by-product of nuclear weapons explosions. In the 1950s and 1960s, it was common to test nuclear weapons by exploding them in the very high upper atmosphere. Unfortunately, this resulted in a large amount of Strontium-90 particles that eventually settled back to earth, contaminating grasslands. The grasses were eaten by cattle, and the cattle were eaten by humans.
Since Strontium is chemically very similar to Calcium (it is in the same column of the Periodic Table), any strontium that enters the body will tend to replace the calcium in our bones. In the case of Strontium-90, this meant that radioactive strontium was now chemically bonded into our bones, and the 1970s saw a rise in bone cancer as a result. Fortunately, this type of testing was halted, and the half-life of Strontium-90 is a relatively short 28 years, meaning that, at this point, most of the synthetic, radioactive Strontium-90 produced by weapons testing has decayed out of the environment.
Energy of Radioactive Decay
The energies associated with radioactive decay, at least at the single-atom level, are very, very small. The energies are so small, in fact, that we use a special unit called the Electron Volt (eV) rather than the traditional units of Joules, Btu, or foot-pounds.
Just for a point of comparison, it takes about a minute to boil a cup of water in a 1000 Watt microwave. One Watt is equivalent to one Joule per second, so it takes 60,000 Joules of energy to boil a cup of water. But a single Joule of energy is the same as 6.24 × 10^18 eV!
So even when a nuclear decay has an associated energy in the thousands (keV) or millions (MeV) of electron volts, we are still many orders of magnitude away from having enough energy to boil a cup of water. The danger is not from a single atomic decay, but from many trillions of atomic decays occurring in rapid succession, in which case we do reach energies capable of producing serious burns on the skin.
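To make the comparison concrete, the following Python sketch, using only the figures quoted in the text above, estimates how many individual 4.3 MeV alpha decays would be needed to supply the 60,000 Joules that boil a cup of water.

```python
# How many single 4.3 MeV alpha decays would it take to boil a cup of water?
# All input figures are the ones quoted in the text above.
JOULES_TO_BOIL = 60_000        # ~1 minute in a 1000 W microwave
EV_PER_JOULE = 6.24e18         # 1 J expressed in electron volts
DECAY_ENERGY_MEV = 4.3         # energy of one U-238 alpha decay

decay_energy_ev = DECAY_ENERGY_MEV * 1e6
decay_energy_j = decay_energy_ev / EV_PER_JOULE

decays_needed = JOULES_TO_BOIL / decay_energy_j
print(f"one decay releases about {decay_energy_j:.2e} J")
print(f"decays needed to boil the water: {decays_needed:.2e}")   # roughly 9e16 decays
```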
In the early decades of the 20th century, when it was a newly discovered phenomenon, there was much confusion about it, leading to some strange hypotheses. No less an intellectual than H. G. Wells wrote strange things about it. In his 1909 novel Tono-Bungay, the narrator muses:
- To my mind radio-activity is a real disease of matter. Moreover, it is a contagious disease. It spreads. You bring those debased and crumbling atoms near others and those too presently catch the trick of swinging themselves out of coherent existence. It is in matter exactly what the decay of our old culture is in society, a loss of traditions and distinctions and assured reactions. ...I am haunted by a grotesque fancy of the ultimate eating away and dry-rotting and dispersal of all our world. So that while man still struggles and dreams his very substance will change and crumble from beneath him. I mention this here as a queer persistent fancy. Suppose, indeed, that is to be the end of our planet; no splendid climax and finale, no towering accumulation of achievements, but just—atomic decay!
This is not to say that radioactivity isn't dangerous, or that radioactive contamination isn't a tricky problem.
Notes and references
- The mass number is the (integer) number of nucleons. It is very close to the atomic mass of the isotope measured in amu, because the masses of protons and neutrons are very close, and the deviations (due to E=mc²) are small. So the terms are often used nearly interchangeably.
- Wells, H. G. (1909) Tono-Bungay, online Project Gutenberg text; search for text string "real disease" | http://www.conservapedia.com/Radioactive_decay |
4.03125 |
HA ↔ H+ + A-
An acid, a proton donor, donates protons in water, forming hydronium ions (protonated water, H3O+). A base, a proton acceptor, removes protons from water, forming hydroxide ions (deprotonated water, OH-). Curved arrows can be used to demonstrate the mechanism by which a proton is transferred from hydrochloric acid to the base, water. The arrows demonstrate the movement of electrons, but the key feature of the Bronsted-Lowry reaction is the transfer of protons.
A lone pair from the base creates a new bond with an acidic proton, and the electron pair originally linking the proton to the remainder of the acid shifts to become a lone pair on the departing conjugate base.
Water is a neutral compound (the number of hydronium ions equals the number of hydroxide ions via self-dissociation). The equilibrium constant Kw (self-ionization constant) describes this process at 25°C:
H2O + H2O ⇌ H3O+ + OH-
Kw = [H3O+][OH-] = 10^-14 mol^2 L^-2
pH is the negative logarithm of [H3O+]. The concentration of H3O+ in pure water is 10^-7 mol L^-1.
pH = - log [H3O+]
pH in pure water = +7
pH>7 = basic
pH<7 = acidic
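A minimal Python sketch of these definitions follows; the example concentrations are arbitrary values chosen for illustration, not taken from the lesson.

```python
import math

def pH(h3o):
    """pH is the negative base-10 logarithm of the hydronium ion concentration (mol/L)."""
    return -math.log10(h3o)

def classify(ph_value, tol=1e-9):
    if ph_value < 7 - tol:
        return "acidic"
    if ph_value > 7 + tol:
        return "basic"
    return "neutral"

# Example concentrations (illustrative values only).
for conc in (1e-7, 1e-3, 1e-10):
    p = pH(conc)
    print(f"[H3O+] = {conc:.0e} mol/L  ->  pH = {p:.2f}  ({classify(p)})")
```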
The acidity of a general acid (HA) is conveyed by a general equation:
HA + H2O ⇌ H3O+ + A-, with K = [H3O+][A-] / ([HA][H2O])
Acidity constant (Ka) = K[H2O] = [H3O+][A-] / [HA], with units of mol L^-1
Like [H3O+], Ka can be expressed on a logarithmic scale:
pKa = -log Ka
pKa corresponds to the pH at which the acid is 50% dissociated. Acids with pKa < 1 are considered strong; acids with pKa > 4 are considered weak. A table of pKa values for common acids is a useful reference here.
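The same logarithmic idea can be shown numerically. In the Python sketch below, the Ka values are commonly cited approximate figures supplied only for illustration (they are not taken from the lesson's chart), and the strong/weak labels follow the rule of thumb just described.

```python
import math

def pKa(ka):
    """pKa is the negative base-10 logarithm of the acidity constant Ka."""
    return -math.log10(ka)

# Approximate, commonly cited Ka values (mol/L); illustrative assumptions only.
acids = {
    "HCl (hydrochloric acid)": 1e7,
    "HF (hydrofluoric acid)":  6.8e-4,
    "CH3COOH (acetic acid)":   1.8e-5,
}

for name, ka in acids.items():
    p = pKa(ka)
    if p < 1:
        strength = "strong"
    elif p > 4:
        strength = "weak"
    else:
        strength = "intermediate by this rule of thumb"
    print(f"{name:25s} Ka = {ka:.1e}   pKa = {p:6.2f}   ({strength})")
```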
A-, formed when the acid HA donates a proton, is referred to as the conjugate base.
HA, formed when the base A- accepts a proton, is referred to as the conjugate acid.
acid + base <---> conjugate base + conjugate acid
Conjugate acid and bases are inversely related:
strong acid = weak conjugate base
strong base = weak conjugate acid
Ex. HCl (strong acid) ↔ H+ + Cl- (weak conjugate base)
CH3OH (weak acid) ↔ H+ + CH3O- (strong conjugate base)
The relative strength of an acid (HA), and the corresponding weakness of its conjugate base, can be estimated using structural properties such as the electronegativity and size of A.
Summary: Basicity of A- decreases to the right and down the periodic table; acidity of HA increases to the right and down the periodic table.
Several molecules have the ability to act as acids or bases under differing conditions, thus they are Amphoteric.
ex. water, nitric acid, acetic acid
H3O+ ← (water accepts a proton, acting as a base) H2O (water donates a proton, acting as an acid) → OH-
Lewis acid = electron pair acceptor
Lewis Base = electron pair donor
A Lewis base shares its lone pair of electrons with a Lewis acid to form a new covalent bond; this can be expressed by an arrow drawn in the direction of electron movement (base to acid).
Electrophiles and Nucleophiles interact through movement of an electron pair
These are processes that exhibit very similar characteristics to acid-base reactions and are described using the same electron-pushing arrows.
Electrophile ("electron loving"): an electron-deficient atom, ion, or molecule that has an affinity for an electron pair and will bond to a base or nucleophile. (All Lewis acids are electrophiles.)
Nucleophile ("nucleus loving"): an atom, ion, or molecule that has an electron pair that may be donated in bonding to an electrophile or Lewis acid. (All nucleophiles are Lewis bases.)
The diagram below demonstrates the flow of electrons using electron pushing arrows:
Haloalkanes (compounds with carbon-halogen bonds) generally undergo nucleophilic substitution reactions. Despite differing halogens and arrangements of substituents, all combinations behave similarly, allowing us to conclude that it is the presence of the carbon-halogen bond that controls the behavior of the haloalkane. The C-X bond is the functional group and the controlling factor of reactivity.
CH3I + NH3 --> CH3NH3+ + I- | http://chemwiki.ucdavis.edu/Core/Organic_Chemistry/Fundamentals/Acids_and_Bases%3B_Electrophiles_and_Nucleophiles |
4.03125 | History of rail transport in India
- This article is part of the history of rail transport by country series.
The history of rail transport in India began in the mid-nineteenth century.
Prior to 1850, there were no railway lines in the country. This changed with the first railway in 1853. Railways were gradually developed, for a short while by the British East India Company and subsequently by the colonial British government, primarily to transport troops for their numerous wars, and secondarily to transport cotton for export to mills in the UK. Transport of Indian passengers received little interest until 1947, when India gained independence and began to develop the railways in a more judicious manner.
By 1929, there were 66,000 km (41,000 mi) of railway lines serving most of the districts in the country. At that point of time, the railways represented a capital value of some £687 million, and carried over 620 million passengers and approximately 90 million tons of goods a year. The railways in India were a group of privately owned companies, mostly with British shareholders and whose profits invariably returned to Britain. The military engineers of the East India Company, later of the British Indian Army, contributed to the birth and growth of the railways which gradually became the responsibility of civilian technocrats and engineers. However, construction and operation of rail transportation in the North West Frontier Province and in foreign nations during war or for military purposes was the responsibility of the military engineers.
The first train in the country ran between Roorkee and Piran Kaliyar on December 22, 1851. To temporarily solve the irrigation problems of farmers at the time, a large quantity of clay was required, which was available in the Piran Kaliyar area, 10 km from Roorkee; the necessity of bringing this clay compelled the engineers to consider running a train between the two points. In 1845, along with Sir Jamsetjee Jejeebhoy, Hon. Jaganath Shunkerseth (known as Nana Shankarsheth) formed the Indian Railway Association. Eventually, the association was incorporated into the Great Indian Peninsula Railway, and Jeejeebhoy and Shankarsheth became the only two Indians among the ten directors of the GIP railways. As a director, Shankarsheth participated in the very first commercial train journey in India, between Bombay and Thane on 16 April 1853, in a 14-carriage train drawn by three locomotives named Sultan, Sindh and Sahib. The journey was around 21 miles in length and took approximately 45 minutes.
A British engineer, Robert Maitland Brereton, was responsible for the expansion of the railways from 1857 onwards. The Calcutta-Allahabad-Delhi line was completed by 1864. The Allahabad-Jabalpur branch line of the East Indian Railway opened in June 1867. Brereton was responsible for linking this with the Great Indian Peninsula Railway, resulting in a combined network of 6,400 km (4,000 mi). Hence it became possible to travel directly from Bombay to Calcutta via Allahabad. This route was officially opened on 7 March 1870 and it was part of the inspiration for French writer Jules Verne's book Around the World in Eighty Days. At the opening ceremony, the Viceroy Lord Mayo concluded that "it was thought desirable that, if possible, at the earliest possible moment, the whole country should be covered with a network of lines in a uniform system".
By 1875, about £95 million (equal to £117 billion in 2012) had been invested by British companies in Indian guaranteed railways. It later transpired that there was heavy corruption in these investments, on the part of both members of the British colonial government in India and the companies that supplied machinery and steel in Britain. This resulted in railway lines and equipment costing nearly double what they should have cost.
By 1880 the network route was about 14,500 km (9,000 mi), mostly radiating inward from the three major port cities of Bombay, Madras and Calcutta. By 1895, India had started building its own locomotives and in 1896 sent engineers and locomotives to help build the Uganda Railways.
In 1900, the GIPR became a British government owned company. The network spread to the modern day states of Assam, Rajasthan, Telangana and Andhra Pradesh and soon various independent kingdoms began to have their own rail systems. In 1901, an early Railway Board was constituted, but the powers were formally invested under Lord Curzon. It served under the Department of Commerce and Industry and had a government railway official serving as chairman, and a railway manager from England and an agent of one of the company railways as the other two members. For the first time in its history, the Railways began to make a profit.
In 1907, almost all the rail companies were taken over by the government. The following year, the first electric locomotive made its appearance. With the arrival of World War I, the railways were used to meet the needs of the British outside India. With the end of the war, the railways were in a state of disrepair and collapse.
In 1920, with the network having expanded to 61,220 km, a need for central management was mooted by Sir William Acworth. Based on the East India Railway Committee chaired by Acworth, the government took over the management of the Railways and detached the finances of the Railways from other governmental revenues.
The growth of the rail network significantly decreased the impact of famine in India. According to Robin Burgess and Dave Donaldson, "the ability of rainfall shortages to cause famine disappeared almost completely after the arrival of railroads."
The period between 1920 and 1929 was a period of economic boom. Following the Great Depression, however, the company suffered economically for the next eight years. The Second World War severely crippled the railways. Trains were diverted to the Middle East and later, the Far East to combat the Japanese. Railway workshops were converted to ammunitions workshops and some tracks (such as Churchgate to Colaba in Bombay) were dismantled for use in war in other countries. By 1946 all rail systems had been taken over by the government.
In 1904, the idea to electrify the railway network was proposed by W.H White, chief engineer of the then Bombay Presidency government. He proposed the electrification of the two Bombay-based companies, the Great Indian Peninsula Railway and the Bombay Baroda and Central India Railway (now known as CR and WR respectively).
Both the companies were in favour of the proposal. However, it took another year to obtain necessary permissions from the British government and to upgrade the railway infrastructure in Bombay city. The government of India appointed Mr Merz as a consultant to give an opinion on the electrification of railways. But Mr Merz resigned before making any concrete suggestions, except the replacement of the first Vasai bridge on the BB&CI by a stronger one.
Moreover, as the project was in the process of being executed, the First World War broke out and put the brakes on the project. The First World War placed heavy strain on the railway infrastructure in India. Railway production in the country was diverted to meet the needs of British forces outside India. By the end of the war, Indian Railways were in a state of dilapidation and disrepair.
By 1920, Mr Merz formed a consultancy firm of his own with a partner, Mr Maclellan. The government retained his firm for the railway electrification project. Plans were drawn up for rolling stock and electric infrastructure for Bombay-Poona/Igatpuri/Vasai and Madras Tambaram routes.
The secretary of state of India sanctioned these schemes in October 1920. All the inputs for the electrification, except power supply, were imported from various companies in England.
And similar to the running of the first ever railway train from Bombay to Thane on April 16, 1853, the first-ever electric train in India also ran from Bombay. The debut journey, however, was a shorter one. The first electric train ran between Bombay (Victoria Terminus) and Kurla, a distance of 16 km, on February 3, 1925 along the city’s harbour route.
The section was electrified at 1,500 volts DC. The opening ceremony was performed by Sir Leslie Wilson, the governor of Bombay, at Victoria Terminus station in the presence of a very large and distinguished gathering.
India's first electric locos (two of them), however, had already made their appearance on Indian soil much earlier. They were delivered to the Mysore Gold Fields by Bagnalls (Stafford) with overhead electrical equipment by Siemens as early as 1910.
Various sections of the railway network were progressively electrified and commissioned between 1925 and 1930.
In 1956, the government decided to adopt 25kV AC single-phase traction as a standard for the Indian Railways to meet the challenge of the growing traffic. An organisation called the Main Line Electrification Project, which later became the Railway Electrification Project and still later the Central Organisation for Railway Electrification, was established. The first 25kV AC traction section in India is Burdwan-Mughalsarai via the Grand Chord.
Corruption in British Indian Railways
Sweeney (2015) describes the large-scale corruption that existed in the financing of British Indian railways, from their commencement in the 1850s, when tracks were being laid out, and later in their operation. The ruling colonial British government was focused on transporting goods for export to Britain, and hence did not use the railways to transport food to prevent famines such as the Bengal famines of 1905 and 1942. Indian economic development was never considered when deciding the rail network or the places to be connected. This also resulted in the construction of many white elephants paid for by the natives, as commercial interests lobbied government officials with kickbacks. Railway officials, especially ICS officials, and British nationals who participated in decision making, such as James Mackay of Bengal, were rewarded after retirement with directorships in the City or in the London head offices and board rooms of these so-called Indian railway companies. Poor resource allocation resulted in losses of hundreds of millions of pounds for Indians, including opportunity costs. Most shareholders of the railway companies were British, and the head offices of most of these companies were in London, allowing Indian money to flow out of the country legally. As a result, railway debt made up nearly 50% of the Indian national debt from 1903 to 1945. Roberts and Minto spent large amounts trying to develop the Indian railways in the North West Frontier Province, resulting in disproportionately large losses. Guaranteed and subsidised companies were floated to run the railways, and large guarantee payments were made despite there being a famine in Bengal. The EIR, GIPR and Bombay Baroda railways (all operating in India and registered in London) held monopolies which generated profits, but these were never reinvested for the development of India.
Start of Independent Indian Railways
Following independence in 1947, India inherited a decrepit rail network. About 40 per cent of the railway lines were in the newly created Pakistan. Many lines had to be rerouted through Indian territory and new lines had to be constructed to connect important cities such as Jammu. A total of 42 separate railway systems, including 32 lines owned by the former Indian princely states existed at the time of independence spanning a total of 55,000 km. These were amalgamated into the Indian Railways. Since then, independent India has more than quadrupled the length of railway lines in the country.
In 1952, it was decided to reorganise the existing rail networks into zones. A total of six zones came into being in 1952. As India developed its economy, almost all railway production units started to be built indigenously. The Railways began to electrify its lines with AC traction. On 6 September 2003, six further zones were created from the existing zones for administrative purposes, and one more zone was added in 2006. Indian Railways now has sixteen zones.
In 1985, steam locomotives were phased out. In 1987, computerisation of reservations was first carried out in Bombay, and in 1989 train numbers were standardised to four digits. In 1995, the entire railway reservation system was computerised through the railway's own network. In 1998, the Konkan Railway was opened, spanning difficult terrain through the Western Ghats. In 1984 Kolkata became the first Indian city to get a metro rail system, followed by the Delhi Metro in 2002, Bangalore's Namma Metro in 2011, the Mumbai Metro and Mumbai Monorail in 2014, and the Chennai Metro in 2015. Many other Indian cities are currently planning urban rapid transit systems.
- Dalrymple, William (4 March 2015). "The East India Company: The original corporate raiders". The Guardian. Retrieved 16 August 2015.
- Sandes, Lt Col E.W.C. (1935). The Military Engineer in India, Vol II. Chatham: The Institution of Royal Engineers.
- "Postindependence: from dominance to decline". http://www.britannica.com/. Britanica Portal. Retrieved 24 June 2014. External link in
- R.P. Saxena, Indian Railway History Timeline
- British investment in Indian railway reaches £100m by 1875
- Burgess, Robin; Donaldson, Dave (2010). "Can Openness Mitigate the Effects of Weather Shocks? Evidence from India's Famine Era.". American Economic Review 100 (2): 453 in pages 449–53. doi:10.1257/aer.100.2.449. Retrieved 28 July 2015.
- Sweeney, Stuart (2015). Financing India's Imperial Railways, 1875–1914. London: Routledge. pp. 186–188. ISBN 1317323777.
- Tharoor, Shashi. "How a Debate Was Won in London Against British Colonisation of India". NDTV News. Retrieved 16 August 2015.
- Andrew, W. P. (1884). Indian Railways. London: W H Allen.
- Awasthi, A. (1994). History and Development of Railways in India. New Delhi: Deep and Deep Publications.
- Bhandari, R.R. (2006). Indian railways : Glorious 150 Years (2nd ed.). New Delhi: Publications Division, Ministry of Information & Broadcasting, Govt. of India. ISBN 8123012543.
- Ghosh, S. (2002). Railways in India – A Legend. Kolkata: Jogemaya Prokashani.
- Government of India Railway Board (1919). History of Indian Railways Constructed and In Progress corrected up to 31st March 1918. India: Government Central Press.
- Hurd, John; Kerr, Ian J. (2012). India's Railway History: A Research Handbook. Handbook of Oriental Studies. Section 2, South Asia, 27. Leiden; Boston: Brill. ISBN 9789004230033.
- Huddleston, George (1906). History of the East Indian Railway. Calcutta: Thacker, Spink and Co.
- Kerr, Ian J. (1995). Building the Railways of the Raj. Delhi: Oxford University Press.
- Kerr, Ian J. (2001). Railways in Modern India. Oxford in India Readings. New Delhi; New York: Oxford University Press. ISBN 0195648285.
- Kerr, Ian J. (2007). Engines of Change: the railroads that made India. Engines of Change series. Westport, Conn, USA: Praeger. ISBN 0275985644.
- Khosalā, Guradiāla Siṅgha (1988). A History of Indian Railways. New Delhi: Ministry of Railways, Railway Board, Government of India. OCLC 311273060.
- Law Commission (England and Wales) (2007) PDF (1.62 MiB)
- Rao, M.A. (1999). Indian Railways (3rd ed.). New Delhi: National Book Trust, India. ISBN 8123725892.
- Sahni, Jogendra Nath (1953). Indian Railways: One Hundred Years, 1853 to 1953. New Delhi: Ministry of Railways (Railway Board). OCLC 3153177.
- Satow, M. & Desmond R. (1980). Railways of the Raj. London: Scolar Press.
- South Indian Railway Co. (1900). Illustrated Guide to the South Indian Railway Company, Including the Mayavaram-Mutupet and Peralam-Karaikkal Railways. Madras: Higginbotham.
- — (1910). Illustrated Guide to the South Indian Railway Company. London.
- — (2004) . Illustrated Guide to the South Indian Railway Company. Asian Educational Services. ISBN 81-206-1889-0.
- Vaidyanathan, K.R. (2003). 150 Glorious Years of Indian Railways. Mumbai: English Edition Publishers and Distributors (India). ISBN 8187853492.
- Westwood, J.N. (1974). Railways of India. Newton Abbot, Devon, UK; North Pomfret, Vt, USA: David & Charles. ISBN 071536295X.
- "History of the Indian railways in chronological order". IRFC server. Indian Railways Fan Club. Retrieved 2007-10-21.
- Roychoudhury, S. (2004). "A chronological history of India's railways". Retrieved 2007-10-21. | https://en.wikipedia.org/wiki/History_of_rail_transport_in_India |
4.15625 | As the sun heads toward its 2013 maximum, the corresponding increase in space weather may temporarily strip the radiation belts around Earth of their charged electrons. But a new study of data recorded by 11 independent spacecraft reveals that the deadly particles are blown into space rather than cast into our planet's atmosphere, as some scientists have suggested.
Streams of highly charged electrons zip through the Van Allen radiation belts circling Earth. When particles from the sun collide with the planet's magnetic field, which shields Earth from the worst effects, the resulting geomagnetic storms can decrease the number of dangerous electrons.
Where those particles go is something physicists have long puzzled over — and since they could wreak havoc on sensitive telecommunication satellites and pose a risk to astronauts in space, it's an important question, researchers say.
At the heart of the geomagnetic storm mystery are strange dips, known as dropouts, in the number of charged particles in the radiation belts. These lapses can happen multiple times per year, but when the sun is going through an active period — as it is now — the number can increase to several times per month, scientists involved in the new study explained. [Amazing auroras from geomagnetic storms]
Astronomers have previously suggested that the missing particles could have been ejected toward Earth, where they might have been absorbed by the atmosphere. This activity still could explain some of the loss, particularly that which occurs when no geomagnetic storm has been detected, but not all of it.
A team of scientists from the University of California, Los Angeles, observed a geomagnetic storm in January 2011 with a plethora of instruments. They noticed that as intense solar activity pushes against the outer edge of Earth's magnetic field on the daylight side, the lines can cross, allowing the damaging electrons to escape into space.
"Those particles are entirely lost," lead scientist Drew Turner told SPACE.com. The research is detailed in the Jan. 29 edition of the journal Nature Physics.
Although material ejected from the sun can deplete the Earth's outer radiation belt, it can also resupply the belt with more charged particles in only a few days, Turner said.
Previous studies have found that the volume of electrons can spike after a solar event. When the belts are first almost depleted, Turner's observations imply a larger influx than previously accounted for.
The team used 11 different satellites, including NASA's five Themis spacecraft and two weather satellites operated by the National Oceanic and Atmospheric Administration and the European Organization for the Exploitation of Meteorological Satellites, to study a small geomagnetic storm. The abundance of spacecraft allowed them to capture a complete picture of the interactions between Earth's magnetic field and the particles streaming from the sun.
"It's impossible to get the sense of the entire process with one pinpoint of information," Turner said.
He called the lineup of the various crafts "lucky."
The upcoming launch of NASA's Radiation Belt Storm Probes Mission (RBSP), scheduled for August 2012, may help to remove some elements of chance from further studies.
"RBSP will provide two more points of view with perfect instruments for radiation belt studies," he said. | http://www.space.com/14400-killer-electrons-radiation-belt-space.html?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+spaceheadlines+%28SPACE.com+Headline+Feed%29 |
4.15625 | Have you ever tried to compare rational numbers? Take a look at this dilemma.
Terry is studying the stock market. She notices that in one day, the stock that she was tracking has lost value. It decreased .5%. On the next day, it lost value again. This time decreasing .45.
Which day had the worst decrease? Comparing rational numbers will help you with this task.
To compare and order rational numbers, you should first convert each number to the same form so that they are easier to compare. Usually it will be easier to convert each number to a decimal. Then you can use a number line to help you order the numbers.
Take a look at this situation.
Place the following numbers on a number line in their approximate locations: 8%, 1/8, 0.8
Convert each number to a decimal: 8% = 0.08, 1/8 = 0.125, and 0.8 is already a decimal.
All of the numbers are between 0 and 1. You can use place value to find the correct order of the numbers. Since 0.08 has a 0 in the tenths place, 8% is the least number. Since 0.125 has a 1 in the tenths place, 1/8 is the next greatest number. Since 0.8 has an 8 in the tenths place, it is the greatest number.
We wrote these three values on a number line. This is one way to show the different values. We can also use inequality symbols.
Inequality symbols are < less than, > greater than, ≤ less than or equal to, and ≥ greater than or equal to.
Here is another one.
Which inequality symbol correctly compares 0.29% to 0.029?
Change the percent to a decimal. Then use place value to compare the numbers.
Move the decimal point two places to the left.
Now compare the place value of each number. Written as a decimal, 0.29% is 0.0029. Both numbers have a 0 in the tenths place. 0.029 has a 2 in the hundredths place, while 0.0029 has a 0 in the hundredths place. So 0.0029 is less than 0.029, and the correct comparison is 0.29% < 0.029.
Remember, the key to comparing and ordering rational numbers is to be sure that they are all in the same form. You want to have all fractions, all decimals or all percentages so that your comparisons are accurate. You may need to convert before you compare!!
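One way to double-check comparisons like these is to convert every value to a decimal programmatically. Here is a minimal Python sketch using the numbers from the example above:

```python
from fractions import Fraction

def to_decimal(value):
    """Convert a percent string, a fraction string, or a decimal string to a float."""
    value = value.strip()
    if value.endswith("%"):
        return float(value[:-1]) / 100     # "8%"  -> 0.08
    if "/" in value:
        return float(Fraction(value))      # "1/8" -> 0.125
    return float(value)                    # "0.8" -> 0.8

numbers = ["8%", "1/8", "0.8"]
for n in numbers:
    print(f"{n:>4} = {to_decimal(n)}")

# Ordered from least to greatest:
print(sorted(numbers, key=to_decimal))     # ['8%', '1/8', '0.8']
```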
Now let's go back to the dilemma from the beginning of the Concept.
To figure out this dilemma, you have to compare .5% and .45.
First, let's convert them both to percents.
.5% is already a percent.
.45 becomes 45%
Now let's compare.
.5% < 45%
The second day was definitely worse.
Rational Number: a number that can be written in fraction form.
Integers: the set of whole numbers and their opposites.
Percent: a number representing a part out of 100.
Terminating Decimal: a decimal that has an ending even though many digits may be present.
Repeating Decimal: a decimal in which one or more digits repeat in a pattern.
Irrational Number: a decimal that has no ending; pi, or 3.14..., is an example.
Inequality Symbols: symbols used to compare numbers, such as < or >.
Here is one for you to try on your own.
Order the following rational numbers from least to greatest.
First, let's convert them all to the same form. We could use fractions,decimals or percents, but for this situation, let's use percents.
.5% stays the same.
Now we can easily order them. Be sure to write them as they first appeared.
This is our answer.
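Because the specific numbers for this practice problem are not shown above, the following Python sketch uses made-up values purely to illustrate the "convert everything to percents, then sort" strategy described in the answer.

```python
from fractions import Fraction

def to_percent(value):
    """Express a decimal, fraction string, or percent string as a percent value."""
    if value.endswith("%"):
        return float(value[:-1])              # "0.5%" -> 0.5
    if "/" in value:
        return float(Fraction(value)) * 100   # "2/5"  -> 40.0
    return float(value) * 100                 # "0.45" -> 45.0

# Hypothetical mixed-form values, chosen only to illustrate the method.
values = ["0.5%", "0.45", "2/5", "0.09"]

for v in sorted(values, key=to_percent):      # least to greatest
    print(f"{v:>5} = {to_percent(v):6.2f}%")
```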
Khan Academy Compare and Order Rational Numbers
Directions: Compare each pair of rational numbers using < or >.
.34 −−−−− .87
−8 −−−−− −11
1/6 −−−−− 7/8
.45 −−−−− 50%
66% −−−−− 3/4
.78 −−−−− 77%
4/9 −−−−− 25%
.989898 −−−−− .35
.67 −−−−− 32%
.123000 −−−−− .87
Directions: Use the order of operations to evaluate the following expressions.
3x, when x is .50
4y, when y is 34
5x+1, when x is −12
6y−7, when y is 12
3x−4x, when x is −5
6x+8y, when x is 2 and y is −4 | http://www.ck12.org/book/CK-12-Middle-School-Math-Concepts-Grade-8/r10/section/2.16/ |
4.09375 | Phosphorus is a finite (limited) resource which is relatively scarce and is not evenly distributed across the Earth. Only a few countries have significant reserves and these are (in the order of phosphate rock economic reserves): China, Morocco and Western Sahara, United States and Jordan.
Means of phosphorus production - other than mining - are unavailable because of its non-gaseous environmental cycle. The predominant source of phosphorus comes in the form of phosphate rock and in the past guano. According to some researchers, Earth's phosphorus reserves are expected to be completely depleted in 50–100 years and peak phosphorus to be reached in approximately 2030. Others suggest that supplies will last for several hundreds of years. The question is not settled and researchers in different fields regularly publish different estimates of the rock phosphate reserves.
The peak phosphorus concept is connected with the concept of planetary boundaries. Phosphorus, as part of biogeochemical processes, belongs to one of the nine "Earth system processes" which are known to have boundaries. As long as the boundaries are not crossed, they mark the "safe zone" for the planet.
Estimates of world phosphate reserves
The accurate determination of peak phosphorus is dependent on knowing the total world's phosphate reserves and the future demand for rock phosphate. In 2012, the United States Geological Survey (USGS) estimated that phosphorus reserves worldwide are 71 billion tons, while world mining production in 2011 was 0.19 billion tons and this has been taken to mean that there were enough reserves to last for at least 370 years and possibly a lot longer. These reserve figures are widely used, but others suggest that there has been little external verification of the estimate.
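The "at least 370 years" figure is simply a static reserve-to-production ratio, which assumes production stays at the 2011 rate; a one-line check in Python using the USGS numbers quoted above:

```python
# Static lifetime of world phosphate rock reserves at the 2011 mining rate.
reserves_bt = 71.0      # billion tons (USGS 2012 estimate, quoted above)
production_bt = 0.19    # billion tons mined in 2011

years = reserves_bt / production_bt
print(f"reserves / annual production = {years:.0f} years")   # about 374 years
```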
There are many different views as to the extent of world phosphate resources. The International Fertilizer Development Center (IFDC) in a 2010 report estimated that global phosphate rock resources would last for several hundred years. This is disputed by a recent review which concludes that the IFDC report "presents an inflated picture of global reserves, in particular those of Morocco, where largely hypothetical and inferred resources have simply been converted to “reserves". Another review suggests that it is "not very likely" that there would be significant depletion of extractable rock phosphate by 2100.
"Reserves" refer to the amount assumed recoverable at current market prices and "resources" mean total estimated amounts in the Earth's crust. Phosphorus comprises 0.1% by mass of the average rock (while, for perspective, its typical concentration in vegetation is 0.03% to 0.2%), and consequently there are quadrillions of tons of phosphorus in Earth's 3 * 1019 ton crust, albeit at predominantly lower concentration than the deposits counted as reserves from being inventoried and cheaper to extract.
Economists have pointed out that there do not need to be shortages of rock phosphate to cause price fluctuations, as these have already occurred due to various demand and supply side factors.
Rock phosphate shortages (or just significant price increases) would have a big impact on the world's food security. Many agricultural systems depend on supplies of inorganic fertiliser, which use rock phosphate. Unless systems change, shortages of rock phosphate could lead to shortages of inorganic fertiliser, which could in term affect crop growth and cause starvation.
Exhaustion of guano reserves
In 1609 Garcilaso de la Vega wrote the book Comentarios Reales in which he described many of the agricultural practices of the Incas prior to the arrival of the Spaniards and introduced the use of guano as a fertilizer. As Garcilaso described, the Incas near the coast harvested guano. In the early 1800s Alexander von Humboldt introduced guano as a source of agricultural fertilizer to Europe after having discovered it on islands off the coast of South America. It has been reported that, at the time of its discovery, the guano on some islands was over 30 meters deep. The guano had previously been used by the Moche people as a source of fertilizer by mining it and transporting it back to Peru by boat. International commerce in guano didn't start until after 1840. By the start of the 20th century guano had been nearly completely depleted and was eventually overtaken with the discovery of superphosphate.
Phosphorus conservation and recycling
A huge amount of phosphorus is transferred from the soil in one location to another as food is transported across the world, taking the phosphorus it contains with it. Once consumed by humans, it can end up in the local environment (in the case of open defecation which is still widespread on a global scale) or in rivers or the ocean via sewage systems and sewage treatment plants in the case of cities connected to sewer systems. An example of one such crop in South America that takes up large amounts of phosphorus is soy. At the end of its journey, the phosphorus often ends up in rivers in Europe and the USA.
In an effort to postpone the onset of peak phosphorus several methods of reducing and reusing phosphorus are in practice, such as in agriculture and in sanitation systems. The Soil Association, the UK organic agriculture certification and pressure group, issued a report in 2010 "A Rock and a Hard Place" encouraging more recycling of phosphorus. One potential solution to the shortage of phosphorus is greater recycling of human and animal wastes back into the environment.
Reducing agricultural runoff and soil erosion can reduce the frequency with which farmers have to reapply phosphorus to their fields. Agricultural methods such as no-till farming, terracing, contour tilling, and the use of windbreaks have been shown to reduce the rate of phosphorus depletion from farmland. These methods still depend on periodic application of phosphate rock to the soil, and as such, methods to recycle the lost phosphorus have also been proposed. Perennial vegetation, such as grassland or forest, is much more efficient in its use of phosphate than arable land. Strips of grassland and/or forest between arable land and rivers can greatly reduce losses of phosphate and other nutrients.
Integrated farming systems which use animal sources to supply phosphorus for crops do exist at smaller scales, and application of the system to a larger scale is a potential alternative for supplying the nutrient, although it would require significant changes to the widely adopted modern crop fertilizing methods.
The oldest method of recycling phosphorus is through the reuse of animal manure and human excreta in agriculture. Via this method, phosphorus in the foods consumed are excreted, and the animal or human excreta are subsequently collected and re-applied to the fields. Although this method has maintained civilizations for centuries the current system of manure management is not logistically geared towards application to crop fields on a large scale. At present, manure application could not meet the phosphorus needs of large scale agriculture. Despite that, it is still an efficient method of recycling used phosphorus and returning it to the soil.
Sewage treatment plants that have an enhanced biological phosphorus removal step produce a sewage sludge that is rich in phosphorus. Various processes have been developed to extract phosphorus from sewage sludge directly, from the ash after incineration of the sewage sludge or from other products of sewage sludge treatment. This includes the extraction of phosphorus rich materials such as struvite from waste processing plants. The struvite can be made by adding magnesium to the waste. Some companies such as Ostara in Canada and NuReSys in Belgium are already using this technique to recover phosphate. Ostara has eight operating plants worldwide.
Research on phosphorus recovery methods from sewage sludge has been carried out in Sweden and Germany since around 2003, but the technologies currently under development are not yet cost effective, given the current price of phosphorus on the world market.
- Cordell, Dana; Drangert, Jan-Olof; White, Stuart (2009). "The story of phosphorus: Global food security and food for thought". Global Environmental Change 19 (2): 292–305. doi:10.1016/j.gloenvcha.2008.10.009. ISSN 0959-3780.
- Rosemarin, A. (2010). Peak Phosphorus, The Next Inconvenient Truth? - 2nd International Lecture Series on Sustainable Sanitation, World Bank, Manila, October 15, 2010.
- Neset, Tina-Simone S.; Cordell, Dana (2011). "Global phosphorus scarcity: identifying synergies for a sustainable future". Journal of the Science of Food and Agriculture 92 (1): 2–6. doi:10.1002/jsfa.4650.
- Lewis, Leo (23 June 2008). "Scientists warn of lack of vital phosphorus as biofuels raise demands" (PDF). Times Online.
- IFDC.org - IFDC Report Indicates Adequate Phosphorus Resources, Sep-2010
- Edixhoven, J. D.; Gupta, J.; Savenije, H. H. G. (2014). "Recent revisions of phosphate rock reserves and resources: a critique". Earth System Dynamics 5 (2): 491–507. doi:10.5194/esd-5-491-2014. ISSN 2190-4987.
- Rockström, J., W. Steffen, K. & 26 others (2009) Planetary boundaries: exploring the safe operating space for humanity. Ecology and Society 14(2): 32.
- U.S. Geological Survey Phosphate Rock
- Sutton, M.A.; Bleeker, A.; Howard, C.M.; et al. (2013). Our Nutrient World: The challenge to produce more food and energy with less pollution. Centre for Ecology and Hydrology, Edinburgh on behalf of the Global Partnership on Nutrient Management and the International Nitrogen Initiative. ISBN 978-1-906698-40-9.
- Gilbert, Natasha (8 October 2009). "The disappearing nutrient". Nature 461: 716–718. doi:10.1038/461716a.
- Van Vuuren, D.P.; Bouwman, A.F.; Beusen, A.H.W. (2010). "Phosphorus demand for the 1970–2100 period: A scenario analysis of resource depletion". Global Environmental Change 20 (3): 428–439. doi:10.1016/j.gloenvcha.2010.04.004. ISSN 0959-3780.
- U.S. Geological Survey Phosphorus Soil Samples
- Abundance of Elements
- American Geophysical Union, Fall Meeting 2007, abstract #V33A-1161. Mass and Composition of the Continental Crust
- Heckenmüller, M.; Narita, D.; Klepper, G. (2014). "Global availability of phosphorus and its implications for global food supply: An economic overview" (PDF). Kiel Working Paper, No. 1897. Retrieved May 2015.
- Amundson, R.; Berhe, A. A.; Hopmans, J. W.; Olson, C.; Sztein, A. E.; Sparks, D. L. (2015). "Soil and human security in the 21st century". Science 348 (6235): 1261071–1261071. doi:10.1126/science.1261071. ISSN 0036-8075.
- Pollan, Michael (11 April 2006). The Omnivore's Dilemma: A Natural History of Four Meals. Penguin Press. ISBN 1-59420-082-3.
- Leigh, G. J. (2004). The World's Greatest Fix: A History of Nitrogen and Agriculture. Oxford University Press. ISBN 0-19-516582-9.
- Skaggs, Jimmy M. (May 1995). The Great Guano Rush: Entrepreneurs and American Overseas Expansion. St. Martin's Press. ISBN 0-312-12339-6.
- EOS magazine, May 2013
- soilassociation.org - A rock and a hard place, Peak phosphorus and the threat to our food security, 2010
- Burns, Melinda (10 February 2010). "The Story of P(ee)". Miller-McCune. Retrieved 2 February 2012.
- Udawatta, Ranjith P.; Henderson, Gray S.; Jones, John R.; Hammer, David (2011). "Phosphorus and nitrogen losses in relation to forest, pasture and row-crop land use and precipitation distribution in the midwest usa". Journal of Water Science 24 (3): 269–281.
- Sartorius, C., von Horn, J., Tettenborn, F. (2011). Phosphorus recovery from wastewater – state-of-the-art and future potential. Conference presentation at Nutrient Recovery and Management Conference organised by International Water Association (IWA) and Water Environment Federation (WEF) in Florida, USA
- Hultman, B., Levlin, E., Plaza, E., Stark, K. (2003). Phosphorus Recovery from Sludge in Sweden - Possibilities to meet proposed goals in an efficient, sustainable and economical way. | https://en.wikipedia.org/wiki/Peak_phosphorus |
4.0625 |
The absence of Southern members of Congress allowed the Northern Republicans (and Democrats) to act in the economic interests of the North during the Civil War. The main impact of this was to allow the Congress to pass laws that helped to develop the west.
Before the war, the North and South could not agree on developing the west. Of course, the South wanted slavery to be allowed while the North did not. This blocked any real agreement on what to do. With the Southerners out of the way, Congress developed the west. In 1862, it passed three laws that were very important in this. It passed the Pacific Railroad Acts, the Homestead Act, and the Morrill Land Grant Act. These laws helped to build the railroads that brought settlers west. They helped to lure settlers with the promise of cheap land. They helped to create colleges that would help develop new and better agricultural techniques.
By doing these things, the Congress was able to help to open the west to white settlement and economic development.
| http://www.enotes.com/homework-help/describe-development-north-after-civil-war-how-did-377366 |
4.21875 |
Ozone is a gas made of three oxygen atoms. Ozone is bluish in color and harmful to breathe. Most of the Earth's ozone (about 90%) is in the stratosphere. The stratosphere is a layer in the atmosphere from about 10km to about 50km in altitude. Ozone is important because it absorbs specific wavelengths of ultraviolet radiation that are particularly harmful to living organisms. The ozone layer prevents most of this harmful radiation from reaching the ground.
As concern grew over depletion of ozone in the stratosphere scientists examined the role of volcanoes. They noted that the gases emitted by most eruptions never leave the troposphere, the layer in the atmosphere from the surface to about 10km.
Hydrogen chloride released by volcanoes can cause drastic reductions in ozone if concentrations reach high levels (about 15-20 ppb by volume) (Prather and others, 1984). As the El Chichon eruption cloud was spreading, the amount of HCl in the cloud increased by 40% (Mankin and Coffey, 1984). This increase represents about 10% of the global inventory of HCl in the stratosphere. Other large eruptions (Tambora, Krakatau, and Agung) may have released almost ten times more HCl into the stratosphere than the amount of chlorine commonly present in the stratosphere (Pinto and others, 1989). At least two factors reduce the impact of HCl: chlorine appears to be preferentially released during low levels of volcanic activity and thus may be limited to the troposphere, where it can be scrubbed by rain, and hydrogen chloride may also condense in the rising volcanic plume, again to be scrubbed out by rain or ice. The lack of HCl in ice cores with high amounts of H2SO4 (from large eruptions) may indicate that ambient stratospheric conditions are extremely efficient at removing HCl. Thus, most HCl never has the opportunity to react with ozone. No increase in stratospheric chlorine was observed during the 1991 eruption of Mt. Pinatubo.
Volcanoes account for about 3% of chlorine in the stratosphere. Methyl chloride produces about 15% of the chlorine entering the stratosphere. The remaining 82% of stratospheric chlorine comes from man-made sources, mostly in the form of chlorofluorocarbons.
Although volcanic gases do not play a direct role in destroying ozone they may play a harmful indirect role. Scientists have found that particles, or aerosols, produced by major volcanic eruptions accelerate ozone destruction. The particles themselves do not directly destroy ozone but they do provide a surface upon which chemical reactions can take place. This enhances chlorine-driven ozone depletion. Fortunately, the effects from volcanoes are short lived and after two or three years, the volcanic particles settle out of the atmosphere.
Study of ozone amounts before and after the 1991 eruption of Mt. Pinatubo shows that there were significant decreases in lower stratospheric ozone (Grant and others, 1994). The amount of ozone in the 16-28 km region was reduced by some 33% compared to pre-eruption amounts. A similar reduction was measured in the summer of 1992. | http://volcano.oregonstate.edu/ozone-destruction |
4.15625 | Motivation Teacher Resources
Find Motivation educational ideas and activities
How to Become Self Motivated
Students demonstrate their self discipline and motivation by brainstorming their own personal self discipline goals. In this self discipline lesson, students analyze examples of self discipline and self control by completing a worksheet...
5th - 6th Social Studies & History
Reading Poetry in the Middle Grades
While this first appears to be a description of 20 poetry activities, it is actually the introduction, rationale, and explanation of the activities and one sample lesson plan for "Nothing Gold Can Stay" by Robert Frost. After a copy of...
6th - 8th English Language Arts CCSS: Designed
Understanding Behaviors Required to Maintain Employment
Now that your upper grader has a job, you need to teach him how to keep it. Discuss appropriate workplace behavior such as teamwork, initiative, and self-motivation. Also bridge the topic of what is and what isn't ethical behavior and...
9th - 12th 21st Century Skills
Introduction PPS Writing Units of Study
Imagine a year-long writing plan aligned to Common Core standards. Here it is! This resource packet, the first in the series of units, introduces the plan, provides an overview of the research-based approach, and a discussion of the key...
4th English Language Arts
All About Me, My Family and Friends
Pupils use general skills and strategies of the writing process to show their role in their family, school, friendships, the community and the world. They demonstrate their self-motivation and increasing responsibility for their own...
1st - 2nd English Language Arts
Is an Extended School Day the Right Choice for Middle School Students?
Should the school day be extended? Talk about a controversial topic! Before engaging in a fortified conversation about this topic, class members examine a chart that summarizes the views of business and political leaders, teachers,...
7th - 10th English Language Arts
Tangent to a Circle From a Point
Learners see application of construction techniques in a short but sophisticated problem. Combining the properties of inscribed triangles with tangent lines and radii makes a nice bridge between units, a way of using information about...
9th - 10th Math CCSS: Designed
Heroes and Heroines: King David, Julius Caesar, Cleopatra and Napoleon
Students identify and examine four heroes from history and imaginative literature. They discuss the characteristics of a hero and share perceptions of what makes a hero. By comparing and analyzing a few historical and literary figures,...
10th - 12th English Language Arts | http://www.lessonplanet.com/lesson-plans/motivation |
4.1875 | Sickle Cell Anemia Teacher Resources
Find Sickle Cell Anemia educational ideas and activities
Exploring Structure and Function in Biological Systems
High schoolers examine different levels of organization in biological systems for structure and function relationships. In this biological systems lesson, students use Internet resources to look at structure and function in the eye, the...
9th - 12th Science
Dragon Genetics ~ Independent Assortment and Genetic Linkage
Imagine a pair of dragons that produce offspring and determine the percentage of the hatchlings have wings and large antlers. This fantastic activity draws genetics learners in, introduces them to alleles, meiosis, phenotypes, genotypes,...
9th - Higher Ed Science
Hereditary Defects: Down Syndrome and Sickle Cell Anemia
Young scholars solve problems like the following examples: 1. If you have 10,000 women, age 30, who have babies and one in 900 of these births will result in a Down syndrome baby, how many will have this disease? 2. 5,000 babies are...
5th - 8th Science
Raven Chapter 13 Guided Notes: Patterns of Inheritance
In this short space, it would be impossible to describe the breadth of this seven-page genetics worksheet. Geared toward AP or college biology learners, they explore not only the basic vocabulary and concepts, but also the Law of...
11th - Higher Ed Science
From Gene to Protein ~ Transcription and Translation
Translate the process of protein synthesis to your molecular biologists with this instructional activity. It consists of reading, completing a table as a summary, comprehension questions, and a modeling activity for both transcription...
7th - 12th Science
The Making of the Fittest: Natural Selection in Humans
Sickle cell disease only occurs when both parents contribute the trait, and mostly in those of African descent. Where did it come from? How did it evolve? Tony Allison, a molecular biologist, noticed a connection between sickle cell and...
14 mins 8th - Higher Ed Science CCSS: Designed
Protecting Athletes with Genetic Conditions
Should school and professional teams test athletes for sickle cell trait? Will it protect them by providing knowledge or lead to discrimination by not allowing them to participate in sports? After learning about this genetic disorder,...
9th - 12th Science CCSS: Designed
Allele Frequencies and Sickle Cell Anemia Lab
Learners investigate how selective forces like food, predation and diseases affect evolution. In this genetics lesson, students use red and white beans to simulate the effect of malaria on allele frequencies. They analyze data collected...
7th - 9th Science
The Making of the Fittest: Got Lactase? The Co-evolution of Genes and Culture
Got milk? Only two cultures have had it long enough to develop the tolerance of lactose as an adult. Learn how the responsible genes evolved along with the cultures that have been consuming milk. This rich film is supplied with a few...
15 mins 8th - Higher Ed Science CCSS: Designed
From Gene to Protein-Transcription and Translation
Students identify the different steps involved in DNA transcription. In this genetics lesson, students model the translation process. They watch a video on sickle cell anemia and explain how different alleles create this condition.
9th - 10th Science
Sickle Cell Anemia - Hope from Gene Therapy
Can gene therapy treat sickle cell anemia? Genetics geniuses draw a Punnett square for this painful disease and then view a video about current gene therapy research. Then they discuss ethical questions related to this type of treatment....
10 mins 8th - 12th Science | http://www.lessonplanet.com/lesson-plans/sickle-cell-anemia |
4.03125 | A View from Emerging Technology from the arXiv
First Observation of Gravitational Waves Is ‘Imminent’
Astronomers have underestimated the strength of gravitational waves, which means they ought to be able to see them now, say astrophysicists
Gravitational waves are ripples in the fabric of spacetime caused by cataclysmic events such as neutron stars colliding and black holes merging.
The biggest of these events, and the easiest to see, are the collisions between supermassive black holes at the centre of galaxies. So an important question is how often these events occur.
Today, Sean McWilliams and a couple of pals at Princeton University say that astrophysicists have severely underestimated the frequency of these upheavals. Their calculations suggest that galaxy mergers are an order of magnitude more frequent than had been thought. Consequently, collisions between supermassive black holes must be more common too.
That has important implications. There is an intense multimillion-dollar race to be first to spot gravitational waves, but if the researchers are correct, the evidence may already be in the data collected by the first observatories.
The evidence that McWilliams and co rely on comes from various measurements of galaxy size and mass. This data shows that in the last six billion years, galaxies have roughly doubled in mass and quintupled in size.
Astrophysicists know that there has been very little star formation in that time, so the only way for galaxies to grow is by merging, an idea borne out by various computer simulations of the way galaxies must evolve. These simulations suggest that galaxy mergers must be far more common than astronomers had thought.
That raises an interesting prospect—that the supermassive black holes at the centre of these galaxies must be colliding more often. McWilliams and co calculate that black hole mergers must be between 10 and 30 times more common than expected and that the gravitational-wave signals from these events are between three and five times stronger.
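One way to see how those two numbers fit together — a back-of-the-envelope sketch, not a calculation from the paper — is to assume that the mergers add incoherently to a stochastic background, so that the signal amplitude grows roughly as the square root of the event rate.

```python
import math

# Hypothetical back-of-the-envelope check (an assumption, not from the paper):
# if merger signals add incoherently, the background power scales with the
# event rate, so the amplitude scales roughly as sqrt(rate).
for rate_boost in (10, 30):
    amplitude_boost = math.sqrt(rate_boost)
    print(f"{rate_boost}x more mergers -> ~{amplitude_boost:.1f}x stronger signal")

# Prints roughly 3.2x and 5.5x, in line with the quoted three-to-five-fold range.
```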
That has important implications for astronomers’ ability to see these signals. Astrophysicists are intensely interested in these waves since they offer an entirely new way to study the cosmos.
One way to spot them is to measure the way the waves stretch and squeeze space as they pass through Earth, a process that requires precise laser measurements inside machines costing hundreds of millions of dollars.
The most sensitive of these machines is called LIGO, the Laser Interferometer Gravitational Wave Observatory in Washington state, which is currently being upgraded; it is not due to reach its design sensitivity until 2018-19.
Another method is to monitor the amazingly regular radio signals that pulsars produce and listen for the way these signals are distorted by the stretching and squeezing of space as gravitational waves pass through the solar system.
So-called pulsar timing arrays largely rely on existing kit for monitoring pulsars and so are significantly cheaper than bespoke detectors.
Of course, everyone has assumed that the more sensitive bespoke detectors such as LIGO will be the first to see gravitational waves, although not until the end of the decade.
But all that changes if gravitational waves turn out to be stronger than thought. And that’s exactly what McWilliams and co predict. In fact, they say the waves are so strong that current pulsar monitoring kit ought to be capable of spotting them. “We calculate … that the gravitational-wave signal may already be detectable with existing data from pulsar timing arrays,” say the Princeton team.
Pulsar timing arrays are also increasing in sensitivity. If McWilliams and co are correct, this makes the detection of gravitational waves a near certainty within just a few years. Their most pessimistic estimate is that pulsar timing arrays will have nailed this by 2016.
“We expect a detection by 2016 with 95% confidence,” they say.
That’s an extraordinary prediction and a rather refreshing one, given the general reluctance in science to nail your colours to a particular mast.
The first direct observation of gravitational waves will be one of the most important breakthroughs ever made in astronomy; the discoverer a shoo-in for a Nobel Prize.
So the stakes could not be higher in this race, and this time there is a distinct chance of an outside bet taking the honours.
Ref: http://arxiv.org/abs/1211.4590: The Imminent Detection Of Gravitational Waves From Massive Black-Hole Binaries With Pulsar Timing Arrays | https://www.technologyreview.com/s/507811/astrophysicists-on-the-verge-of-spotting-gravitational-waves/ |
4.03125 | Marco Polo 1254-1324
Italian merchant and traveller.
A Venetian merchant, Polo was among the first travellers to the East to provide an account of that region in a Western language. His narrative, The Travels of Marco Polo, met with skepticism and disbelief upon its circulation, as the region had only previously been written about in legends such as those of Alexander the Great, and by William of Rubrouck, a French Franciscan friar who wrote a missionary's account of his trip to Mongolia upon his return to France in 1255. Many of Polo's previously unsubstantiated observations and claims were, however, confirmed by later travellers and his work is now regarded by most scholars as the first accurate description of Asia by a European.
Polo was born in Venice in 1254 while his father Nicolo and his uncle Maffeo were away on a trading voyage during which they first met Kublai Khan, the Emperor of Mongolia; they did not return to Italy until Polo was about fifteen years old. The elder Polos had been instructed by the Khan to solicit the Pope for Christian missionaries to be escorted back to the Emperor's court. The Polos were forced to wait until 1270 for a new pope, Gregory X, to be elected due to the failure of the cardinals to name a successor to Pope Clement IV following his death in 1268. Polo, now about seventeen years old, accompanied his father and uncle to Mongolia following the trio's presentation of the Khan's request to Pope Gregory X. After reaching the Khan's court and being employed in his service for a number of years, the Polos desired to return to Italy. The Khan was unwilling to release the merchants from his service, but complied with their request when they agreed to travel to Persia to escort a princess betrothed to the Khan's grand-nephew. The Polos completed their mission and then began their journey home, arriving in Venice in 1295 after a twenty-four-year absence. Soon after his return, Polo was appointed to command a ship in the war between the city-states of Venice and Genoa. His fleet was defeated and he arrived in Genoa as a political prisoner on October 16, 1298. Polo was released from prison in July of 1299. He lived in Venice until his death at the age of seventy.
While he was in prison, Polo had dictated his account of his travels to a fellow prisoner, Rustichello. Scholars believe that Polo's original manuscript was translated, copied, and widely circulated following his release from prison in 1299. The language of the original manuscript is unknown and a topic of much debate. In 1320, Pipino made a Latin translation of Polo's Travels from a version written in an Italian dialect, implying that this dialect version was Polo's original. Giovanni Battista Ramusio, an Italian geographer whose edition of Polo's work was published in 1559 in a collection of travel accounts known as Navigationi et viaggi, believed that the original manuscript was written in Latin. Others have maintained that Polo's work was written in French or Franco-Italian. Another source of contention among critics regards the role played by Rustichello in the writing of Travels. Some critics argue that Rustichello copied a draft already completed by Polo, or transcribed the work as Polo dictated it. Others believe that Rustichello served as a collaborator and editor, rewording Polo's phrasing and adding commentary of his own. The manuscript regarded by many critics as the most complete is a French version known as fr. 1116, published by the French Geographic Society in 1824. Some critics have contended that fr. 1116 is a true transcript of Polo's dictation to Rustichello, but other scholars such as N. M. Penzer have argued that it does not represent a direct copy of Polo's work, asserting that another manuscript (referred to by Polian scholars as Z) may antedate fr. 1116. Other groups of Polian manuscripts studied for their authenticity and their relation to the original manuscript include the Grégoire version, which critics have suggested is perhaps an elaborated version of fr. 1116; the Tuscan Recension, an early fourteenth-century Tuscan translation of a Franco-Italian version of the original manuscript; and the Venetian Recension, a group of over eighty manuscripts which have been translated into the Venetian dialect. Travels was first translated into English by John Frampton in 1579. In the nineteenth century, scholars such as William Marsden, Henry Yule, and Luigi Benedetto began to publish revisions of the work that utilized information from several manuscripts to produce a more comprehensive edition of Travels. Since the original manuscript of Travels has never been recovered, the search for the version most directly descended from it continues.
Polo's The Travels of Marco Polo, his first and only known work, provides readers with a detailed description of late thirteenth-century Asia. The work includes an account of Nicolo's and Maffeo's first journey to the residence of Kublai Khan; geographical descriptions of the countries between the Black Sea, the China Sea, and the Indian Ocean; and historical narratives about the Mongolian Empire's rise and expansion. Polo's Travels also relates the author's personal adventures and his association with Kublai Khan. Polo's tone throughout the narration is that of a commercial traveller reporting what he has seen and heard. He employs the same straightforward style in discussing his own experiences as he does when he relates hearsay, which he identifies as such. Polo focused his observations on aspects such as trade, political and military structures, religious customs relating to marriage and burial of the dead, and the architecture and layout of cities. His matter-of-fact tone in the narrative emphasizes the presentation of facts over the discussion of theories or ideas.
Polo's first critics, the friends and relatives to whom he verbally related his journey, refused to believe what they considered to be outrageous exaggerations or pure fiction. Yet Polo's story was appealing for its entertainment value and was rapidly copied and distributed following its initial transcription. His account did not gain credibility until after his death, when further exploration proved many of his claims. Some modern critics have faulted Polo for omitting certain subjects from the narrative: for example, Polo never mentioned tea, the practice of binding women's feet, or the Great Wall, all of which were unheard of in Europe. Polo's defenders have countered that since the merchant had lived in Mongolia for twenty-four years, subjects that would seem strange or exotic to Europeans had become commonplace in Polo's life. Others have contended that such omissions could also have been made consciously or accidentally by translators of the work. Travels is often criticized on stylistic grounds as well, for instance for shifting back and forth between first and third person narration, but scholars attribute many such faults to the numerous times the work has been translated and copied. Although many critics assess Travels as simply a merchant's pragmatic account of his stay in the East, some, like Mary Campbell, maintain that the work offers the authority of first-hand experience and argue that its value extends beyond providing enjoyment through vicarious experience in that it transforms the myth of the East into reality.
The Travels of Marco Polo (translated by John Frampton) 1579
The Travels of Marco Polo, the Venetian (translated and edited by William Marsden) 1818
The Book of Ser Marco Polo (translated and edited by Henry Yule) 1871
The Travels of Marco Polo (translated by Aldo Ricci from the Italian edition by L. F. Benedetto) 1931
Marco Polo: The Description of the World (translated and edited by A. C. Moule and P. Pelliot) 1938
The Adventures of Marco Polo (translated by Richard J. Walsh) 1948
The Travels of Marco Polo (translated by Robert Latham) 1958
The Travels of Marco Polo (translated by Teresa Waugh from the Italian edition by Maria Bellonci) 1984
SOURCE: "The Epistle Dedicatorie," in The Travels of Marco Polo, edited by N. M. Penzer, translated by John Frampton, The Argonaut Press, 1929, pp. 1-2.
[In the following dedication to his 1579 translation of The Travels of Marco Polo, Frampton states his reasons for committing the manuscript to print in English.]
To the right worshipfull Mr. Edward Dyar Esquire, Iohn Frampton wisheth prosperous health and felicitie.
Having lying by mee in my chamber (righte Worshipful) a translation of the great voiage & lõg trauels of Paulus Venetus the Venetian, manye Merchauntes, Pilots, and Marriners, and others of dyuers degrees,...
SOURCE: "Marsden's Marco Polo," in The Quarterly Review, Vol. XXI, No. XLI, January-April, 1819, pp. 177-96.
[In the following review, the anonymous critic praises Marsden's edition of Polo's book, provides an overview of the author's life, and comments on the accuracy of the narrative.]
'It might have been expected,' Mr. Marsden says, 'that in ages past, a less tardy progress would have been made in doing justice to the intrinsic merits of a work (whatever were its defects as a composition) that first conveyed to Europeans a distinct idea of the empire of China, and, by shewing its situation together with that of Japan (before entirely unknown) in respect to the great...
SOURCE: An introduction to The Travels of Marco Polo, the Venetian, edited by Thomas Wright, translated by William Marsden, George Bell & Sons, 1890, pp. ix-xx-viii.
[In Wright's 1854 introduction to his revision of William Marsden's translation of The Travels of Marco Polo, Wright offers an overview of Polo's travels and discusses the history of Polo's manuscript.]
So much has been written on the subject of the celebrated Venetian traveller of the middle ages, Marco Polo, and the authenticity and credibility of his relation have been so well established, that it is now quite unnecessary to enter into this part of the question; but the reader of the...
SOURCE: "Yule's Edition of Marco Polo," in The Edinburgh Review, Vol. CXXXV, No. CCLXXV, January, 1872, pp. 1-36.
[In the following excerpt, Rawlinson praises Yule's translation of Polo's book, noting that he blends several earlier texts in his edition in order to best present "what the author said, or would have desired to say."]
The publication of Colonel Yule's Marco Polo is an epoch in geographical literature. Never before, perhaps, did a book of travels appear under such exceptionally favourable auspices; an editor of a fine taste and ripe experience, and possessed with a passion for curious medieval research, having found a publisher willing to gratify that...
SOURCE: "The Book of Marco Polo," in The Nation, New York, Vol. XXI, No. 530, August 26, 1875, pp. 135-37, 152-53.
[In the essay that follows, Marsh discusses Yule's edition of Polo's book and comments on the traveler's "reputation for veracity" as well as his collaboration with his fellow prisoner Rustichello, here called Rusticiano.]
When Marsden published his learned edition of the Travels of Marco Polo in 1818, it was supposed that he had so nearly exhausted all the possible sources of illustration of his author that future editors would find little or no matter for new commentaries. And when in 1865 Pauthier gave to the world a substantially authentic text for the...
SOURCE: "Marco Polo's Explorations and Their Influence upon Columbus," in The New England Magazine, Vol. VI, No. 6, August 1892, pp. 803-15.
[In the following excerpt, Margesson briefly comments on the influence Polo's narrative had on Christopher Columbus.]
While Columbus never directly mentions Polo, his hopes and fancies and the deeds of his late years are wholly incomprehensible if he had no acquaintance with the writings of the great Venetian. In a Latin version of Marco Polo, printed at Antwerp about 1485, preserved in the Columbina at Seville, there are marginal notes in the handwriting of Columbus, and he may have become familiar with the work while living in...
SOURCE: An introduction to Dawn of Modern Geography: A History of Exploration and Geographical Science, Vol. III, Oxford at the Clarendon Press, 1906, pp. 1-14.
[In the following excerpt, Beazley provides an overview of the surge in geographic exploration that occurred from the mid-thirteenth to the early years of the fifteenth century—providing context for Polo's explorations.]
Our conquest of the world we live in has a long history; in that history there are many important epochs, eras in which a vital advance was made, wherein the whole course of events was modified; but among such epochs there are few of greater importance, of deeper suggestiveness, and of more...
SOURCE: An introduction to The Travels of Marco Polo, edited by N. M. Penzer, translated by John Frampton, The Argonaut Press, 1929, pp. xi-lx.
[In the following excerpt, Penzer provides a detailed analysis of the history of the Polian manuscripts.]
The existence of an Elizabethan translation of the Travels of Marco Polo will probably come as a surprise to the majority of readers. This is not to be wondered at when we consider that only three copies of the work in question are known to exist, and that it has never been reprinted.
The very rarity of the book would be of itself sufficient excuse for reprinting it, but in the present case there are other...
SOURCE: "Marco Polo and His Book," in Proceedings of the British Academy, Vol. XX, 1934, pp. 181-201.
[In the following excerpt from a lecture delivered before the British Academy, Ross gives a brief account of Polo's journey and his narrative, and introduces several new theories regarding Polo's manuscript.]
The outstanding geographical event of the thirteenth century was the discovery of the overland route to the Far East. The silk of China had long been known to the West, but the route by which it travelled was unknown, for European merchants had not ventured beyond certain Asiatic ports, whither the silk, like other Oriental wares, was conveyed by caravan....
SOURCE: "The 'Lost' Toledo Manuscript of Marco Polo," in Speculum, Vol. XII, No. 4, October, 1937, pp. 458-63.
[In the following essay, Herriott discusses the superiority of a fifteenth-century Polian manuscript believed to have been lost.]
In 1559 the first attempt at a critical edition of Marco Polo appeared in Venice in a volume entitled Secondo volume delle Navigation et Viaggi nel quale si contengono l'Historia delle cose de Tartari, et diuersi fatti de loro Imperatori, descritta da M. Marco Polo Gentilhuomo Venetiano, et da Haiton Armeno. The first volume of this collection of travels had been published in 1550, and the third volume in 1556. The editor of the...
SOURCE: "The Immortal Marco," in The New Statesman & Nation, Vol. XVI, No. 400, October 22, 1938, pp. 606-07.
[In the following essay, Power discusses Polo's popular and literary reputation, arguing that his work is "a masterpiece of reporting."]
I once knew a master at a famous public school (which shall be nameless) who was under the impression that Marco Polo was a kind of game. I did not question his qualifications for imparting culture to the young, for he had in his day been a noted blue and, as the saying goes, first things come first. But I have been reminded of him by the almost simultaneous appearance of the first two volumes of a magnificent edition of...
SOURCE: "The Literary Precursors," in Marco Polo's Precursors, The Johns Hopkins Press, 1943, pp. 1-15.
[In the following essay, Olschki explores the influence of the poetic history of Alexander the Great on Polo's book.]
Until about the middle of the thirteenth century, when the first missionaries set out "ad Tartaros," there prevailed in the Western world a profound and persistent ignorance of Central and Eastern Asia, an ignorance partially mitigated by a few vague and generic notions in which remote reminiscences of distant places and peoples were mingled with old poetic and mythical fables. The Tartar invasion of Eastern and Central Europe in 1241 did not alter or...
SOURCE: An introduction to Masterworks of Travel and Exploration: Digests of 13 Great Classics, edited by Richard D. Mallery, Doubleday & Company, Inc., 1948, pp. 3-12.
[In the following excerpt, Mallery discusses the appeal of Polo's The Book of Marco Polo in the context of the travel narrative genre.]
Travel narratives, through the ages, reflect the character and predilections of the era in which they are composed. Very often they help to determine the special character of the age. They appeal, of course, primarily to that sense of wonder which is found, to a greater or less extent, in all periods. What we know of the fascination exerted upon young and old...
SOURCE: An introduction to The Travels of Marco Polo, translated by Ronald Latham, Penguin Books, 1958, pp. vii-xxix.
[In the following excerpt, Latham examines Rusticello's contribution to Polo's book and asserts that, while Polo's observations in other fields tend to be conservative, his remarks on the "human geography" of the places he visited are outstanding.]
The book most familiar to English readers as The Travels of Marco Polo was called in the prologue that introduced it to the reading public at the end of the thirteenth century a Description of the World (Divisament dou Monde). It was in fact a description of a surprisingly large part of the world—from the...
SOURCE: "Politics and Religion in Marco Polo's Asia," in Marco Polo's Asia: An Introduction to His "Description of the World" Called "Il milione, " University of California Press, 1960, pp. 178-210.
[In the following essay, Olschki analyzes the accuracy of Polo's observations regarding Asian religion and politics in the thirteenth century.]
Marco Polo's intention of conferring upon his journey the character of a religious mission is immediately evident in the first part of his book. Ecclesiastical and pious motives abound, from the moment when the three Venetians procured some oil from the lamp of the Holy Sepulcher in Jerusalem and departed with the Pope's blessing...
SOURCE: "Epilogue," in Marco Polo, Venetian Adventurer, University of Oklahoma Press, 1967, pp. 233-64.
[In the following excerpt, Hart examines the impact of Polo's book on the sciences of geography and cartography.]
Messer Marco Polo's reputation for veracity as an author suffered greatly during his lifetime, for his contemporaries (with very few exceptions) could not and did not accept his book seriously. Their ignorance and bigotry, their belief in and dependence on the ecclesiastical pseudogeography of the day, their preconceived ideas of the unvisited parts of the earth, as well as the inherited legends and utter nonsense to which the medieval mind clung with a...
SOURCE: "Merchant and Missionary Travels," in The Witness and the Other World: Exotic European Travel Writing, 400-1600, Cornell, 1988, pp. 87-121.
[In the following excerpt, Campbell discusses methods of description and narration employed by Polo, suggesting that "the being'' that Polo has given to the East in his book "is the body of the West's desire."]
In the works of Marco Polo and the Franciscan friar William of Rubruck, the experiencing narrator born and bred in the pilgrimage accounts meets the fabulous and relatively unprescribed East of Wonders [of the East] and the Alexander romances. One might expect this encounter between the eyewitness and...
Baker, J. N. L. "The Middle Ages." In A History of Geographical Discovery and Exploration, pp. 34-57. New York: Cooper Square Publishers, 1967.
Discusses the advances made by Polo, his father, and his uncle in the field of geographical exploration.
Brendon, J. A. "Marco Polo." In Great Navigators … Discoverers, pp. 29-38. Freeport, N.Y.: Books for Libraries Press, 1930.
Provides an overview of Polo's life and travels.
Clark, William R. "Explorers of Old." In Explorers of the World, pp. 10-39. London: Aldus Books, 1964.
| http://www.enotes.com/topics/marco-polo/critical-essays |
4 | Pronunciation of English ⟨th⟩
In English, the digraph ⟨th⟩ represents in most cases one of two different phonemes: the voiced dental fricative /ð/ (as in this) and the voiceless dental fricative /θ/ (thing). More rarely, it can stand for /t/ (Thailand, Thames) or, in some dialects, even the cluster /tθ/ (eighth). It can also be a sequence rather than a digraph, as in the /t.h/ of lighthouse.
In standard English, the phonetic realization of the dental fricative phonemes shows less variation than for many other English consonants. Both are pronounced either interdentally, with the blade of the tongue resting against the lower part of the back of the upper teeth and the tip protruding slightly or alternatively with the tip of the tongue against the back of the upper teeth. The interdental position might also be described as "apico-" or "lamino-dental". These two positions may be free variants, but for some speakers they are complementary allophones, the position behind the teeth being used when the dental fricative stands in proximity to an alveolar fricative, as in clothes (/ðz/) or myths (/θs/). Lip configuration may vary depending on phonetic context. The vocal folds are abducted. The velopharyngeal port is closed. Air forced between tongue surface and cutting edge of the upper teeth (interdental) or inside surface of the teeth (dental) creates audible frictional turbulence.
The difference between /θ/ and /ð/ is normally described as a voiceless-voiced contrast, as this is the aspect native speakers are most aware of. However, the two phonemes are also distinguished by other phonetic markers. There is a difference of energy (see: Fortis and lenis), the fortis /θ/ being pronounced with more muscular tension than the lenis /ð/. Also, /θ/ is more strongly aspirated than /ð/, as can be demonstrated by holding a hand a few centimeters in front of the mouth and noticing the differing force of the puff of air created by the articulatory process.
As with many English consonants, a process of assimilation can result in the substitution of other speech sounds in certain phonetic environments. Most surprising to native speakers, who do this subconsciously, is the use of [n] and [l] as realisations of /ð/ in the following phrases:
- join the army: /ˈdʒɔɪn ðiː ˈɑːmi/ → [ˈdʒɔɪn niː ˈɑːmi]
- fail the test: /feɪl ðə ˈtɛst/ → [feɪl lə ˈtɛst]
/θ/ and /ð/ can also be lost through elision. In rapid speech, sixths may be pronounced like six. Them may be contracted to 'em, and in this case the contraction is often indicated in writing.
- In some areas such as London and northern New Zealand, and in some dialects including African American Vernacular English, many people realise the phonemes /θ/ and /ð/ as [f] and [v], respectively. Although traditionally stigmatised as typical of a Cockney accent, this pronunciation is fairly widespread, especially when immediately surrounded by other fricatives for ease of pronunciation, and has recently been an increasingly noticeable feature of the Estuary English accent of South East England. It has in at least one case been transferred into standard English as a neologism: a bovver boy is a thug, a "boy" who likes "bother" (fights). Joe Brown and his Bruvvers was a Pop group of the 1960s. The song "Fings ain't wot they used t'be" was the title song of a 1959 Cockney comedy. Similarly, a New Zealander from the northernmost parts of the country might state that he or she is from "Norfland".
- Note that at least in Cockney, word-initial /ð/ (as opposed to its voiceless counterpart /θ/) can never be labiodental. Instead, it is realized as any of [ð, ð̞, d, l, ʔ], or is dropped altogether.
- Many speakers of African American Vernacular English, Caribbean English, Liberian English, Nigerian English, Philadelphia English, and Philippine English (along with other Asian English varieties) pronounce the fricatives /θ, ð/ as alveolar stops [t, d]. Similarly but still distinctly, many speakers of New York City English, Chicago English, Boston English, Indian English, Newfoundland English, and Hiberno-English use the dental stops [t̪, d̪] (typically distinct from alveolar [t, d]) instead of, or in free variation with, [θ, ð].
- In Cockney, the th-stopping may occur in case of word-initial /ð/ (but not its voiceless counterpart /θ/).
- In rarer or older varieties of African American Vernacular English, /θ/ may be pronounced [s] after a vowel and before another consonant, as in bathroom [ˈbæsɹum].
- Th-alveolarization is a process that occurs in some African varieties of English where the dental fricatives /θ, ð/ merge with the alveolar fricatives /s, z/. It is an example of assibilation.
- It is often parodied as ubiquitous to French- and German-speaking learners of English, but is widespread among many foreign learners of English, because the dental fricative "th" sounds are not very common among world languages.
- In many varieties of Scottish English, /θ/ becomes [h] word initially and intervocalically. It is a stage in the process of lenition.
- Th-debuccalization occurs mainly in Glasgow and across the Central Belt. A common example is [hɪŋk] for think. This feature is becoming more common in these places over time, but is still variable. In word final position, [θ] is used, as in standard English.
- The existence of local [h] for /θ/ in Glasgow complicates the process of th-fronting there, a process which gives [f] for historical /θ/. Unlike in the other dialects with th-fronting, where [f] solely varies with [θ], in Glasgow, the introduction of th-fronting there creates a three-way variant system of [h], [f] and [θ].
- Use of [θ] marks the local educated norms (the regional standard), while use of [h] and [f] instead mark the local non-standard norms. [h] is well known in Glasgow as a vernacular variant of /θ/ when it occurs word-initially and intervocalically, while [f] has only recently risen above the level of social consciousness.
- Given that th-fronting is a relatively recent innovation in Glasgow, it was expected that linguists might find evidence for lexical diffusion for [f] and the results found from Glasgow speakers confirm this. The existing and particular lexical distribution of th-debuccalization imposes special constraints on the progress of th-fronting in Glasgow.
- In accents with th-debuccalization, the cluster /θr/ becomes [hr] giving these dialects a consonant cluster that doesn't occur in other dialects. The replacement of /θr/ with [hr] leads to pronunciations like:
- three - [hri]
- throw - [hro]
- through, threw - [hrʉ]
- thrash - [hraʃ]
- thresh - [hrɛʃ]
- thrown, throne - [hron]
- thread - [hrɛd]
- threat - [hrɛt]
Children generally learn the less marked phonemes of their native language before the more marked ones. In the case of English-speaking children, /θ/ and /ð/ are often among the last phonemes to be learnt, frequently not being mastered before the age of five. Prior to this age, many children substitute the sounds [f] and [v] respectively. For small children, fought and thought are therefore homophones. As British and American children begin school at age four and five respectively, this means that many are learning to read and write before they have sorted out these sounds, and the infantile pronunciation is frequently reflected in their spelling errors: ve fing for the thing.
Children with a lisp, however, have trouble distinguishing /θ/ and /ð/ from /s/ and /z/ respectively in speech, using a single /θ/ or /ð/ pronunciation for both, and may never master the correct sounds without speech therapy. The lisp is a common speech impediment in English.
Foreign learners may have parallel problems. In English popular culture the substitution of /z/ for /ð/ is a common way of parodying a French accent, but in fact learners from very many cultural backgrounds have difficulties with English dental fricatives, usually caused by interference with either sibilants or stops. Words with a dental fricative adjacent to an alveolar sibilant, such as clothes, truths, fifths, sixths, anesthetic, etc., are commonly very difficult for foreign learners to pronounce.
A popular advertisement for Berlitz language school plays on the difficulties Germans may have with dental fricatives.
Phonology and distribution
In modern English, /θ/ and /ð/ bear a phonemic relationship to each other, as is demonstrated by the presence of a small number of minimal pairs: thigh:thy, ether:either, teeth:teethe. Thus they are distinct phonemes (units of sound, differences in which can affect meaning), as opposed to allophones (different pronunciations of a phoneme having no effect on meaning). They are distinguished from the neighbouring labiodental fricatives, sibilants and alveolar stops by such minimal pairs as thought:fought/sought/taught and then:Venn/Zen/den.
The vast majority of words in English with ⟨th⟩ have /θ/, and almost all newly created words do. However, the constant recurrence of the function words, particularly the, means that /ð/ is nevertheless more frequent in actual use.
The distribution pattern may be summed up in the following rule of thumb which is valid in most cases: in initial position we use /θ/ except in certain function words; in medial position we use /ð/ except for certain foreign loan words; and in final position we use /θ/ except in certain verbs. A more detailed explanation follows.
- Almost all words beginning with a dental fricative have /θ/.
- A small number of common function words (the Middle English anomalies mentioned below) begin with /ð/. The words in this group are:
- 1 definite article: the
- 4 demonstratives: this, that, these, those
- 2 personal pronouns each with multiple forms: thou, thee, thy, thine, thyself; they, them, their, theirs, themselves, themself
- 7 adverbs and conjunctions: there, then, than, thus, though, thence, thither (though in America thence and thither may be pronounced with initial /θ/)
- Various compound adverbs based on the above words: therefore, thereupon, thereby, thereafter, thenceforth, etc.
- A few words have initial ⟨th⟩ for /t/ (e.g. Thomas): see below.
- Most native words with medial ⟨th⟩ have /ð/.
- Between vowels: heathen, fathom; and the frequent combination -ther-: bother, brother, dither, either, father, Heather, lather, mother, other, rather, slither, southern, together, weather, whether, wither, smithereens; Caruthers, Gaithersburg, Netherlands, Witherspoon, and similar compound names where the first component ends in '-ther' or '-thers'. But Rutherford has either /ð/ or /θ/.
- Preceded by /r/: Worthington, farthing, farther, further, northern.
- Followed by /r/: brethren.
- A few native words have medial /θ/:
- The adjective suffix -y normally leaves terminal /θ/ unchanged: earthy, healthy, pithy, stealthy, wealthy; but worthy and swarthy have /ð/.
- Compound words in which the first element ends or the second element begins with ⟨th⟩ frequently have /θ/, as these elements would in isolation: bathroom, Southampton; anything, everything, nothing, something.
- The only other native words with medial /θ/ would seem to be brothel and Ethel.
- Most loan words with medial ⟨th⟩ have /θ/.
- From Greek: Agatha, anthem, atheist, Athens, athlete, cathedral, Catherine, Cathy, enthusiasm, ether, ethics, ethnic, lethal, lithium, mathematics, method, methyl, mythical, panther, pathetic, sympathy
- From Latin: author, authority (though in Latin these had /t/; see below). Also names borrowed from or via Latin: Bertha, Gothic, Hathaway, Othello, Parthian
- From Celtic languages: Arthur (Welsh has /θ/ medially: /ærθɨr/); Abernathy, Abernethy
- From Hebrew: Ethan, Jonathan, Bethlehem, Bethany, leviathan, Bethel
- From German: Luther, as an anglicized spelling pronunciation (see below).
- Loanwords with medial /ð/:
- Greek words with the combination -thm-: algorithm, logarithm, rhythm. The word asthma may be pronounced /ˈæzðmə/ or /ˈæsθmə/, though here the ⟨th⟩ is nowadays usually silent.
- A few words have medial ⟨th⟩ for /t/ or /th/ (e.g. lighthouse): see below.
- Nouns and adjectives
- Nouns and adjectives ending in a dental fricative usually have /θ/: bath, breath, cloth, froth, health, hearth, loath, sheath, sooth, tooth/teeth, width, wreath.
- Exceptions are usually marked in the spelling with -⟨the⟩: tithe, lathe, lithe with /ð/.
- blithe can have either /ð/ or /θ/. booth has /ð/ in England but /θ/ in America.
- Verbs ending in a dental fricative usually have /ð/, and are frequently spelled -⟨the⟩: bathe, breathe, clothe, loathe, scathe, scythe, seethe, sheathe, soothe, teethe, tithe, wreathe, writhe. Spelled without ⟨e⟩: mouth (verb) nevertheless has /ð/.
- froth has /θ/ whether as a noun or as a verb.
- The verb endings -s, -ing, -ed do not change the pronunciation of a ⟨th⟩ in the final position in the stem: bathe has /ð/, therefore so do bathed, bathing, bathes; frothing has /θ/. Likewise clothing used as a noun, scathing as an adjective etc.
- The archaic word ending "-eth" has /θ/.
- with has either /θ/ or /ð/ (see below), as do its compounds: within, without, outwith, withdraw, withhold, withstand, wherewithal, etc.
- Plural ⟨s⟩ after ⟨th⟩ may be realised as either /ðz/ or /θs/:
- Some plural nouns ending in ⟨ths⟩, with a preceding vowel, have /ðz/, although the singulars always have /θ/; however a variant in /θs/ will be found for many of these: baths, mouths, oaths, paths, sheaths, truths, wreaths, youths exist in both varieties; clothes always has /ðz/ (if not pronounced /kloʊz/, the traditional pronunciation).
- Others have only /θs/: azimuths, breaths, cloths, deaths, faiths, Goths, growths, mammoths, moths, myths, smiths, sloths, zeniths, etc. This includes all words in 'th' preceded by a consonant (earths, hearths, lengths, months, widths, etc.) and all numeric words, whether preceded by vowel or consonant (fourths, fifths, sixths, sevenths, eighths /eɪtθs/, twelfths, fifteenths, twentieths, hundredths /hʌndrədθs/, thousandths).
- Booth has /ð/ in the singular and hence /ðz/ in the plural for most speakers in England. In American English it has /θ/ in the singular and /θs/ or /ðz/ in the plural. This pronunciation also prevails in Scotland.
In pairs of related words, an alternation between /θ/ and /ð/ is possible, which may be thought of as a kind of consonant mutation. Typically [θ] appears in the singular of a noun, [ð] in the plural and in the related verb: cloth /θ/, clothes /ð/, to clothe /ð/. This is directly comparable to the /s/-/z/ or /f/-/v/ alternation in house, houses or wolf, wolves. It goes back to the allophonic variation in Old English (see below), where it was possible for ⟨þ⟩ to be in final position and thus voiceless in the basic form of a word, but in medial position and voiced in a related form. The loss of inflections then brought the voiced medial consonant to the end of the word. Often a remnant of the old inflection can be seen in the spelling in the form of a silent ⟨e⟩, which may be thought of synchronically as a marker of the voicing.
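The rule of thumb stated at the start of this section — /θ/ initially except in function words, /ð/ medially except in loans, /θ/ finally except in certain verbs — can be expressed as a rough heuristic. The sketch below is illustrative only: the word sets are abbreviated, and it ignores many of the exceptions listed above.

```python
# Rough heuristic for the distribution rule of thumb above (illustrative only;
# the word sets are abbreviated and many exceptions noted in the text are ignored).
FUNCTION_WORDS = {"the", "this", "that", "these", "those", "thou", "thee", "thy",
                  "they", "them", "their", "there", "then", "than", "thus", "though"}
MEDIAL_THETA_LOANS = {"author", "method", "ether", "athlete", "cathedral", "mythical"}

def guess_th(word: str) -> str:
    """Guess whether the <th> in a word is /θ/ or /ð/."""
    w = word.lower()
    if "th" not in w:
        raise ValueError("word contains no <th>")
    if w.startswith("th"):                    # initial position
        return "/ð/" if w in FUNCTION_WORDS else "/θ/"
    if w.endswith("the"):                     # final -the marks voicing (bathe, clothe)
        return "/ð/"
    if w.endswith("th"):                      # other final position
        return "/θ/"
    return "/θ/" if w in MEDIAL_THETA_LOANS else "/ð/"   # medial position

for w in ("thing", "the", "mother", "method", "tooth", "breathe"):
    print(w, guess_th(w))
```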
Regional differences in distribution
The above discussion follows Daniel Jones' English Pronouncing Dictionary, an authority on standard British English, and Webster's New World College Dictionary, an authority on American English. Usage appears much the same between the two. Regional variation within standard English includes the following:
- The final consonant in with is pronounced /θ/ (its original pronunciation) in northern Britain, but /ð/ in the south, though some speakers of Southern British English use /θ/ before a voiceless consonant and /ð/ before a voiced one. A 1993 postal poll of American English speakers showed that 84% use /θ/, while 16% have /ð/ (Shitara 1993). (The variant with /ð/ is presumably a sandhi development.)
- In Scottish English, /θ/ is found in many words which have /ð/ further south. The phenomenon of nouns terminating in /θ/ taking plurals in /ðz/ does not occur in the north. Thus the following have /θs/: baths, mouths (noun), truths. Scottish English does have the termination /ðz/ in verb forms, however, such as bathes, mouths (verb), loathes, and also in the noun clothes, which is a special case, as it has to be clearly distinguished from cloths. Scottish English also has /θ/ in with, booth, thence etc., and the Scottish pronunciation of thither, almost uniquely, has both /θ/ and /ð/ in the same word. Where there is an American-British difference, the North of Britain generally agrees with America on this phoneme pair.
History of the English phonemes
Proto-Indo-European (PIE) had no dental fricatives, but these evolved in the earliest stages of the Germanic languages. In Proto-Germanic, /ð/ and /θ/ were separate phonemes, usually represented in Germanic studies by the symbols *đ and *þ.
- *đ (/ð/) was derived by Grimm's law from PIE *dʰ or by Verner's law (i.e. when immediately following an unstressed syllable) from PIE *t.
- *þ (/θ/) was derived by Grimm's law from PIE *t.
In West Germanic, the Proto-Germanic *đ shifted further to *d, leaving only one dental fricative phoneme. However, a new [ð] appeared as an allophone of /θ/ in medial positions by assimilation of the voicing of the surrounding vowels. [θ] remained in initial and presumably in final positions (though this is uncertain as later terminal devoicing would in any case have eliminated the evidence of final [ð]). This West Germanic phoneme, complete with its distribution of allophones, survived into Old English. In German and Dutch, it shifted to a /d/, the allophonic distinction simply being lost. In German, West Germanic *d shifted to /t/ in what may be thought of as a chain shift, but in Dutch, *þ, *đ and *d merged into a single /d/.
The whole complex of Germanic dentals, and the place of the fricatives within it, can be summed up in this table:
| PIE | Proto-Germanic | West Germanic | Old English | German | Dutch | Notes |
|---|---|---|---|---|---|---|
| *t | *þ | *[þ] | [θ] | /d/ | /d/ | Original *t in initial position, or in final position after a stressed vowel |
| *t | *þ | *[đ] | [ð] | /d/ | /d/ | Original *t in medial position after a stressed vowel |
| *t | *đ | *d | /d/ | /t/ | /d/ | Original *t after an unstressed vowel |
| *dʰ | *đ | *d | /d/ | /t/ | /d/ | Original *dʰ in all positions |
| *d | *t | *t | /t/ | /s/ or /ts/ | /t/ | Original *d in all positions |
Thus English inherited a phoneme /θ/ in positions where other West Germanic languages have /d/ and most other Indo-European languages have /t/: English three, German drei, Latin tres.
In Old English, the phoneme /θ/, like all fricative phonemes in the language, had two allophones, one voiced and one voiceless, which were distributed regularly according to phonetic environment.
- [ð] (like [v] and [z]) was used between two voiced sounds (either vowels or voiced consonants).
- [θ] (like [f] and [s]) was spoken in initial and final position, and also medially if adjacent to another unvoiced consonant.
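A minimal sketch of this voicing rule, assuming the voicing of the neighbouring segments is already known (a word edge counts as a voiceless environment):

```python
# Old English allophony sketch: the dental fricative surfaces voiced only when
# flanked by voiced sounds; word-initially, word-finally, or next to a voiceless
# consonant it stays voiceless. None marks a word edge.
def oe_dental_fricative(voiced_before, voiced_after) -> str:
    if voiced_before and voiced_after:
        return "[ð]"
    return "[θ]"

print(oe_dental_fricative(None, True))    # word-initial          -> [θ]
print(oe_dental_fricative(True, True))    # between vowels        -> [ð]
print(oe_dental_fricative(True, False))   # next to a voiceless C -> [θ]
```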
Development up to Modern English
The most important development on the way to modern English was the investing of the existing distinction between [ð] and [θ] with phonemic value. Minimal pairs, and hence the phonological independence of the two phones, developed as a result of three main processes.
- In early Middle English times, a group of very common function words beginning with /θ/ (the, they, there, etc.) came to be pronounced with /ð/ instead of /θ/. Possibly this was a sandhi development; as these words are frequently found in unstressed positions they can sometimes appear to run on from the preceding word, which may have resulted in the dental fricative being treated as though it were word-internal.
- English has borrowed many words from Greek, including a vast number of scientific terms. Where the original Greek had the letter ⟨θ⟩ (theta), English retained the Late Greek pronunciation /θ/, regardless of phonetic environment (thermometer, methyl, etc.). In a few words of Indian origin, such as thug, ⟨th⟩ represents Sanskrit थ (/tʰ/) or ठ (/ʈʰ/), usually pronounced /θ/ (but occasionally /t/) in English.
- English has lost its original verb inflections. When the stem of a verb ends with a dental fricative, this was usually followed by a vowel in Old English, and was therefore voiced. It is still voiced in modern English, even though the verb inflection has disappeared leaving the /ð/ at the end of the word. Examples are to bathe, to mouth, to breathe.
Other changes which affected these phonemes included a shift /d/ → /ð/ when followed by unstressed suffix -er. Thus Old English fæder became modern English father; likewise mother, gather, hither, together, weather (from mōdor, gaderian, hider, tōgædere, weder). In a reverse process, Old English byrþen and morþor or myþra become burden and murder (compare the obsolete words burthen and murther).
Dialectally, the alternation between /d/ and /ð/ sometimes extends to other words, as bladder, ladder, solder with /ð/. On the other hand, some dialects retain original d, and extend it to other words, as brother, further, rather. The Welsh name Llewelyn appears in older English texts as Thlewelyn (Rolls of Parliament (Rotuli parliamentorum) I. 463/1, King Edward I or II), and Fluellen (Shakespeare, Henry V). Th also occurs dialectally for wh, as in thirl, thortleberry, thorl, for whirl, whortleberry, whorl. Conversely, Scots has whaing, whang, white, whittle, for thwaing, thwang, thwite, thwittle.
The old verb inflection -eth (Old English -eþ) was replaced by -s (he singeth → he sings), not a sound shift but a completely new inflection, the origin of which is still being debated. Possibilities include a "de-lisping" (since s is easier to pronounce there than th), or displacement by a nonstandard English dialect.
History of the digraph
⟨th⟩ for /θ/ and /ð/
Though English speakers take it for granted, the digraph ⟨th⟩ is in fact not an obvious combination for a dental fricative. The origins of this have to do with developments in Greek.
Proto-Indo-European had an aspirated /dʱ/ which came into Greek as /tʰ/, spelled with the letter theta. In the Greek of Homer and Plato this was still pronounced /tʰ/, and therefore when Greek words were borrowed into Latin theta was transcribed with ⟨th⟩. Since /tʰ/ sounds like /t/ with a following puff of air, ⟨th⟩ was the logical spelling in the Latin alphabet.
By the time of New Testament Greek (koiné), however, the aspirated stop had shifted to a fricative: /tʰ/→/θ/. Thus theta came to have the sound which it still has in Modern Greek, and which it represents in the IPA. From a Latin perspective, the established digraph ⟨th⟩ now represented the voiceless fricative /θ/, and was used thus for English by French-speaking scribes after the Norman Conquest, since they were unfamiliar with the Germanic graphemes ð (eth) and þ (thorn). Likewise, the spelling ⟨th⟩ was used for /θ/ in Old High German prior to the completion of the High German consonant shift, again by analogy with the way Latin represented the Greek sound.
The history of the digraphs ⟨ph⟩ for /f/ and ⟨ch⟩ for Scots, Welsh or German /x/ is parallel.
⟨th⟩ for /t/
Since neither /tʰ/ nor /θ/ was a native sound in Latin, the tendency must have emerged early, and at the latest by medieval Latin, to substitute /t/. Thus in many modern languages, including French and German, the ⟨th⟩ digraph is used in Greek loan-words to represent an original /θ/, but is now pronounced /t/: examples are French théâtre, German Theater. In some cases, this etymological ⟨th⟩, which has no remaining significance for pronunciation, has been transferred to words in which there is no etymological justification for it. For example, German Tal ('valley', cognate with English dale) appears in many place-names with an archaic spelling Thal (contrast Neandertal and Neanderthal). The German family names Theuerkauf and Thürnagel are other examples. The German spelling reform of 1901 largely reversed these, but they remain in some proper nouns.
Examples of this are also to be found in English, perhaps influenced immediately by French. In some Middle English manuscripts, ⟨th⟩ appears for ⟨t⟩ or ⟨d⟩: tho 'to' or 'do', thyll till, whythe white, thede deed. In Modern English we see it in Esther, Thomas, Thames, thyme, Witham (the town in Essex, not the river in Lincolnshire which is pronounced with /ð/) and the old spelling of Satan as Sathan.
In a small number of cases, this spelling later influenced the pronunciation: amaranth, amianthus and author have spelling pronunciations with /θ/, and some English speakers use /θ/ in Neanderthal.
⟨th⟩ for /th/
A few English compound words, such as lightheaded or hothouse, have the letter combination ⟨th⟩ split between the parts, though this is not a digraph. Here, the ⟨t⟩ and ⟨h⟩ are pronounced separately (light-headed) as a cluster of two consonants. Other examples are anthill, goatherd, lighthouse, outhouse, pothead; also in words formed with the suffix -hood: knighthood, and the similarly formed Afrikaans loanword apartheid. In a few place names ending in t+ham the t-h boundary has been lost and become a spelling pronunciation, for example Grantham.
- English pronunciation
- Received Pronunciation
- Spelling pronunciation
- Non-native pronunciations of English
- English orthography
- examples from Collins and Mees p. 103
- In fact, some linguists see 'em as originally a separate word, a remnant of Old English hem, but as the apostrophe shows, it is perceived in modern English as a contraction. See Online Etymology Dictionary. 'em. Retrieved on 18 September 2006.
- Wright (1981:137)
- Wells (1982:329)
- Phonological Features of African American Vernacular English
- The American Heritage Dictionary, 1969.
- Kenyon, John S.; Knott, Thomas A. (1953) . A Pronouncing Dictionary of American English. Springfield, Mass.: Merriam-Webster. p. 87. ISBN 0-87779-047-7.
- Beverley Collins and Inger M. Mees (2003), Practical Phonetics and Phonology, Routledge, ISBN 0-415-26133-3. (2nd edn 2008.)
- Shitara, Yuko (1993). "A survey of American pronunciation preferences." Speech Hearing and Language 7: 201–32.
- Wells, John C. (1982), Accents of English 2: The British Isles, Cambridge: Cambridge University Press, ISBN 0-521-24224-X
- Wright, Peter (1981), Cockney Dialect and Slang, London: B.T. Batsford Ltd. | https://en.wikipedia.org/wiki/Pronunciation_of_English_%E2%9F%A8th%E2%9F%A9 |
4.125 | Strategies, ideas, and instructional guidelines for helping readers develop a deep understanding of the texts that they read
- Grades: PreK–K, 1–2, 3–5, 6–8, 9–12
Presents a lesson for reviewing reading comprehension strategies. First year teachers or new teachers will have students apply those strategies toward composing an oral presentation.
The anchor text for my Cinderella Unit, the 1812 version of Cinderella by Jacob and Wilhelm Grimm, is challenging, but the content is engaging. I have found that students put more effort into reading challenging text if the topics are engaging. Fairy tales, originally meant for adults, intrigue middle school students. This post includes a download of a SMART Board predictogram activity.
A Socratic Seminar allows students to shine while deeply increasing comprehension. Learning about this methodology changed my perspective on teaching and also allowed me to secure a highly successful observation. Several videos, support tools, and a detailed lesson plan are included.
Howard Gardner suggests that intelligence encompasses several different components, one of which is music. I use music in my classroom to manage the day and to tap into the talents of those students who are high on the musical intelligence spectrum. One way to engage these students in reading is to use lyrics to teach the difference between the literal and beyond literal meaning of texts.
Tips and Strategies
Get ideas from teachers and experts on how to deepen reading comprehension in your students.
Help your students truly understand the content that they are reading with these helpful tips and strategies.
Even for upper grade students, Dr. Seuss can help teach the fantastic power of symbolism while reading. Classic books develop a deeper meaning for us as we grow older and gain life experience -- older students can read his books with new eyes. Who would have figured that Yertle the Turtle represents Adolph Hitler? Discover lesson possibilities, book suggestions, photos, and anchor charts in this blog post.
When students read or listen to non-fiction, they must locate details that pertain to the main idea of the selection. Whenever we study a new unit in class, we rely on our prior knowledge and use focus questions as well as text features to help "set the purpose" for what we are preparing to read. This week's entry about locating the main idea and supporting details in a selection will help your students work on the most important skill in reading.
It is crucial that we expose our students to nonfiction texts as often as possible. This month I share resources for teaching nonfiction reading concepts, including posters, links to great Web sites and articles, printables, an exciting new way to make current events interactive, and much more!
Some critics claim that interactive whiteboards (IWBs) are glorified, expensive projectors. I suppose they are, if they are used as a presentation tool and not as a learning tool that requires student interaction. There are effective ways of implementing an IWB into reading and writing without a lot of time or technological skills. | http://www.scholastic.com/teachers/collection/reading-comprehension |
4.15625 | A phosphor, most generally, is a substance that exhibits the phenomenon of luminescence. Somewhat confusingly, this includes both phosphorescent materials, which show a slow decay in brightness (> 1 ms), and fluorescent materials, where the emission decay takes place over tens of nanoseconds. Phosphorescent materials are known for their use in radar screens and glow-in-the-dark toys, whereas fluorescent materials are common in cathode ray tube (CRT) and plasma video display screens, sensors, and white LEDs.
Phosphors are often transition metal compounds or rare earth compounds of various types. The most common uses of phosphors are in CRT displays and fluorescent lights. CRT phosphors were standardized beginning around World War II and designated by the letter "P" followed by a number.
A material can emit light either through incandescence, where all atoms radiate, or by luminescence, where only a small fraction of atoms, called emission centers or luminescence centers, emit light. In inorganic phosphors, these inhomogeneities in the crystal structure are usually created by the addition of a trace amount of dopants, impurities called activators. (In rare cases dislocations or other crystal defects can play the role of the impurity.) The wavelength emitted by the emission center is dependent on the atom itself, and on the surrounding crystal structure.
The scintillation process in inorganic materials is due to the electronic band structure found in the crystals. An incoming particle can excite an electron from the valence band to either the conduction band or the exciton band (located just below the conduction band and separated from the valence band by an energy gap). This leaves an associated hole behind, in the valence band. Impurities create electronic levels in the forbidden gap. The excitons are loosely bound electron-hole pairs that wander through the crystal lattice until they are captured as a whole by impurity centers. The latter then rapidly de-excite by emitting scintillation light (fast component). In case of inorganic scintillators, the activator impurities are typically chosen so that the emitted light is in the visible range or near-UV where photomultipliers are effective. The holes associated with electrons in the conduction band are independent from the latter. Those holes and electrons are captured successively by impurity centers exciting certain metastable states not accessible to the excitons. The delayed de-excitation of those metastable impurity states, slowed down by reliance on the low-probability forbidden mechanism, again results in light emission (slow component).
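The fast and slow components described above are commonly summarized by a bi-exponential decay law. The form below is a generic illustration (the symbols and the two-term model are assumptions for the sketch, not values for any specific material), with the first term representing the prompt exciton emission and the second the delayed emission from metastable impurity states:

```latex
% Illustrative bi-exponential scintillation decay (generic, not material-specific)
I(t) = A_{f}\, e^{-t/\tau_{f}} + A_{s}\, e^{-t/\tau_{s}}, \qquad \tau_{f} \ll \tau_{s}
```

Here I(t) is the emitted intensity, A_f and A_s are the amplitudes of the two components, and τ_f and τ_s are the fast and slow decay constants.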
Many phosphors tend to lose efficiency gradually by several mechanisms. The activators can undergo change of valence (usually oxidation), the crystal lattice degrades, atoms – often the activators – diffuse through the material, the surface undergoes chemical reactions with the environment with consequent loss of efficiency or buildup of a layer absorbing either the exciting or the radiated energy, etc.
The degradation of electroluminescent devices depends on frequency of driving current, the luminance level, and temperature; moisture impairs phosphor lifetime very noticeably as well.
Harder, high-melting, water-insoluble materials display a lower tendency to lose luminescence under operation.
- BaMgAl10O17:Eu2+ (BAM), a plasma display phosphor, undergoes oxidation of the dopant during baking. Three mechanisms are involved: absorption of oxygen atoms into oxygen vacancies on the crystal surface, diffusion of Eu(II) along the conductive layer, and electron transfer from Eu(II) to adsorbed oxygen atoms, leading to formation of Eu(III) with corresponding loss of emissivity. A thin coating of aluminium phosphate or lanthanum(III) phosphate is effective in creating a barrier layer blocking access of oxygen to the BAM phosphor, at the cost of a reduction in phosphor efficiency. Addition of hydrogen, acting as a reducing agent, to argon in the plasma displays significantly extends the lifetime of BAM:Eu2+ phosphor, by reducing the Eu(III) atoms back to Eu(II).
- Y2O3:Eu phosphors under electron bombardment in the presence of oxygen form a non-phosphorescent layer on the surface, where electron-hole pairs recombine nonradiatively via surface states.
- ZnS:Mn, used in AC thin film electroluminescent (ACTFEL) devices, degrades mainly due to the formation of deep-level traps by reaction of water molecules with the dopant; the traps act as centers for nonradiative recombination. The traps also damage the crystal lattice. Phosphor aging leads to decreased brightness and elevated threshold voltage.
- ZnS-based phosphors in CRTs and FEDs degrade by surface excitation, coulombic damage, build-up of electric charge, and thermal quenching. Electron-stimulated reactions of the surface are directly correlated to loss of brightness. The electrons dissociate impurities in the environment; the reactive oxygen species then attack the surface and form carbon monoxide and carbon dioxide with traces of carbon, as well as nonradiative zinc oxide and zinc sulfate on the surface; the reactive hydrogen removes sulfur from the surface as hydrogen sulfide, forming a nonradiative layer of metallic zinc. Sulfur can also be removed as sulfur oxides.
- ZnS and CdS phosphors degrade by reduction of the metal ions by captured electrons. The M2+ ions are reduced to M+; two M+ then exchange an electron and become one M2+ and one neutral M atom. The reduced metal can be observed as a visible darkening of the phosphor layer. The darkening (and the brightness loss) is proportional to the phosphor's exposure to electrons, and can be observed on some CRT screens that displayed the same image (e.g. a terminal login screen) for prolonged periods.
- Europium(II)-doped alkaline earth aluminates degrade by formation of color centers.
- Y2SiO5:Ce3+ degrades by loss of luminescent Ce3+ ions.
- Zn2SiO4:Mn (P1) degrades by desorption of oxygen under electron bombardment.
- Oxide phosphors can degrade rapidly in presence of fluoride ions, remaining from incomplete removal of flux from phosphor synthesis.
- Loosely packed phosphors, e.g. when an excess of silica gel (formed from the potassium silicate binder) is present, have a tendency to locally overheat due to poor thermal conductivity. For example, InBO3:Tb3+ is subject to accelerated degradation at higher temperatures.
Phosphors are usually made from a suitable host material with an added activator. The best known types are copper-activated zinc sulfide and silver-activated zinc sulfide (zinc sulfide silver).
The host materials are typically oxides, nitrides and oxynitrides, sulfides, selenides, halides or silicates of zinc, cadmium, manganese, aluminium, silicon, or various rare earth metals. The activators prolong the emission time (afterglow). In turn, other materials (such as nickel) can be used to quench the afterglow and shorten the decay part of the phosphor emission characteristics.
Many phosphor powders are produced in low-temperature processes, such as sol-gel, and usually require post-annealing at temperatures of ~1000 °C, which is undesirable for many applications. However, proper optimization of the growth process allows the annealing to be avoided.
Phosphors used for fluorescent lamps require a multi-step production process, with details that vary depending on the particular phosphor. Bulk material must be milled to obtain a desired particle size range, since large particles produce a poor-quality lamp coating and small particles produce less light and degrade more quickly. During the firing of the phosphor, process conditions must be controlled to prevent oxidation of the phosphor activators or contamination from the process vessels. After milling, the phosphor may be washed to remove a minor excess of activator elements. Volatile elements must not be allowed to escape during processing. Lamp manufacturers have changed the composition of phosphors to eliminate some toxic elements formerly used, such as beryllium, cadmium, or thallium.
The commonly quoted parameters for phosphors are the wavelength of emission maximum (in nanometers, or alternatively color temperature in kelvins for white blends), the peak width (in nanometers at 50% of intensity), and decay time (in seconds).
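The emission-peak wavelength quoted above maps directly to photon energy through E = hc/λ (about 1239.84 eV·nm divided by the wavelength in nanometers). The short sketch below is a hypothetical helper for such back-of-the-envelope conversions; the function name and the example value are illustrative and not taken from any datasheet.

```python
# Hypothetical helper for working with quoted phosphor parameters.
PLANCK_TIMES_C_EV_NM = 1239.84  # h*c expressed in eV*nm

def photon_energy_ev(peak_wavelength_nm: float) -> float:
    """Convert an emission-peak wavelength (nm) to photon energy (eV)."""
    return PLANCK_TIMES_C_EV_NM / peak_wavelength_nm

# Example: a green-emitting phosphor peaking at 530 nm emits ~2.34 eV photons.
print(round(photon_energy_ev(530.0), 2))
```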
Phosphor layers provide most of the light produced by fluorescent lamps, and are also used to improve the balance of light produced by metal halide lamps. Various neon signs use phosphor layers to produce different colors of light. Electroluminescent displays found, for example, in aircraft instrument panels, use a phosphor layer to produce glare-free illumination or as numeric and graphic display devices. White LED lamps consist of a blue or ultra-violet emitter with a phosphor coating that emits at longer wavelengths, giving a full spectrum of visible light.
Phosphor thermometry is a temperature measurement approach that uses the temperature dependence of certain phosphors. For this, a phosphor coating is applied to a surface of interest and, usually, the decay time is the emission parameter that indicates temperature. Because the illumination and detection optics can be situated remotely, the method may be used for moving surfaces such as high speed motor surfaces. Also, phosphor may be applied to the end of an optical fiber as an optical analog of a thermocouple.
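Since the decay time is the quantity actually read out in phosphor thermometry, the measurement reduces to fitting an exponential to the recorded luminescence trace and then mapping the fitted decay constant to temperature through a calibration curve. The following is a minimal sketch using synthetic data and an assumed single-exponential signal model; it is illustrative only and not a description of any particular instrument.

```python
# Minimal sketch: extract a phosphor decay time from a luminescence trace.
import numpy as np
from scipy.optimize import curve_fit

def decay_model(t, amplitude, tau, offset):
    """Single-exponential decay: I(t) = A * exp(-t / tau) + offset."""
    return amplitude * np.exp(-t / tau) + offset

# Synthetic trace: an assumed 50 microsecond decay sampled over 500 microseconds.
t = np.linspace(0.0, 500e-6, 1000)
signal = decay_model(t, 1.0, 50e-6, 0.02)
signal += np.random.normal(scale=0.01, size=t.size)  # detector noise

# Fit the model; popt[1] is the decay time, which a separate calibration
# curve would convert to surface temperature.
popt, _ = curve_fit(decay_model, t, signal, p0=(1.0, 100e-6, 0.0))
print(f"fitted decay time: {popt[1] * 1e6:.1f} microseconds")
```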
- Calcium sulfide with strontium sulfide, with bismuth as activator, (Ca,Sr)S:Bi, yields blue light with glow times up to 12 hours; red and orange are modifications of the zinc sulfide formula. Red color can be obtained from strontium sulfide.
- Zinc sulfide with about 5 ppm of a copper activator is the most common phosphor for the glow-in-the-dark toys and items. It is also called GS phosphor.
- A mix of zinc sulfide and cadmium sulfide emits a color depending on their ratio; increasing the CdS content shifts the output color towards longer wavelengths; its persistence ranges between 1 and 10 hours.
- Strontium aluminate activated by europium, SrAl2O4:Eu(II):Dy(III), is a newer material with higher brightness and significantly longer glow persistence; it produces green and aqua hues, where green gives the highest brightness and aqua the longest glow time. SrAl2O4:Eu:Dy is about 10 times brighter, 10 times longer glowing, and 10 times more expensive than ZnS:Cu. The excitation wavelengths for strontium aluminate range from 200 to 450 nm. The wavelength for its green formulation is 520 nm, its blue-green version emits at 505 nm, and the blue one emits at 490 nm. Colors with longer wavelengths can be obtained from the strontium aluminate as well, though for the price of some loss of brightness.
In these applications, the phosphor is directly added to the plastic used to mold the toys, or mixed with a binder for use as paints.
ZnS:Cu phosphor is used in glow-in-the-dark cosmetic creams frequently used for Halloween make-ups. Generally, the persistence of the phosphor increases as the wavelength increases. See also lightstick for chemiluminescence-based glowing items.
Zinc sulfide phosphors are used with radioactive materials, where the phosphor is excited by the alpha- and beta-decaying isotopes, to create luminescent paint for dials of watches and instruments (radium dials). Between 1913 and 1950 radium-228 and radium-226 were used to activate a phosphor made of silver-doped zinc sulfide (ZnS:Ag), which gave a greenish glow. The phosphor is not suitable to be used in layers thicker than 25 mg/cm², as the self-absorption of the light then becomes a problem. Furthermore, zinc sulfide undergoes degradation of its crystal lattice structure, leading to gradual loss of brightness significantly faster than the depletion of radium. ZnS:Ag-coated spinthariscope screens were used by Ernest Rutherford in his experiments discovering the atomic nucleus.
Electroluminescence can be exploited in light sources. Such sources typically emit from a large area, which makes them suitable for backlights of LCD displays. The excitation of the phosphor is usually achieved by application of high-intensity electric field, usually with suitable frequency. Current electroluminescent light sources tend to degrade with use, resulting in their relatively short operation lifetimes.
ZnS:Cu was the first formulation successfully displaying electroluminescence, tested in 1936 by Georges Destriau in the Marie Curie laboratories in Paris.
Indium tin oxide (ITO, also known under the trade name IndiGlo) composite is used in some Timex watches, though as the electrode material, not as a phosphor itself. "Lighttape" is another trade name of an electroluminescent material, used in electroluminescent light strips.
White light-emitting diodes are usually blue InGaN LEDs with a coating of a suitable material. Cerium(III)-doped YAG (YAG:Ce3+, or Y3Al5O12:Ce3+) is often used; it absorbs the light from the blue LED and emits in a broad range from greenish to reddish, with most of output in yellow. This yellow emission combined with the remaining blue emission gives the “white” light, which can be adjusted to color temperature as warm (yellowish) or cold (blueish) white. The pale yellow emission of the Ce3+:YAG can be tuned by substituting the cerium with other rare earth elements such as terbium and gadolinium and can even be further adjusted by substituting some or all of the aluminium in the YAG with gallium. However, this process is not one of phosphorescence. The yellow light is produced by a process known as scintillation, the complete absence of an afterglow being one of the characteristics of the process.
Some rare-earth doped Sialons are photoluminescent and can serve as phosphors. Europium(II)-doped β-SiAlON absorbs in the ultraviolet and visible light spectrum and emits intense broadband visible emission. Its luminance and color do not change significantly with temperature, due to the temperature-stable crystal structure. It has great potential as a green down-conversion phosphor for white LEDs; a yellow variant also exists. For white LEDs, a blue LED is used with a yellow phosphor, or with a green and yellow SiAlON phosphor and a red CaAlSiN3-based (CASN) phosphor.
White LEDs can also be made by coating near ultraviolet (NUV) emitting LEDs with a mixture of high efficiency europium based red and blue emitting phosphors plus green emitting copper and aluminium doped zinc sulfide (ZnS:Cu,Al). This is a method analogous to the way fluorescent lamps work.
A significant part of the white LEDs used in general lighting systems can even now be used for data transfer, for example in indoor-positioning systems that help people find particular rooms or objects.
Cathode ray tubes
Cathode ray tubes produce signal-generated light patterns in a (typically) round or rectangular format. Bulky CRTs were used in the black-and-white household television ("TV") sets that became popular in the 1950s, as well as first-generation, tube-based color TVs, and most earlier computer monitors. CRTs have also been widely used in scientific and engineering instrumentation, such as oscilloscopes, usually with a single phosphor color, typically green.
White (in black-and-white): The mix of zinc cadmium sulfide and zinc sulfide silver, the ZnS:Ag+(Zn,Cd)S:Ag is the white P4 phosphor used in black and white television CRTs.
Red: Yttrium oxide-sulfide activated with europium is used as the red phosphor in color CRTs. The development of color TV took a long time due to the search for a red phosphor. The first red-emitting rare earth phosphor, YVO4:Eu3+, was introduced by Levine and Palilla as a primary color in television in 1964. In single-crystal form, it was used as an excellent polarizer and laser material.
Yellow: When mixed with cadmium sulfide, the resulting zinc cadmium sulfide (Zn,Cd)S:Ag, provides strong yellow light.
Green: Combination of zinc sulfide with copper, the P31 phosphor or ZnS:Cu, provides green light peaking at 531 nm, with long glow.
Blue: Combination of zinc sulfide with a few ppm of silver, the ZnS:Ag, when excited by electrons, provides a strong blue glow with maximum at 450 nm and a short afterglow of 200 nanosecond duration. It is known as the P22B phosphor. This material, zinc sulfide silver, is still one of the most efficient phosphors in cathode ray tubes. It is used as a blue phosphor in color CRTs.
The phosphors are usually poor electrical conductors. This may lead to deposition of residual charge on the screen, effectively decreasing the energy of the impacting electrons due to electrostatic repulsion (an effect known as "sticking"). To eliminate this, a thin layer of aluminium is deposited over the phosphors and connected to the conductive layer inside the tube. This layer also reflects the phosphor light to the desired direction, and protects the phosphor from ion bombardment resulting from an imperfect vacuum.
To reduce the image degradation by reflection of ambient light, contrast can be increased by several methods. In addition to black masking of unused areas of screen, the phosphor particles in color screens are coated with pigments of matching color. For example, the red phosphors are coated with ferric oxide (replacing earlier Cd(S,Se) due to cadmium toxicity), blue phosphors can be coated with marine blue (CoO·nAl2O3) or ultramarine (Na8Al6Si6O24S2). Green phosphors based on ZnS:Cu do not have to be coated due to their own yellowish color.
Standard phosphor types
|Phosphor||Composition||Color||Wavelength||Peak width||Persistence||Usage||Notes|
|P1, GJ||Zn2SiO4:Mn (Willemite)||Green||528 nm||40 nm||1-100ms||CRT, Lamp||Oscilloscopes and monochrome monitors|
|P3||Zn8:BeSi5O19:Mn||Yellow||602 nm||–||Medium/13ms||CRT||Amber monochrome monitors|
|P4||ZnS:Ag+(Zn,Cd)S:Ag||White||565,540 nm||–||Short||CRT||Black and white TV CRTs and display tubes.|
|P4 (Cd-free)||ZnS:Ag+ZnS:Cu+Y2O2S:Eu||White||–||–||Short||CRT||Black and white TV CRTs and display tubes, Cd free.|
|P4, GE||ZnO:Zn||Green||505 nm||–||1–10µs||VFD||sole phosphor in vacuum fluorescent displays.|
|P5||Blue||430 nm||–||Very Short||CRT||Film|
|P7||(Zn,Cd)S:Cu||Blue with Yellow persistence||558,440 nm||–||Long||CRT||Radar PPI, old EKG monitors|
|P10||KCl||green-absorbing scotophor||–||–||Long||Dark-trace CRTs||Radar screens; turns from translucent white to dark magenta, stays changed until erased by heating or infrared light|
|P11, BE||ZnS:Ag,Cl or ZnS:Zn||Blue||460 nm||–||0.01-1 ms||CRT, VFD||Display tubes and VFDs|
|P14||Blue with Orange persistence||–||–||Medium/Long||CRT||Radar PPI, old EKG monitors|
|P15||ZnO:Zn||Blue-Green||504,391 nm||–||Extremely Short||CRT||Television pickup by flying-spot scanning|
|P19, LF||(KF,MgF2):Mn||Orange-Yellow||590 nm||–||Long||CRT||Radar screens|
|P20, KA||(Zn,Cd)S:Ag or (Zn,Cd)S:Cu||Yellow-green||555 nm||–||1–100 ms||CRT||Display tubes|
|P22R||Y2O2S:Eu+Fe2O3||Red||611 nm||–||Short||CRT||Red phosphor for TV screens|
|P22G||ZnS:Cu,Al||Green||530 nm||–||Short||CRT||Green phosphor for TV screens|
|P22B||ZnS:Ag+Co-on-Al2O3||Blue||–||–||Short||CRT||Blue phosphor for TV screens|
|P26, LC||(KF,MgF2):Mn||Orange||595 nm||–||Long||CRT||Radar screens|
|P28, KE||(Zn,Cd)S:Cu,Cl||Yellow||–||–||Medium||CRT||Display tubes|
|P31, GH||ZnS:Cu or ZnS:Cu,Ag||Yellowish-green||–||–||0.01-1 ms||CRT||Oscilloscopes|
|P33, LD||MgF2:Mn||Orange||590 nm||–||> 1sec||CRT||Radar screens|
|P38, LK||(Zn,Mg)F2:Mn||Orange-Yellow||590 nm||–||Long||CRT||Radar screens|
|P39, GR||Zn2SiO4:Mn,As||Green||525 nm||–||Long||CRT||Display tubes|
|P40, GA||ZnS:Ag+(Zn,Cd)S:Cu||White||–||–||Long||CRT||Display tubes|
|P43, GY||Gd2O2S:Tb||Yellow-green||545 nm||–||Medium||CRT||Display tubes, Electronic Portal Imaging Devices (EPIDs) used in radiation therapy linear accelerators for cancer treatment|
|P45, WB||Y2O2S:Tb||White||545 nm||–||Short||CRT||Viewfinders|
|P46, KG||Y3Al5O12:Ce||Green||530 nm||–||Very short||CRT||Beam-index tube|
|P47, BH||Y2SiO5:Ce||Blue||400 nm||–||Very short||CRT||Beam-index tube|
|P53, KJ||Y3Al5O12:Tb||Yellow-green||544 nm||–||Short||CRT||Projection tubes|
|P55, BM||ZnS:Ag,Al||Blue||450 nm||–||Short||CRT||Projection tubes|
|ZnS:Cu,Al or ZnS:Cu,Au,Al||Green||530 nm||–||–||CRT||–|
|Y2SiO5:Tb||Green||545 nm||–||–||CRT||Projection tubes|
|Y2OS:Tb||Green||545 nm||–||–||CRT||Display tubes|
|Y3(Al,Ga)5O12:Ce||Green||520 nm||–||Short||CRT||Beam-index tube|
|Y3(Al,Ga)5O12:Tb||Yellow-green||544 nm||–||Short||CRT||Projection tubes|
|(Ba,Eu)Mg2Al16O27||Blue||–||–||–||Lamp||Trichromatic fluorescent lamps|
|(Ce,Tb)MgAl11O19||Green||546 nm||9 nm||–||Lamp||Trichromatic fluorescent lamps|
|BAM||BaMgAl10O17:Eu,Mn||Blue||450 nm||–||–||Lamp, displays||Trichromatic fluorescent lamps|
|BaMg2Al16O27:Eu(II)||Blue||450 nm||52 nm||–||Lamp||Trichromatic fluorescent lamps|
|BAM||BaMgAl10O17:Eu,Mn||Blue-Green||456 nm,514 nm||–||–||Lamp||–|
|BaMg2Al16O27:Eu(II),Mn(II)||Blue-Green||456 nm, 514 nm||50 nm 50%||–||Lamp|
|Ce0.67Tb0.33MgAl11O19:Ce,Tb||Green||543 nm||–||–||Lamp||Trichromatic fluorescent lamps|
|CaSiO3:Pb,Mn||Orange-Pink||615 nm||83 nm||–||Lamp|
|CaWO4 (Scheelite)||Blue||417 nm||–||–||Lamp||–|
|CaWO4:Pb||Blue||433 nm/466 nm||111 nm||–||Lamp||Wide bandwidth|
|MgWO4||Blue pale||473 nm||118 nm||–||Lamp||Wide bandwidth, deluxe blend component |
|(Sr,Eu,Ba,Ca)5(PO4)3Cl||Blue||–||–||–||Lamp||Trichromatic fluorescent lamps|
|Sr5Cl(PO4)3:Eu(II)||Blue||447 nm||32 nm||–||Lamp||–|
|(Sr,Ca,Ba)10(PO4)6Cl2:Eu||Blue||453 nm||–||–||Lamp||Trichromatic fluorescent lamps|
|Sr2P2O7:Sn(II)||Blue||460 nm||98 nm||–||Lamp||Wide bandwidth, deluxe blend component|
|Sr6P5BO20:Eu||Blue-Green||480 nm||82 nm||–||Lamp||–|
|Ca5F(PO4)3:Sb||Blue||482 nm||117 nm||–||Lamp||Wide bandwidth|
|(Ba,Ti)2P2O7:Ti||Blue-Green||494 nm||143 nm||–||Lamp||Wide bandwidth, deluxe blend component |
|Sr5F(PO4)3:Sb,Mn||Blue-Green||509 nm||127 nm||–||Lamp||Wide bandwidth|
|LaPO4:Ce,Tb||Green||544 nm||–||–||Lamp||Trichromatic fluorescent lamps|
|(La,Ce,Tb)PO4||Green||–||–||–||Lamp||Trichromatic fluorescent lamps|
|(La,Ce,Tb)PO4:Ce,Tb||Green||546 nm||6 nm||–||Lamp||Trichromatic fluorescent lamps|
|(Ca,Zn,Mg)3(PO4)2:Sn||Orange-Pink||610 nm||146 nm||–||Lamp||Wide bandwidth, blend component|
|(Sr,Mg)3(PO4)2:Sn||Orange-Pinkish White||626 nm||120 nm||–||Fluorescent Lamps||Wide bandwidth, deluxe blend component|
|(Sr,Mg)3(PO4)2:Sn(II)||Orange-Red||630 nm||–||–||Fluorescent Lamps||–|
|Ca5F(PO4)3:Sb,Mn||3800K||–||–||–||Fluorescent Lamps||Lite-white blend|
|Ca5(F,Cl)(PO4)3:Sb,Mn||White-Cold/Warm||–||–||–||Fluorescent Lamps||2600K to 9900K, for very high output lamps|
|(Y,Eu)2O3||Red||–||–||–||Lamp||Trichromatic fluorescent lamps|
|Y2O3:Eu(III)||Red||611 nm||4 nm||–||Lamp||Trichromatic fluorescent lamps|
|Mg4(F)GeO6:Mn||Red||658 nm||17 nm||–||High Pressure Mercury Lamps|||
|YVO4:Eu||Orange-Red||619 nm||–||–||High Pressure Mercury and Metal Halide Lamps||–|
|3.5 MgO · 0.5 MgF2 · GeO2 :Mn||Red||655 nm||–||–||Lamp||3.5 MgO · 0.5 MgF2 · GeO2 :Mn|
|Mg5As2O11:Mn||Red||660 nm||–||–||High Pressure Mercury Lamps, 1960s||–|
|SrAl2O7:Pb||Ultraviolet||313 nm||–||–||Special Fluorescent Lamps for Medical use||Ultraviolet|
|CAM||LaMgAl11O19:Ce||Ultraviolet||340 nm||52 nm||–||Black-light Fluorescent Lamps||Ultraviolet|
|LAP||LaPO4:Ce||Ultraviolet||320 nm||38 nm||–||Medical and scientific U.V. Lamps||Ultraviolet|
|SAC||SrAl12O19:Ce||Ultraviolet||295 nm||34 nm||–||Lamp||Ultraviolet|
|SrAl11Si0.75O19:Ce0.15Mn0.15||Green||515 nm||22 nm||–||Lamp||Monochromatic lamps for copiers|
|BSP||BaSi2O5:Pb||Ultraviolet||350 nm||40 nm||–||Lamp||Ultraviolet|
|SBE||SrB4O7:Eu||Ultraviolet||368 nm||15 nm||–||Lamp||Ultraviolet|
|SMS||Sr2MgSi2O7:Pb||Ultraviolet||365 nm||68 nm||–||Lamp||Ultraviolet|
|MgGa2O4:Mn(II)||Blue-Green||–||–||–||Lamp||Black light displays|
- Gd2O2S:Tb (P43), green (peak at 545 nm), 1.5 ms decay to 10%, low afterglow, high X-ray absorption, for X-ray, neutrons and gamma
- Gd2O2S:Eu, red (627 nm), 850 µs decay, afterglow, high X-ray absorption, for X-ray, neutrons and gamma
- Gd2O2S:Pr, green (513 nm), 7 µs decay, no afterglow, high X-ray absorption, for X-ray, neutrons and gamma
- Gd2O2S:Pr,Ce,F, green (513 nm), 4 µs decay, no afterglow, high X-ray absorption, for X-ray, neutrons and gamma
- Y2O2S:Tb (P45), white (545 nm), 1.5 ms decay, low afterglow, for low-energy X-ray
- Y2O2S:Eu (P22R), red (627 nm), 850 µs decay, afterglow, for low-energy X-ray
- Y2O2S:Pr, white (513 nm), 7 µs decay, no afterglow, for low-energy X-ray
- Zn(0.5)Cd(0.4)S:Ag (HS), green (560 nm), 80 µs decay, afterglow, efficient but low-res X-ray
- Zn(0.4)Cd(0.6)S:Ag (HSr), red (630 nm), 80 µs decay, afterglow, efficient but low-res X-ray
- CdWO4, blue (475 nm), 28 µs decay, no afterglow, intensifying phosphor for X-ray and gamma
- CaWO4, blue (410 nm), 20 µs decay, no afterglow, intensifying phosphor for X-ray
- MgWO4, white (500 nm), 80 µs decay, no afterglow, intensifying phosphor
- Y2SiO5:Ce (P47), blue (400 nm), 120 ns decay, no afterglow, for electrons, suitable for photomultipliers
- YAlO3:Ce (YAP), blue (370 nm), 25 ns decay, no afterglow, for electrons, suitable for photomultipliers
- Y3Al5O12:Ce (YAG), green (550 nm), 70 ns decay, no afterglow, for electrons, suitable for photomultipliers
- Y3(Al,Ga)5O12:Ce (YGG), green (530 nm), 250 ns decay, low afterglow, for electrons, suitable for photomultipliers
- CdS:In, green (525 nm), <1 ns decay, no afterglow, ultrafast, for electrons
- ZnO:Ga, blue (390 nm), <5 ns decay, no afterglow, ultrafast, for electrons
- ZnO:Zn (P15), blue (495 nm), 8 µs decay, no afterglow, for low-energy electrons
- (Zn,Cd)S:Cu,Al (P22G), green (565 nm), 35 µs decay, low afterglow, for electrons
- ZnS:Cu,Al,Au (P22G), green (540 nm), 35 µs decay, low afterglow, for electrons
- ZnCdS:Ag,Cu (P20), green (530 nm), 80 µs decay, low afterglow, for electrons
- ZnS:Ag (P11), blue (455 nm), 80 µs decay, low afterglow, for alpha particles and electrons
- anthracene, blue (447 nm), 32 ns decay, no afterglow, for alpha particles and electrons
- plastic (EJ-212), blue (400 nm), 2.4 ns decay, no afterglow, for alpha particles and electrons
- Zn2SiO4:Mn (P1), green (530 nm), 11 ms decay, low afterglow, for electrons
- ZnS:Cu (GS), green (520 nm), decay in minutes, long afterglow, for X-rays
- NaI:Tl, for X-ray, alpha, and electrons
- CsI:Tl, green (545 nm), 5 µs decay, afterglow, for X-ray, alpha, and electrons
- 6LiF/ZnS:Ag (ND), blue (455 nm), 80 µs decay, for thermal neutrons
- 6LiF/ZnS:Cu,Al,Au (NDg), green (565 nm), 35 µs decay, for neutrons
- Emsley, John (2000). The Shocking History of Phosphorus. London: Macmillan. ISBN 0-330-39005-8..
- Peter W. Hawkes (1 October 1990). Advances in electronics and electron physics. Academic Press. pp. 350–. ISBN 978-0-12-014679-6. Retrieved 9 January 2012.
- Bizarri, G; Moine, B (2005). "On phosphor degradation mechanism: thermal treatment effects". Journal of Luminescence 113 (3–4): 199. Bibcode:2005JLum..113..199B. doi:10.1016/j.jlumin.2004.09.119.
- Lakshmanan, p. 171
- Tanno, Hiroaki; Fukasawa, Takayuki; Zhang, Shuxiu; Shinoda, Tsutae; Kajiyama, Hiroshi (2009). "Lifetime Improvement of BaMgAl10O17:Eu2+Phosphor by Hydrogen Plasma Treatment". Japanese Journal of Applied Physics 48 (9): 092303. Bibcode:2009JaJAP..48i2303T. doi:10.1143/JJAP.48.092303.
- Ntwaeaborwa, O. M.; Hillie, K. T.; Swart, H. C. (2004). "Degradation of Y2O3:Eu phosphor powders". Physica Status Solidi (c) 1 (9): 2366. Bibcode:2004PSSCR...1.2366N. doi:10.1002/pssc.200404813.
- Wang, Ching-Wu; Sheu, Tong-Ji; Su, Yan-Kuin; Yokoyama, Meiso (1997). "Deep Traps and Mechanism of Brightness Degradation in Mn-doped ZnS Thin-Film Electroluminescent Devices Grown by Metal-Organic Chemical Vapor Deposition". Japanese Journal of Applied Physics 36: 2728. Bibcode:1997JaJAP..36.2728W. doi:10.1143/JJAP.36.2728.
- Lakshmanan, pp. 51, 76
- PPT presentation in Polish
- Xie, Rong-Jun; Hirosaki, Naoto (2007). "Silicon-based oxynitride and nitride phosphors for white LEDs—A review" (free pdf). Sci. Technol. Adv. Mater. 8 (7–8): 588. Bibcode:2007STAdM...8..588X. doi:10.1016/j.stam.2007.08.005.
- Li, Hui-Li; Hirosaki, Naoto; Xie, Rong-Jun; Suehiro, Takayuki; Mitomo, Mamoru (2007). "Fine yellow α-SiAlON:Eu phosphors for white LEDs prepared by the gas-reduction–nitridation method" (free pdf). Sci. Techno. Adv. Mater. 8 (7–8): 601. Bibcode:2007STAdM...8..601L. doi:10.1016/j.stam.2007.09.003.
- Raymond Kane, Heinz Sell Revolution in lamps: a chronicle of 50 years of progress (2nd ed.), The Fairmont Press, Inc. 2001 ISBN 0-88173-378-4 . Chapter 5 extensively discusses history, application and manufacturing of phosphors for lamps.
- Youn-Gon Park; et al. "Luminescence and temperature dependency of β-SiAlON phosphor". Samsung Electro Mechanics Co.
- Hideyoshi Kume, Nikkei Electronics (Sep 15, 2009). "Sharp to Employ White LED Using Sialon".
- Hirosaki Naoto; et al. (2005). "New sialon phosphors and white LEDs". Oyo Butsuri 74 (11): 1449.
- M.S. Fudin; et al. (2014). "Frequency characteristics of modern LED phosphor materials". Scientific and Technical Journal of Information Technologies, Mechanics and Optics 14 (6): 71.
- Levine, Albert K.; Palilla, Frank C. (1964). "A new, highly efficient red-emitting cathodoluminescent phosphor (YVO4:Eu) for color television". Applied Physics Letters 5 (6): 118. Bibcode:1964ApPhL...5..118L. doi:10.1063/1.1723611.
- Fields, R. A.; Birnbaum, M.; Fincher, C. L. (1987). "Highly efficient Nd:YVO4 diode-laser end-pumped laser". Applied Physics Letters 51 (23): 1885. Bibcode:1987ApPhL..51.1885F. doi:10.1063/1.98500.
- Shigeo Shionoya (1999). "VI: Phosphors for cathode ray tubes". Phosphor handbook. Boca Raton, Fla.: CRC Press. ISBN 0-8493-7560-6.
- Jankowiak, Patrick. "Cathode Ray Tube Phosphors" (PDF). bunkerofdoom.com. Retrieved 1 May 2012.[unreliable source?]
- "Osram Sylvania fluorescent lamps". Retrieved 2009-06-06.
- Arunachalam Lakshmanan (2008). Luminescence and Display Phosphors: Phenomena and Applications. Nova Publishers. ISBN 1-60456-018-5.
- a history of electroluminescent displays.
- Fluorescence, Phosphorescence
- CRT Phosphor Characteristics (P numbers)
- Composition of CRT phosphors
- Safe Phosphors
- Silicon-based oxynitride and nitride phosphors for white LEDs—A review
- RCA Manual, Fluorescent screens (P1 to P24)
- Inorganic Phosphors Compositions, Preparation and Optical Properties, William M. Yen and Marvin J. Weber | https://en.wikipedia.org/wiki/Phosphor |
4.4375 | Multiplying and Dividing Exponents Teacher Resources
Find Multiplying and Dividing Exponents educational ideas and activities
Developing the Concept: Exponents and Powers of Ten
Here is an exponents lesson plan which invites learners to examine visual examples of multiplication and division using powers of 10. They also practice solving problems that their instructors model. If you are new to teaching these...
5th - 7th Math CCSS: Adaptable
What's the Power of a Quotient Rule?
What's the definition of the power of a quotient rule? You have a fraction that has variables and exponents in the numerator and the denominator, and the whole thing is raised to a power. Oh my! Don't cry! You can do this. Once you see the rule... (A generic statement of the rule is sketched below.)
6 mins 6th - 12th Math
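For reference, the rule being teased here can be stated generically; this is the standard identity, not material quoted from the video itself:

```latex
\left(\frac{a}{b}\right)^{n} = \frac{a^{n}}{b^{n}}, \quad b \neq 0,
\qquad \text{e.g.} \qquad
\left(\frac{x^{2}}{y^{3}}\right)^{4} = \frac{x^{8}}{y^{12}}
```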
Miss Integer Finds Her Properties in Order
Access prior knowledge to practice concepts like order of operations and exponents. Your class can play this game as a daily review or as a warm-up activity when needed. They work in groups of four to complete and correct review problems.
4th - 6th Math CCSS: Designed
Extending the Definitions of Exponents, Variation 1
Scientists work with negative integer exponents all the time. Here, participants will learn how to relate negative exponents to time and to generate equivalent numerical expressions. Learners will apply the properties of integer exponents... (The defining identity for negative exponents is sketched below for reference.)
7th - 9th Math CCSS: Designed
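As background for this lesson (a standard identity, not material quoted from the resource itself), the defining property of negative integer exponents is:

```latex
a^{-n} = \frac{1}{a^{n}}, \quad a \neq 0,
\qquad \text{e.g.} \qquad
10^{-3} = \frac{1}{10^{3}} = 0.001
```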
How Do You Evaluate an Expression with Exponents?
You have an algebraic expression and a given value for the variable. Use the substitution property of equality to plug in the given value and evaluate the expression. Be careful to apply the order of operations correctly, because there is an exponent... (A generic worked example follows below.)
3 mins 7th - 9th Math
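A generic worked example of the substitute-then-simplify procedure (chosen here for illustration; it is not the example used in the video):

```latex
\text{Evaluate } 3x^{2} + 1 \text{ at } x = 4: \qquad
3(4)^{2} + 1 = 3 \cdot 16 + 1 = 49
```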
Multiplying and Dividing in Scientific Notation - Grade 8
Here is a really nice set of resources on scientific notation. Eighth and ninth graders explore the concept of multiplying and dividing in scientific notation. In this multiplying and dividing numbers in scientific notation lesson,...
7th - 9th Math CCSS: Adaptable | http://www.lessonplanet.com/lesson-plans/multiplying-and-dividing-exponents |
4.125 | Last glacial period
The last glacial period, popularly known as the Ice Age, was the most recent glacial period within the Quaternary glaciation occurring during the last 100,000 years of the Pleistocene, from approximately 110,000 to 12,000 years ago. Scientists consider this "ice age" to be merely the latest glaciation event in a much larger ice age, one that dates back over two million years and has seen multiple glaciations.
During this period, there were several changes between glacier advance and retreat. The Last Glacial Maximum, the maximum extent of glaciation within the last glacial period, was approximately 22,000 years ago. While the general pattern of global cooling and glacier advance was similar, local differences in the development of glacier advance and retreat make it difficult to compare the details from continent to continent (see picture of ice core data below for differences).
From the point of view of human archaeology, it falls in the Paleolithic and Mesolithic periods. When the glaciation event started, Homo sapiens were confined to Africa and used tools comparable to those used by Neanderthals in Europe and the Levant and by Homo erectus in Asia. Near the end of the event, Homo sapiens spread into Europe, Asia, and Australia. The retreat of the glaciers allowed groups of Asians to migrate to the Americas and populate them.
Origin and definition
The last glacial period is sometimes colloquially referred to as the "last ice age", though this use is incorrect because an ice age is a longer period of cold temperature in which ice sheets cover large parts of the Earth, such as Antarctica. Glacials, on the other hand, refer to colder phases within an ice age that separate interglacials. Thus, the end of the last glacial period is not the end of the last ice age. The end of the last glacial period was about 10,500 BCE, while the end of the last ice age has not yet come.
Over the past few million years the glacial-interglacial cycle has been "paced" by periodic variations in the Earth's orbit via Milankovitch cycles which are thus the "cause" of ice ages.
The last glacial period is the best-known part of the current ice age, and has been intensively studied in North America, northern Eurasia, the Himalaya and other formerly glaciated regions around the world. The glaciations that occurred during this glacial period covered many areas, mainly in the Northern Hemisphere and to a lesser extent in the Southern Hemisphere. They have different names, historically developed and depending on their geographic distributions: Fraser (in the Pacific Cordillera of North America), Pinedale (in the Central Rocky Mountains), Wisconsinan or Wisconsin (in central North America), Devensian (in the British Isles), Midlandian (in Ireland), Würm (in the Alps), Mérida (in Venezuela), Weichselian or Vistulian (in Northern Europe and northern Central Europe), Valdai in Eastern Europe and Zyryanka in Siberia, Llanquihue in Chile, and Otira in New Zealand. The geochronological Late Pleistocene comprises the late glacial (Weichselian) and the immediately preceding penultimate interglacial (Eemian) period.
The last glaciation centered on the huge ice sheets of North America and Eurasia. Considerable areas in the Alps, the Himalaya and the Andes were ice-covered, and Antarctica remained glaciated.
Canada was nearly completely covered by ice, as well as the northern part of the United States, both blanketed by the huge Laurentide ice sheet. Alaska remained mostly ice free due to arid climate conditions. Local glaciations existed in the Rocky Mountains and the Cordilleran ice sheet and as ice fields and ice caps in the Sierra Nevada in northern California. In Britain, mainland Europe, and northwestern Asia, the Scandinavian ice sheet once again reached the northern parts of the British Isles, Germany, Poland, and Russia, extending as far east as the Taimyr Peninsula in western Siberia. The maximum extent of western Siberian glaciation was reached approximately 16,000 to 15,000 BCE and thus later than in Europe (20,000–16,000 BCE). Northeastern Siberia was not covered by a continental-scale ice sheet. Instead, large, but restricted, icefield complexes covered mountain ranges within northeast Siberia, including the Kamchatka-Koryak Mountains.
The Arctic Ocean between the huge ice sheets of America and Eurasia was not frozen throughout, but like today probably was only covered by relatively shallow ice, subject to seasonal changes and riddled with icebergs calving from the surrounding ice sheets. According to the sediment composition retrieved from deep-sea cores there must even have been times of seasonally open waters.
Outside the main ice sheets, widespread glaciation occurred on the Alps-Himalaya mountain chain. In contrast to the earlier glacial stages, the Würm glaciation was composed of smaller ice caps and mostly confined to valley glaciers, sending glacial lobes into the Alpine foreland. To the east the Caucasus and the mountains of Turkey and Iran were capped by local ice fields or small ice sheets. In the Himalaya and the Tibetan Plateau, glaciers advanced considerably, particularly between 45,000–25,000 BCE, but these datings are controversial. The formation of a contiguous ice sheet on the Tibetan Plateau is controversial.
Other areas of the Northern Hemisphere did not bear extensive ice sheets but had local glaciers in high areas. Parts of Taiwan, for example, were repeatedly glaciated between 42,250 and 8,680 BCE, as were the Japanese Alps. In both areas maximum glacier advance occurred between 58,000 and 28,000 BCE (starting roughly during the Toba catastrophe). To a still lesser extent glaciers existed in Africa, for example in the High Atlas, the mountains of Morocco, the Mount Atakor massif in southern Algeria, and several mountains in Ethiopia. In the Southern Hemisphere, an ice cap of several hundred square kilometers was present on the east African mountains in the Kilimanjaro Massif, Mount Kenya and the Ruwenzori Mountains, still bearing remnants of glaciers today.
Glaciation of the Southern Hemisphere was less extensive because of the current configuration of the continents. Ice sheets existed in the Andes (Patagonian Ice Sheet), where six glacier advances between 31,500 and 11,900 BCE in the Chilean Andes have been reported. Antarctica was entirely glaciated, much like today, but the ice sheet left no area uncovered. In mainland Australia only a very small area in the vicinity of Mount Kosciuszko was glaciated, whereas in Tasmania glaciation was more widespread. An ice sheet formed in New Zealand, covering all of the Southern Alps, where at least three glacial advances can be distinguished. Local ice caps existed in Irian Jaya, Indonesia, where remnants of the Pleistocene glaciers are still preserved today in three ice areas.
Named local glaciations
Antarctica glaciation
During the last glacial period Antarctica was blanketed by a massive ice sheet, much as it is today. The ice covered all land areas and extended into the ocean onto the middle and outer continental shelf. According to ice modelling, ice over central East Antarctica was generally thinner than today.
Devensian & Midlandian glaciation (Britain and Ireland)
The name Devensian glaciation is used by British geologists and archaeologists and refers to what is often popularly meant by the latest Ice Age. Irish geologists, geographers, and archaeologists refer to the Midlandian glaciation as its effects in Ireland are largely visible in the Irish Midlands. The name Devensian is derived from the Latin Dēvenses, people living by the Dee (Dēva in Latin), a river on the Welsh border near which deposits from the period are particularly well represented.
The effects of this glaciation can be seen in many geological features of England, Wales, Scotland, and Northern Ireland. Its deposits have been found overlying material from the preceding Ipswichian Stage and lying beneath those from the following Flandrian stage of the Holocene.
Weichselian glaciation (Scandinavia and northern Europe)
Alternative names include Weichsel glaciation or Vistulian glaciation (referring to the Polish river Vistula or its German name Weichsel). Evidence suggests that the ice sheets were at their maximum size for only a short period, between 25,000 and 13,000 BP. Eight interstadials have been recognized in the Weichselian, including the Oerel, Glinde, Moershoofd, Hengelo and Denekamp; however, correlation with isotope stages is still in progress. During the glacial maximum in Scandinavia, only the western parts of Jutland were ice-free, and a large part of what is today the North Sea was dry land connecting Jutland with Britain (see Doggerland). It is also in Denmark that the only Scandinavian ice-age animals older than 13,000 BC are found.
The Baltic Sea, with its unique brackish water, is a result of meltwater from the Weichsel glaciation combining with saltwater from the North Sea when the straits between Sweden and Denmark opened. Initially, when the ice began melting about 10,300 BP, seawater filled the isostatically depressed area, a temporary marine incursion that geologists dub the Yoldia Sea. Then, as post-glacial isostatic rebound lifted the region about 9500 BP, the deepest basin of the Baltic became a freshwater lake, in palaeological contexts referred to as Ancylus Lake, which is identifiable in the freshwater fauna found in sediment cores. The lake was filled by glacial runoff, but as worldwide sea level continued rising, saltwater again breached the sill about 8000 BP, forming a marine Littorina Sea which was followed by another freshwater phase before the present brackish marine system was established. "At its present state of development, the marine life of the Baltic Sea is less than about 4000 years old," Drs. Thulin and Andrushaitis remarked when reviewing these sequences in 2003.
Overlying ice had exerted pressure on the Earth's surface. As a result of melting ice, the land has continued to rise yearly in Scandinavia, mostly in northern Sweden and Finland where the land is rising at a rate of as much as 8–9 mm per year, or 1 meter in 100 years. This is important for archaeologists since a site that was coastal in the Nordic Stone Age now is inland and can be dated by its relative distance from the present shore.
Würm glaciation (Alps)
The term Würm is derived from a river in the Alpine foreland, approximately marking the maximum glacier advance of this particular glacial period. The Alps were where the first systematic scientific research on ice ages was conducted by Louis Agassiz at the beginning of the 19th century. Here the Würm glaciation of the last glacial period was intensively studied. Pollen analysis, the statistical analyses of microfossilized plant pollens found in geological deposits, chronicled the dramatic changes in the European environment during the Würm glaciation. During the height of Würm glaciation, c. 24,000–10,000 BP, most of western and central Europe and Eurasia was open steppe-tundra, while the Alps presented solid ice fields and montane glaciers. Scandinavia and much of Britain were under ice.
During the Würm, the Rhône Glacier covered the whole western Swiss plateau, reaching today's regions of Solothurn and Aarau. In the region of Bern it merged with the Aar glacier. The Rhine Glacier is currently the subject of the most detailed studies. Glaciers of the Reuss and the Limmat advanced sometimes as far as the Jura. Montane and piedmont glaciers formed the land by grinding away virtually all traces of the older Günz and Mindel glaciation, by depositing base moraines and terminal moraines of different retraction phases and loess deposits, and by the pro-glacial rivers' shifting and redepositing gravels. Beneath the surface, they had profound and lasting influence on geothermal heat and the patterns of deep groundwater flow.
Pinedale or Fraser glaciation (Rocky Mountains)
The Pinedale (central Rocky Mountains) or Fraser (Cordilleran ice sheet) glaciation was the last of the major glaciations to appear in the Rocky Mountains in the United States. The Pinedale lasted from approximately 30,000 to 10,000 years ago and was at its greatest extent between 23,500 and 21,000 years ago. This glaciation was somewhat distinct from the main Wisconsin glaciation as it was only loosely related to the giant ice sheets and was instead composed of mountain glaciers, merging into the Cordilleran Ice Sheet. The Cordilleran ice sheet produced features such as glacial Lake Missoula, which would break free from its ice dam causing the massive Missoula floods. USGS Geologists estimate that the cycle of flooding and reformation of the lake lasted an average of 55 years and that the floods occurred approximately 40 times over the 2,000 year period between 15,000 and 13,000 years ago. Glacial lake outburst floods such as these are not uncommon today in Iceland and other places.
Wisconsin glaciation
The Wisconsin Glacial Episode was the last major advance of continental glaciers in the North American Laurentide ice sheet. At the height of glaciation the Bering land bridge potentially permitted migration of mammals, including people, to North America from Siberia.
It radically altered the geography of North America north of the Ohio River. At the height of the Wisconsin Episode glaciation, ice covered most of Canada, the Upper Midwest, and New England, as well as parts of Montana and Washington. On Kelleys Island in Lake Erie or in New York's Central Park, the grooves left by these glaciers can be easily observed. In southwestern Saskatchewan and southeastern Alberta a suture zone between the Laurentide and Cordilleran ice sheets formed the Cypress Hills, which is the northernmost point in North America that remained south of the continental ice sheets.
The Great Lakes are the result of glacial scour and pooling of meltwater at the rim of the receding ice. When the enormous mass of the continental ice sheet retreated, the Great Lakes began gradually moving south due to isostatic rebound of the north shore. Niagara Falls is also a product of the glaciation, as is the course of the Ohio River, which largely supplanted the prior Teays River.
In its retreat, the Wisconsin Episode glaciation left terminal moraines that form Long Island, Block Island, Cape Cod, Nomans Land, Martha's Vineyard, Nantucket, Sable Island and the Oak Ridges Moraine in south central Ontario, Canada. In Wisconsin itself, it left the Kettle Moraine. The drumlins and eskers formed at its melting edge are landmarks of the Lower Connecticut River Valley.
Tahoe, Tenaya, and Tioga, Sierra Nevada
In the Sierra Nevada, there are three named stages of glacial maxima (sometimes incorrectly called ice ages) separated by warmer periods. These glacial maxima are called, from oldest to youngest, Tahoe, Tenaya, and Tioga. The Tahoe reached its maximum extent perhaps about 70,000 years ago. Little is known about the Tenaya. The Tioga was the least severe and last of the Wisconsin Episode. It began about 30,000 years ago, reached its greatest advance 21,000 years ago, and ended about 10,000 years ago.
Greenland glaciation
In Northwest Greenland, ice coverage attained a very early maximum in the last glacial period around 114,000 years ago. After this early maximum, the ice coverage was similar to today until the end of the last glacial period. Towards the end, glaciers readvanced once more before retreating to their present extent. According to ice core data, the Greenland climate was dry during the last glacial period, precipitation reaching perhaps only 20% of today's value.
Mérida glaciation (Venezuelan Andes)
The name Mérida Glaciation is proposed to designate the alpine glaciation which affected the central Venezuelan Andes during the Late Pleistocene. Two main moraine levels have been recognized: one between 2600 and 2700 m, and another between 3000 and 3500 m elevation. The snow line during the last glacial advance was lowered approximately 1200 m below the present snow line (3700 m). The glaciated area in the Cordillera de Mérida was approximately 600 km2; this included the following high areas from southwest to northeast: Páramo de Tamá, Páramo Batallón, Páramo Los Conejos, Páramo Piedras Blancas, and Teta de Niquitao. Approximately 200 km2 of the total glaciated area was in the Sierra Nevada de Mérida, and of that amount, the largest concentration, 50 km2, was in the areas of Pico Bolívar, Pico Humboldt (4,942 m), and Pico Bonpland (4,893 m). Radiocarbon dating indicates that the moraines are older than 10,000 years B.P., and probably older than 13,000 years B.P. The lower moraine level probably corresponds to the main Wisconsin glacial advance. The upper level probably represents the last glacial advance (Late Wisconsin).
Llanquihue glaciation (Southern Andes)
The Llanquihue glaciation takes its name from Llanquihue Lake in southern Chile which is a fan-shaped piedmont glacial lake. On the lake's western shores there are large moraine systems of which the innermost belong to the last glacial period. Llanquihue Lake's varves are a node point in southern Chile's varve geochronology. During the last glacial maximum the Patagonian Ice Sheet extended over the Andes from about 35°S to Tierra del Fuego at 55°S. The western part appears to have been very active, with wet basal conditions, while the eastern part was cold based. Cryogenic features like ice wedges, patterned ground, pingos, rock glaciers, palsas, soil cryoturbation, solifluction deposits developed in unglaciated extra-Andean Patagonia during the Last Glaciation. However, not all these reported features have been verified. The area west of Llanquihue Lake was ice-free during the LGM, and had sparsely distributed vegetation dominated by Nothofagus. Valdivian temperate rainforest was reduced to scattered remnants in the western side of the Andes.
- Current sea level rise
- Glacial history of Minnesota
- Glacial lake outburst flood
- Glacial period
- Ice age
- Last Glacial Maximum
- Timeline of glaciation
- Valparaiso Moraine
- Clayton, Lee; Attig, John W.; Mickelson, David M.; Johnson, Mark D.; Syverson, Kent M. "Glaciation of Wisconsin" (PDF). Dept. Geology, University of Wisconsin.
- Crowley, Thomas J. (1995). "Ice age terrestrial carbon changes revisited". Global Biogeochemical Cycles 9 (3): 377–389. doi:10.1029/95GB01107.
- Clark, D.H. Extent, timing, and climatic significance of latest Pleistocene and Holocene glaciation in the Sierra Nevada, California (PDF 20 Mb) (Ph.D.). Seattle: Washington University.
- Möller, P.; et al. (2006). "Severnaya Zemlya, Arctic Russia: a nucleation area for Kara Sea ice sheets during the Middle to Late Quaternary" (PDF 11.5 Mb). Quaternary Science Reviews 25 (21–22): 2894–2936. doi:10.1016/j.quascirev.2006.02.016.
- Matti Saarnisto: Climate variability during the last interglacial-glacial cycle in NW Eurasia. Abstracts of PAGES – PEPIII: Past Climate Variability Through Europe and Africa, 2001
- Gualtieri, Lyn; et al. (May 2003). "Pleistocene raised marine deposits on Wrangel Island, northeast Siberia and implications for the presence of an East Siberian ice sheet". Quaternary Research 59 (3): 399–410. doi:10.1016/S0033-5894(03)00057-7.
- Ehlers & Gibbard 2004 III, pp. 321–323
- Barr, I.D; Clark, C.D. (2011). "Glaciers and Climate in Pacific Far NE Russia during the Last Glacial Maximum". Journal of Quaternary Science 26 (2): 227. doi:10.1002/jqs.1450.
- Spielhagen, Robert F.; et al. (2004). "Arctic Ocean deep-sea record of northern Eurasian ice sheet history". Quaternary Science Reviews 23 (11–13): 1455–83. doi:10.1016/j.quascirev.2003.12.015.
- Williams, Jr., Richard S.; Ferrigno, Jane G. (1991). "Glaciers of the Middle East and Africa – Glaciers of Turkey" (PDF 2.5 Mb). U.S.Geological Survey Professional Paper 1386-G-1.
- Ferrigno, Jane G. (1991). "Glaciers of the Middle East and Africa – Glaciers of Iran" (PDF 1.25 Mb). U.S. Geological Survey Professional Paper 1386-G-2.
- Owen, Lewis A.; et al. (2002). "A note on the extent of glaciation throughout the Himalaya during the global Last Glacial Maximum". Quaternary Science Reviews 21 (1): 147–157. doi:10.1016/S0277-3791(01)00104-4.
- Kuhle, M., Kuhle, S. (2010): Review on Dating methods: Numerical Dating in the Quaternary of High Asia. In: Journal of Mountain Science (2010) 7: 105-122.
- Chevalier, Marie-Luce; et al. (2011). "Constraints on the late Quaternary glaciations in Tibet from cosmogenic exposure ages of moraine surfaces". Quaternary Science Reviews 30: 528–554. doi:10.1016/j.quascirev.2010.11.005.
- Kuhle, Matthias (2002). "A relief-specific model of the ice age on the basis of uplift-controlled glacier areas in Tibet and the corresponding albedo increase as well as their positive climatological feedback by means of the global radiation geometry". Climate Research 20: 1–7. doi:10.3354/cr020001.
- Ehlers & Gibbard 2004 III, Kuhle, M. "The High Glacial (Last Ice Age and LGM) ice cover in High and Central Asia". Quaternary Glaciations - Extent and Chronology. pp. 175–199. ISBN 9780444534477.
- Lehmkuhl, F. (2003). "Die eiszeitliche Vergletscherung Hochasiens – lokale Vergletscherungen oder übergeordneter Eisschild?". Geographische Rundschau 55 (2): 28–33.
- Zhijiu Cui; et al. (2002). "The Quaternary glaciation of Shesan Mountain in Taiwan and glacial classification in monsoon areas". Quaternary International. 97–98: 147–153. doi:10.1016/S1040-6182(02)00060-5.
- Yugo Ono; et al. (September–October 2005). "Mountain glaciation in Japan and Taiwan at the global Last Glacial Maximum". Quaternary International. 138–139: 79–92. doi:10.1016/j.quaint.2005.02.007.
- Young, James A.T.; Hastenrath, Stefan (1991). "Glaciers of the Middle East and Africa – Glaciers of Africa" (PDF 1.25 Mb). U.S. Geological Survey Professional Paper 1386-G-3.
- Lowell, T.V.; et al. (1995). "Interhemisperic correlation of late Pleistocene glacial events" (PDF 2.3 Mb). Science 269 (5230): 1541–9. doi:10.1126/science.269.5230.1541. PMID 17789444.
- Ollier, C.D. "Australian Landforms and their History". National Mapping Fab. Geoscience Australia.
- Burrows, C. J.; Moar, N. T. (1996). "A mid Otira Glaciation palaeosol and flora from the Castle Hill Basin, Canterbury, New Zealand" (PDF 340 Kb). New Zealand Journal of Botany 34 (4): 539–545. doi:10.1080/0028825X.1996.10410134.
- Allison, Ian; Peterson, James A. (1988). Glaciers of Irian Jaya, Indonesia: Observation and Mapping of the Glaciers Shown on Landsat Images. ISBN 0-607-71457-3. U.S. Geological Survey professional paper 1386.
- Anderson, J. B.; Shipp, S. S.; Lowe, A. L.; Wellner, J. S.; Mosola, A. B. (2002). "The Antarctic Ice Sheet during the Last Glacial Maximum and its subsequent retreat history: a review". Quaternary Science Reviews 21 (1–3): 49–70. doi:10.1016/S0277-3791(01)00083-X.
- Ehlers & Gibbard 2004 III, Ingolfsson, O. Quaternary glacial and climate history of Antarctica (PDF). pp. 3–43.
- Huybrechts, P. (2002). "Sea-level changes at the LGM from ice-dynamic reconstructions of the Greenland and Antarctic ice sheets during the glacial cycles". Quaternary Science Reviews 21 (1–3): 203–231. doi:10.1016/S0277-3791(01)00082-8.
- Behre, Karl-Ernst and van der Plicht, Johannes (1992) "Towards an absolute chronology for the last glacial period in Europe: radiocarbon dates from Oerel, northern Germany" Vegetation History and Archaeobotany 1(2): pp. 111–117 doi: 10.1007/BF00206091
- Davis, Owen K. (2003) "Non-Marine Records: Correlations with the Marine Sequence" Introduction to Quaternary Ecology, University of Arizona web site, doi: 2003618-145735g
- "Brief geologic history". Rocky Mountain National Park.
- "Ice Age Floods". U.S. National Park Service.
- Waitt, Jr., Richard B. (October 1985). "Case for periodic, colossal jökulhlaups from Pleistocene glacial Lake Missoula". Geological Society of America Bulletin 96 (10): 1271–86. doi:10.1130/0016-7606(1985)96<1271:CFPCJF>2.0.CO;2.
- Ehlers & Gibbard 2004 II, p. 57
- Funder, Svend (1990). "Late Quaternary stratigraphy and glaciology in the Thule area, Northwest Greenland". MoG Geoscience 22: 63.
- Johnsen, Sigfus J.; et al. (1992). "A "deep" ice core from East Greenland". MoG Geoscience 29: 22.
- Schubert, Carlos (1998). "Glaciers of Venezuela". US Geological Survey (USGS P 1386-I).
- Schubert, C.; Valastro, S. (1974). "Late Pleistocene glaciation of Páramo de La Culata, north-central Venezuelan Andes" (PDF). Geologische Rundschau 63 (2): 516–538. doi:10.1007/BF01820827.
- Mahaney, William C.; Milner, M.W., Kalm, Volli; Dirsowzky, Randy W.; Hancock, R.G.V.; Beukens, Roelf P. (1 April 2008). "Evidence for a Younger Dryas glacial advance in the Andes of northwestern Venezuela". Geomorphology 96 (1–2): 199–211. doi:10.1016/j.geomorph.2007.08.002.
- Maximiliano, B.; Orlando, G.; Juan, C.; Ciro, S. "Glacial Quaternary geology of las Gonzales basin, páramo los conejos, Venezuelan andes".
- Trombotto Liaudat, Darío (2008). "Geocryology of Southern South America". In Rabassa, J. The Late Cenozoic of Patagonia and Tierra del Fuego. pp. 255–268. ISBN 978-0-444-52954-1.
- Adams, Jonathan. "South America during the last 150,000 years".
- Bowen, D.Q. (1978). Quaternary geology: a stratigraphic framework for multidisciplinary work. Oxford UK: Pergamon Press. ISBN 978-0-08-020409-3.
- Ehlers, J.; Gibbard, P.L., eds. (2004). Quaternary Glaciations: Extent and Chronology 2: Part II North America. Amsterdam: Elsevier. ISBN 0-444-51462-7.
- Ehlers, J.; Gibbard, P.L., eds. (2004). Quaternary Glaciations: Extent and Chronology 3: Part III: South America, Asia, Africa, Australia, Antarctica. Amsterdam: Elsevier. ISBN 0-444-51593-3.
- Gillespie, A.R., Porter, S.C.; Atwater, B.F. (2004). The Quaternary Period in the United States [of America]. Developments in Quaternary Science 1. Amsterdam: Elsevier. ISBN 978-0-444-51471-4.
- Harris, A.G.; Tuttle, E.; Tuttle, S.D. (1997). Geology of National Parks (5th ed.). Iowa: Kendall/Hunt. ISBN 0-7872-5353-7.
- Kuhle, M. (1988). "The Pleistocene Glaciation of Tibet and the Onset of Ice Ages — An Autocycle Hypothesis". GeoJournal 17 (4): 581–596. doi:10.1007/BF00209444.
- Mangerud, J.; Ehlers, J.; Gibbard, P., ed. (2004). Quaternary Glaciations : Extent and Chronology 1: Part I Europe. Amsterdam: Elsevier. ISBN 0-444-51462-7.
- Sibrava, V.; Bowen, D.Q; Richmond, G.M. (1986). "Quaternary Glaciations in the Northern Hemisphere". Quaternary Science Reviews 5: 1–514. doi:10.1016/S0277-3791(86)80002-6.
- Pielou, E.C. (1991). After the Ice Age : The Return of Life to Glaciated North America. Chicago IL: University Of Chicago Press. ISBN 0-226-66812-6.
- Pielou, E. C. After the Ice Age: The Return of Life to Glaciated North America (University of Chicago Press: 1992)
- National Atlas of the USA: Wisconsin Glaciation in North America: Present state of knowledge
- Ray, N.; Adams, J.M. (2001). "A GIS-based Vegetation Map of the World at the Last Glacial Maximum (25,000–15,000 BP)" (PDF). Internet Archaeology 11. | https://en.wikipedia.org/wiki/Last_glacial_period |
4.40625 | 2 Answers
To analyze narrative perspective, you look for and identify the perspective from which the story is being told and the omniscience or limitedness of the information known and conveyed. There are two possible perspectives from which to tell a story: from without the story and from within the story. There are several degrees of knowledge conveyed: only personal knowledge, knowledge of one or more characters, or knowledge of all the characters. Let's elaborate on these.
If a story is told from a perspective that is without (outside of) the story, the narratorial voice is not a character in the story. The narratorial voice can be thought of as the voice of an oral story teller: someone who recounts a story that is devoid of their own personal involvement. If a story is told from within (inside of) the story, the narratorial voice is a character in the story. The narratorial voice can be thought of as belonging to a character who has a share of the action and conflict and resolution that comprises the story. This may be a central character and is often the main character or it may be a minor character who is a participant and observer--or maybe even just an observer.
When the story is told from a narratorial perspective without the story, the narrator may be fully omniscient and know the thoughts, feelings, motives, and emotions of every character and thus be able to reveal anything any character thinks or feels etc. On the other hand, this external type of narrator may be limited in perspective with knowledge of only one or a few of the characters thoughts, feelings etc. Other characters would be reported on based only on their words and actions and visible attitudes--things readily observable to the narrator.
When the story is told from a narratorial perspective from within the story, the narrator is limited to what they themselves feel or think or desire. In other words, the only thoughts, feelings, emotions, or motives they know are their own. They also know what they can observe of other characters' actions, words, or visible attitudes. They can also know and report what other characters confide to them about their own inner feelings, thoughts, or motives.
So to analyze the narratorial perspective, you look for the location within or without of the narrator and you identify the level of knowledge present. Then you can label the perspective as third person (without the story and using he, she, and it) with limited knowledge, which is called limited third person, or as third person with omniscient knowledge, which is called omniscient third person. Or you can label it as first person (within the story and using I, me, my, mine, we, us, etc as well as he and she etc) with limited knowledge, which is called first person.
The narrative perspective determines by whom the story is actually told; most common are
a first person narrator, which means the narrator is also a character in the story who gives his or her view on what is happening. As a consequence, you don't always know how other characters think or feel.
a third person narrator, which means every character is referred to as 'he' or 'she' or 'they'. The narrator is not a character in the story. Because of this, the narrator can give all the information he/she wishes to give.
To analyse the perspective you simply look at how the story is told. If it is a first person narrator, you try to find out who this person is and whether you think this character is reliable or not.
Do you need these questions answered for a particular book or story?
| http://www.enotes.com/homework-help/how-do-analyse-narrative-perspective-whats-271616 |
4.1875 | Tubes of Ice Hold Record of Climate In Past and Future
Published: July 20, 1993
As the great ice sheet began melting some 17,000 years ago, the largest accumulation of water was Lake Agassiz, far larger than Lake Superior and covering much of south-central Canada. As long as ice blocked its drainage east, into the St. Lawrence Valley, Lake Agassiz overflowed into the Mississippi.
But at critical times the ice retreated far enough for the lake to flood eastward, reaching the North Atlantic instead of the Gulf of Mexico. Dr. James Kennett at the University of California in Santa Barbara, said such changes were evident in sediment extracted from the Gulf, as well as in the canyons carved by the sudden eastward outpourings of Lake Agassiz.
This explanation would not, however, apply to the sudden climate changes that, according to the cores, occurred between the last two ice ages.
Flooding of the North Atlantic with fresh water, according to Dr. Wallace Broecker of the Lamont-Doherty Earth Observatory, could interrupt the circulation that brings the Gulf Stream north. The most extreme cooling during the Younger Dryas occurred near the North Atlantic, but is also seen in the Antarctic ice and even, it is reported, in the sediment of the Santa Barbara Channel off California, making it a global event.
Another explanation for the sudden temperature changes seen in the Greenland ice cores is large-scale slippages, or "surges," of continental ice into the sea. When some glaciers reach a critical stage, their flow increases many times. It has been proposed that as the bottom of an ice sheet is warmed by heat from the earth's interior, it becomes slushy, allowing the ice to slip. Cores extracted from sediment under the eastern Atlantic have revealed at least five layers of Canadian pebbles, showing that at certain times, many thousands of years apart, North America shed armies of icebergs that almost reached Europe.
The American drilling reached bottom this month and is still being analyzed. The European drilling was completed last year and, as reported last Thursday in the journal Nature, the full length of the core has been analyzed, showing temperature history for the past 250,000 years.
Ice from an even earlier period, larded with silt and pebbles, has been extracted, but the Europeans expressed concern that layering near the bottom might have been disturbed by motion of the ice over the bedrock. An airborne Danish radar capable of penetrating the ice had shown the rock under both the European and American sites to be relatively flat. Nevertheless, as pointed out by Dr. Mayewski, some movement of the deepest ice seems to have occurred.
It is hoped that Russian drilling into the Antarctic ice at Vostok will provide a far longer record, reaching 500,000 years into the past. The ice at Vostok is much thicker and was formed where annual snowfall is minimal. In the drilling, now suspended for the southern winter, ice 160,000 years old has been reached, but thousands of feet remain to be penetrated.
The ice samples now in hand may be able to answer many mysteries, including disputes about volcanic eruptions. Microscopic fragments of glass from a specific eruption can be identified and can now be dated by counting annual layers in the ice. A long-debated question has been the date of the giant volcanic explosion that wiped out the Minoan city of Thera in the Aegean Sea and may have provided the basis for the Atlantis legend described in Plato's dialogues.
Dating of Eruption
That event has now been dated by the European drillers at 1645 B.C., with an error margin of seven years. Ash from the eruption has been found deep in the sediment of the eastern Mediterranean and in the Nile delta, leading to speculation that the event could be the basis for the biblical plagues of Egypt. A month ago, American and French scientists reported finding it at four sites in the Black Sea.
Its origin in the Thera explosion can be verified by analysis of the chemical and optical properties of the volcanic glass. Dr. Gregory A. Zielinski of the center here, who is analyzing the Greenland cores, said in a telephone interview that because the ice from that period also contained ash from a great Alaskan eruption, he could not be sure of the Thera layer before studying the glass now in hand.
Among other great volcanic eruptions tentatively identified in the ice cores is one whose glass shards have also been found at the South Pole. The fact that this material spread to both polar regions may indicate that the volcano was near the Equator. Similarity of the shards to those from a recent eruption of El Chichon in Mexico has led Dr. Zielinski and his colleagues to propose that the source may have been that volcano.
Photos: Michael C. Morrison, associate director of the Greenland Ice Sheet Project 2, examining a core sample of ice in a storage van containing samples dating back tens of thousands of years. The van is in a parking lot at the University of New Hampshire in Durham, N.H. This bar represents about 20 years of snowfall. (Tad ackman) | http://www.nytimes.com/1993/07/20/science/tubes-of-ice-hold-record-of-climate-in-past-and-future.html?pagewanted=2&src=pm |
4 | Definition of Coronavirus
Coronavirus: One of a group of RNA viruses, so named because they look like a corona or halo when viewed under the electron microscope. The corona or halo is due to an array of surface projections on the viral envelope.
The coronavirus genome is a single strand of RNA 32 kilobases long and is the largest known RNA virus genome. Coronaviruses are also unusual in that they have the highest known frequency of recombination of any positive-strand RNA virus, promiscuously combining genetic information from different sources.
Coronaviruses are ubiquitous. They are the second leading cause of the common cold (after the rhinoviruses). Members of the coronavirus family cause major illnesses among animals, including hepatitis (inflammation of the liver) in mice and gastroenteritis (inflammation of the digestive system) in pigs, and respiratory infections (in birds).
Soon after the start of the outbreak of SARS (severe acute respiratory syndrome) in 2002-2003, a coronavirus came under suspicion as one of the leading suspects. A new coronavirus was, in fact, discovered to be the agent responsible for SARS.
The first coronavirus was isolated in 1937. It was the avian infectious bronchitis virus, which can cause devastating disease in chicken flocks. Since then, related coronaviruses have been found to infect cattle, pigs, horses, turkeys, cats, dogs, rats, and mice. The first human coronavirus was cultured in the 1960s from nasal cavities of people with the common cold. Two human coronaviruses, OC43 and 229E, cause about 30% of common colds. The SARS coronavirus is different and distinct from them and from all other known coronaviruses.
Coronaviruses are very unusual viruses. They have a genome of over 30,000 nucleotides and so are gigantic, as viruses go. They are also unusual in how they replicate themselves. Coronaviruses have a two-step replication mechanism. (Many RNA virus genomes contain a single, large gene that is translated by the cellular machinery of the host to produce all viral proteins.) Coronaviruses can contain up to 10 separate genes. Most ribosomes translate the biggest one of these genes, called replicase, which by itself is twice the size of many other RNA viral genomes. The replicase gene produces a series of enzymes that use the rest of the genome as a template to produce a set of smaller, overlapping messenger RNA molecules, which are then translated into the so-called structural proteins -- the building blocks of new viral particles.
Last Editorial Review: 6/14/2012
| http://www.medicinenet.com/script/main/art.asp?articlekey=22789 |
4.125 | Math and Literature, Grades 6-8
From Quack and Count to Harry Potter, the imaginative ideas in children’s books come to life in math lessons through this unique series. Each resource provides more than 20 classroom-tested lessons that engage children in mathematical problem solving and reasoning. Each lesson features an overview, materials required, and a vignette of how the lesson actually unfolded in a classroom. This book includes a reference chart indicating the mathematical concept each lesson covers, such as number, geometry, patterns, algebra, measurement, data analysis, or probability.
A Drop of Water
The Greedy Triangle
Harry Potter and the Sorcerers Stone
How Big Is a Foot?
How Much Is a Million?
One Inch Tall
Spaghetti and Meatballs for All
Tikki Tikki Tembo
Whats Faster Than a Speeding Cheetah?
The Kings Giraffe
| https://books.google.com/books?id=yuwdL9eukPYC&dq=related:ISBN0821835009&source=gbs_similarbooks_r&hl=en |
4.03125 | Edict of Nantes, French Édit De Nantes , law promulgated at Nantes in Brittany on April 13, 1598, by Henry IV of France, which granted a large measure of religious liberty to his Protestant subjects, the Huguenots. The edict was accompanied by Henry IV’s own conversion from Huguenot Calvinism to Roman Catholicism and brought an end to the violent Wars of Religion that began in 1562. The controversial edict was one of the first decrees of religious tolerance in Europe and granted unheard-of religious rights to the French Protestant minority.
The edict upheld Protestants in freedom of conscience and permitted them to hold public worship in many parts of the kingdom, though not in Paris. It granted them full civil rights, including access to education, and established a special court, the Chambre de l’Édit, composed of both Protestants and Catholics, to deal with disputes arising from the edict. Protestant pastors were to be paid by the state and released from certain obligations. Militarily, the Protestants could keep the places they were still holding in August 1597 as strongholds, or places de sûreté, for eight years, the expenses of garrisoning them being met by the king.
The edict also restored Catholicism in all areas where Catholic practice had been interrupted and made any extension of Protestant worship in France legally impossible. Nevertheless, it was much resented by Pope Clement VIII, by the Roman Catholic clergy in France, and by the parlements. Catholics tended to interpret the edict in its most restrictive sense. The Cardinal de Richelieu, who regarded its political and military clauses as a danger to the state, annulled them by the Peace of Alès in 1629. On October 18, 1685, Louis XIV formally revoked the Edict of Nantes and deprived the French Protestants of all religious and civil liberties. Within a few years, more than 400,000 persecuted Huguenots emigrated—to England, Prussia, Holland, and America—depriving France of its most industrious commercial class. | http://www.britannica.com/event/Edict-of-Nantes |
4.1875 | August 18, 2011
Fossil Sheds Light On Evolution Of Whales’ Mouths
Scientists have identified a critical step in the evolution of filter-feeding whales' enormous mouths.
These whales, otherwise known as baleen whales or mysticetes, have feeding adaptations that are unique among mammals, in that they can filter small marine creatures from huge volumes of water. The whales accomplish this by using their "loose" lower jaw joints, which enable them to produce a vast filter-feeding gape. A new study of an ancient fossilized jawbone has overturned a long-held belief about how baleen whales evolved, finding that nature's largest mouths likely evolved to suck in large prey rather than to engulf plankton-filled water.
The researchers from Australia and the United States found that the fossilized prehistoric jaw differed greatly from the mouths of today's baleen whales.
In modern whales, the lower jaw does not unite at the "chin", but instead consists of a specialized jaw joint that allows each side to rotate. By having two curved lower jawbones that rotate in this manner, modern baleen whales are able to create vast gapes to take in large quantities of water and prey.
The study provides "compelling evidence that these archaic baleen whales could not expand and rotate their lower jaws, which enables living baleen whales to engulf and expel huge volumes of seawater when filter feeding on krill and other tiny animals," lead researcher Dr. Erich Fitzgerald from the Museum Victoria in Melbourne, Australia, told BBC News.
However, it is important to note that the fossilized whale, dubbed Janjucetus hunderi, did have a wide upper jaw, something Dr. Fitzgerald said was the earliest step in the evolution of modern whales' enormous mouths.
Dr. Fitzgerald charted the anatomical features of whales on an "evolutionary tree" - from Janjucetus hunderi to today's blue whale.
"I was able to discover the sequence of jaw evolution from the earliest whales to the modern giants of the sea," he told BBC News reporter Victoria Gill.
The chart showed that "the first step towards the huge mouths of baleen whales may have been increasing the width of the upper jaw [to] suck fish and squid into the mouth one-at-a-time."
"The loose lower jaw joint that enables living baleen whales to greatly expand their mouths when filter feeding evolved later."
This particular whale was so primitive that it had "ordinary" teeth, and had not yet evolved its comb-like baleen.
The fossilized jawbone analyzed in the study was discovered in the 1970s in a coastal town in Victoria, Australia.
"I first saw [it] while visiting a private collection in 2008," said Dr. Fitzgerald.
"I immediately recognized the characteristic shape of the lower jaws of a whale."
Researcher Jeremy Goldbogen from the Cascadia Research Collective in Washington, an expert in the feeding strategies of modern whales, described bulk filter feeding as "one of the most fascinating adaptations in the animal kingdom".
"An important point to note is that bulk filter feeding using [rotating jawbones] does not necessarily mean that suction is not used," he told BBC's Gill.
"A prime example of this are grey whales which are notorious suction filter feeders," he noted.
Dr. Fitzgerald described the whales' mouths as an elegant example of an exaptation, in which a feature evolved to serve a particular function but was later co-opted into a new role.
He believes that its wide jaw helped Janjucetus to suck in large singe prey items, such as squid or fish, and didn't evolve for filter-feeding at all.
"Charles Darwin reflected upon this in The Origin of Species. He wondered how you could go from a whale that has big teeth like Janjucetus does and catching fish and squid one at a time, to something like a modern Blue Whale that feeds en masse," he said in a press release.
"This is the kind of fossil paleontologists dream of finding because it shows a transitional form."
"It's an exciting discovery, but actually not as surprising as you might think," he concluded.
"Evolution by natural selection implies that we should expect to find these kinds of fossils in the rocks."
The findings were published Wednesday in the journal Biology Letters.
Image 1: Illustration of the biggest mouth in history at work. The Blue Whale can expand its mouth to gulp huge volumes of krill-filled water. Credit: Carl Buell/Museum Victoria
Image 2: The fossilised jaws of Janjucetus, clearly showing the immobile symphysis at the tip. Credit: Jon Augier/Museum Victoria
| http://www.redorbit.com/news/science/2097590/fossil_sheds_light_on_evolution_of_whales_mouths/ |
4.09375 | Helping Children and Adolescents Cope with Violence and Disasters: What Community Members Can Do
Each year, children experience violence and disaster and face other traumas. Young people are injured, they see others harmed by violence, they suffer sexual abuse, and they lose loved ones or witness other tragic and shocking events. Community members—teachers, religious leaders, and other adults—can help children overcome these experiences and start the process of recovery.
What is trauma?
“Trauma” is often thought of as physical injuries. Psychological trauma is an emotionally painful, shocking, stressful, and sometimes life-threatening experience. It may or may not involve physical injuries, and can result from witnessing distressing events. Examples include a natural disaster, physical or sexual abuse, and terrorism.
Disasters such as hurricanes, earthquakes, and floods can claim lives, destroy homes or whole communities, and cause serious physical and psychological injuries. Trauma can also be caused by acts of violence. The September 11, 2001 terrorist attack is one example. Mass shootings in schools or communities and physical or sexual assault are other examples. Traumatic events threaten people’s sense of safety.
Reactions (responses) to trauma can be immediate or delayed. Reactions to trauma differ in severity and cover a wide range of behaviors and responses. Children with existing mental health problems, past traumatic experiences, and/or limited family and social supports may be more reactive to trauma. Frequently experienced responses among children after trauma are loss of trust and a fear of the event happening again.
It’s important to remember:
- Children’s reactions to trauma are strongly influenced by adults’ responses to trauma.
- People from different cultures may have their own ways of reacting to trauma.
Commonly experienced responses to trauma among children:
Children age 5 and under may react in a number of ways including:
- Showing signs of fear
- Clinging to parent or caregiver
- Crying or screaming
- Whimpering or trembling
- Moving aimlessly
- Becoming immobile
- Returning to behaviors common to being younger
- Being afraid of the dark.
Children age 6 to 11 may react by:
- Isolating themselves
- Becoming quiet around friends, family, and teachers
- Having nightmares or other sleep problems
- Refusing to go to bed
- Becoming irritable or disruptive
- Having outbursts of anger
- Starting fights
- Being unable to concentrate
- Refusing to go to school
- Complaining of physical problems
- Developing unfounded fears
- Becoming depressed
- Expressing guilt over what happened
- Feeling numb emotionally
- Doing poorly with school and homework
- Loss of interest in fun activities.
Adolescents age 12 to 17 may react by:
- Having flashbacks to the event (flashbacks are the mind reliving the event)
- Having nightmares or other sleep problems
- Avoiding reminders of the event
- Using or abusing drugs, alcohol, or tobacco
- Being disruptive, disrespectful, or behaving destructively
- Having physical complaints
- Feeling isolated or confused
- Being depressed
- Being angry
- Loss of interest in fun activities
- Having suicidal thoughts.
Adolescents may feel guilty. They may feel guilt for not preventing injury or deaths. They also may have thoughts of revenge.
What can community members do following a traumatic event?
Community members play important roles by helping children who experience violence or disaster. They help children cope with trauma and protect them from further trauma exposure.
It is important to remember:
- Children should be allowed to express their feelings and discuss the event, but not be forced.
- Community members should identify and address their own feelings; this may allow them to help others more effectively.
- Community members can also use their buildings and institutions as gathering places to promote support.
- Community members can help people identify resources and emphasize community strengths and resources that sustain hope.
Community members need to be sensitive to:
- Difficult behavior
- Strong emotions
- Different cultural responses.
Community members can help in finding mental health professionals to:
- Counsel children
- Help them see that fears are normal
- Offer play therapy
- Offer art therapy
- Help children develop coping skills, problem-solving skills, and ways to deal with fear.
Finally, community members can hold parent meetings to discuss the event, their child’s response, how help is being given to their child, how parents can help their child, and other available support.
How can adults help children and adolescents who experienced trauma?
Helping children can start immediately, even at the scene of the event. Most children recover within a few weeks of a traumatic experience, while some may need help longer. Grief, a deep emotional response to loss, may take months to resolve. Children may experience grief over the loss of a loved one, teacher, friend, or pet. Grief may be re-experienced or worsened by news reports or the event’s anniversary.
Some children may need help from a mental health professional. Some people may seek other kinds of help from community leaders. Identify children who need support and help them obtain it.
Examples of problematic behaviors could be:
- Refusal to go places that remind them of the event
- Emotional numbness
- Dangerous behavior
- Unexplained anger/rage
- Sleep problems including nightmares.
Adult helpers should:
Pay attention to children
- Listen to them
- Accept/do not argue about their feelings
- Help them cope with the reality of their experiences.
Reduce effects of other stressors, such as
- Frequent moving or changes in place of residence
- Long periods away from family and friends
- Pressures to perform well in school
- Transportation problems
- Fighting within the family
- Being hungry.
- It takes time
- Do not ignore severe reactions
- Pay attention to sudden changes in behaviors, speech, language use, or in strong emotions.
Remind children that adults
- Love them
- Support them
- Will be with them when possible.
Help for all people in the first days and weeks
There are steps adults can take following a disaster that can help them cope, making it easier to provide better care for children. These include creating safe conditions, remaining calm and friendly, and connecting with others. Being sensitive to people under stress and respecting their decisions is important.
When possible, help people:
- Get food
- Get a safe place to live
- Get help from a doctor or nurse if hurt
- Contact loved ones or friends
- Keep children with parents or relatives
- Understand what happened
- Understand what is being done
- Know where to get help
Do not:
- Force people to tell their stories
- Probe for personal details
- Say things like “everything will be OK,” or “at least you survived”
- Say what you think people should feel or how people should have acted
- Say people suffered because they deserved it
- Be negative about available help
- Make promises that you can’t keep such as “you will go home soon.”
More about trauma and stress
Some children will have prolonged mental health problems after a traumatic event. These may include grief, depression, anxiety, and post-traumatic stress disorder (PTSD). Some trauma survivors get better with some support. Others may need prolonged care by a mental health professional. If after a month in a safe environment, children are not able to perform their normal routines or new behavioral or emotional problems develop, then contact a health professional.
Factors influencing how one may respond to trauma include:
- Being directly involved in the trauma, especially as a victim
- Severe and/or prolonged exposure to the event
- Personal history of prior trauma
- Family or personal history of mental illness and severe behavioral problems
- Limited social support; lack of caring family and friends
- On-going life stressors such as moving to a new home, or new school, divorce, job change, or financial troubles.
Some symptoms may require immediate attention. Contact a mental health professional if these symptoms occur:
- Racing heart and sweating
- Being easily startled
- Being emotionally numb
- Being very sad or depressed
- Thoughts or actions to end one’s life.
Access to disaster help and resources:
Centers for Disease Control and Prevention
Federal Emergency Management Agency
National Center for PTSD
The National Child Traumatic Stress Network
Substance Abuse and Mental Health Services Administration
Disaster Distress Helpline
Uniformed Services University of the Health Sciences
Center for the Study of Traumatic Stress
U.S. Department of Justice
Office for Victims of Crime
If you or someone you know is in crisis or thinking of suicide, get help quickly.
- Call your doctor.
- Call 911 for emergency services or go to the nearest emergency room.
- Call the toll-free 24-hour hotline of the National Suicide Prevention Lifeline at 1-800-273-TALK (1-800-273-8255); TTY: 1-800-799-4TTY (4889).
Where can I find more information?
To learn more about trauma among children, visit:
For information on clinical trials, visit:
For more information on conditions that affect mental health, resources, and research, go to MentalHealth.gov at http://www.mentalhealth.gov , the NIMH website at http://www.nimh.nih.gov, or contact us at:
National Institute of Mental Health
Office of Science Policy, Planning, and Communications
Science Writing, Press, and Dissemination Branch
6001 Executive Boulevard
Room 6200, MSC 9663
Bethesda, MD 20892–9663
Phone: 301-443-4513 or 1-866-615-NIMH (6464) toll-free
TTY: 301-443-8431 or 1-866-415-8051 toll-free
This publication is in the public domain and may be reproduced or copied without permission from NIMH. We encourage you to reproduce it and use it in your efforts to improve public health. Citation of the National Institute of Mental Health as a source is appreciated. However, using government materials inappropriately can raise legal or ethical concerns, so we ask you to use these guidelines:
- NIMH does not endorse or recommend any commercial products, processes, or services, and our publications may not be used for advertising or endorsement purposes.
- NIMH does not provide specific medical advice or treatment recommendations or referrals; our materials may not be used in a manner that has the appearance of such information.
- NIMH requests that non-Federal organizations not alter our publications in ways that will jeopardize the integrity and “brand” when using the publication.
- Addition of non-Federal Government logos and website links may not have the appearance of NIMH endorsement of any specific commercial products or services or medical treatments or services.
If you have questions regarding these guidelines and use of NIMH publications, please contact the NIMH Information Resource Center at 1-866-615-6464 or e-mail at [email protected].
U.S. Department of Health and Human Services
National Institutes of Health
National Institute of Mental Health
NIH Publication No. 14–3519
NIH…Turning Discovery Into Health | http://www.nimh.nih.gov/health/publications/helping-children-and-adolescents-cope-with-violence-and-disasters-community-members/index.shtml |
4.09375 | Four hundred years ago this week, a previously unseen star suddenly appeared in the night sky. Discovered on Oct. 9, 1604, it was brighter than all other stars.
The German astronomer Johannes Kepler studied the star for a year, and wrote a book about it titled "De Stella Nova" ("The New Star"). In the 1940s scientists realized the object was an exploded star, and they called it Kepler's supernova.
No supernova in our galaxy has been discovered since the 1604 event.
Now the combined efforts of three powerful space observatories have produced a colorful picture of an expanding cloud of gas and dust that is a remnant of the supernova. The image is expected to help astronomers understand these violent and enigmatic events.
The scene is about 13,000 light-years away.
Last week, NASA announced three bursts of energy in faraway galaxies that might signal stars about to explode. It is how the most massive stars end their lives, and the result is often the formation of a black hole.
Spotting such supernovas in advance would be a boon to astronomers, who do not fully understand the death throes of a dying star. Supernovas create all the elements of the universe -- the stuff of planets, plants and people. The stages of the explosions, modeled on computers, have been described as resembling a lava lamp.
Meanwhile, instead of observing what actually happens, scientists are left to study the remnants of Kepler's supernova and similar leftovers of relatively nearby explosions.
In the new picture, released today, a bubble-shaped shroud of gas and dust 14 light-years wide surrounds the exploded star. The bubble is expanding at 4 million mph (2,000 kilometers per second), astronomers said. It slams into interstellar material, setting up shock waves that agitate molecules and create light of various wavelengths.
The image combines data from the Chandra X-ray Observatory, the infrared Spitzer Space Telescope, and visible light collected by the Hubble Space Telescope.
The infrared and X-ray data -- invisible to the eye -- have been colorized to make the image useful to astronomers.
"Multiwavelength studies are absolutely essential for putting together a complete picture of how supernova remnants evolve," said Ravi Sankrit of Johns Hopkins University.
Visible light is shown as yellow, revealing where the supernova shock wave is slamming into the densest regions of surrounding gas. Bright knots are thick clumps of material caused by instabilities that form behind the shock wave, researchers say. Thin filaments show where the shock wave passes through interstellar material that is more uniformly distributed and of lower density.
Infrared data, in red, shows microscopic dust particles that have been heated by the shock wave. Blue areas are X-rays that come from very hot gas or extremely high-energy particles squeezed into action. Green represents lower-energy X-rays from cooler gas.
"When the analysis is complete, we will be able to answer several important questions about this enigmatic object," said William Blair, also of Johns Hopkins and co-leader of the study with Sankrit.
Kepler's supernova remnant is just one of several under study. One thing is clear: Material that a dying star sends into space takes on a variety of dramatic shapes. And interestingly, our own solar system is thought to reside in a huge cavity, riddled with pockets and tunnels all carved out by exploded stars, long ago.
Here are some questions and answers related to Kepler's supernova, provided by the Space Telescope Science Institute, which operates Hubble for NASA:
How often does a star explode as a supernova?
In a typical galaxy like our Milky Way, a supernova pops off about every 100 years. From our earthly vantagepoint, we cannot see every supernova that occurs in our galaxy because interstellar dust obscures our sight.
The Kepler supernova, which occurred 400 years ago, is the last supernova seen inside the disk of our Milky Way. So, statistically, we are overdue for witnessing another stellar blast. Curiously, the Kepler supernova was seen to explode 30 years after Tycho Brahe witnessed a stellar explosion in our galaxy. The nearest recent supernova seen was 1987A, which astronomers spied in 1987 in our galactic neighbor, the Large Magellanic Cloud.
Why are supernovas important?
All stars make heavy chemical elements like carbon and oxygen through a process called nuclear fusion, where lighter elements are fused together to make heavier elements. Many chemical elements heavier than iron, such as gold and uranium, are produced in the heat and pressure of supernova explosions. These heavy elements enrich the interstellar medium, providing the building blocks for stars and planets, like Earth.
What kind of star produces a supernova?
Two types of stars generate supernovas. The first type, called a type Ia supernova is produced by a star's burned-out core. This stellar relic, called a white dwarf, siphons hydrogen from a companion star, thereby making it 1.4 times more massive than our Sun [called the Chandrasekhar limit]. This excess bulk leads to explosive burning of carbon and other chemical elements that make up the white dwarf.
A star that is more than eight times as massive as our Sun generates the second type, called type II. When the star runs out of nuclear fuel, the core collapses. Then the surrounding layers crash onto the core and bounce back, ripping apart the outer layers.
The supernova was first seen in 1604. Is that when the star exploded?
No, the explosion occurred thousands of years ago, but the light of the explosion only reached Earth in 1604. Why did it take so long for the light to reach us? It has to do with distance. The supernova is about 13,000 light-years away. A light-year is the distance that light can travel in a year -- about 6 trillion miles (10 trillion kilometers).
Because the supernova is 13,000 light-years away, it took 13,000 years for light from the exploded star to reach Earth.
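As a quick arithmetic check of those round numbers (a sketch added here for illustration, not part of the original article), the light-year figure follows directly from the speed of light:

# Rough check of the light-year figures quoted above.
SPEED_OF_LIGHT_KM_S = 299_792.458       # kilometers per second
SECONDS_PER_YEAR = 365.25 * 24 * 3600   # about 3.16e7 seconds

light_year_km = SPEED_OF_LIGHT_KM_S * SECONDS_PER_YEAR    # about 9.46e12 km
light_year_miles = light_year_km / 1.609344                # about 5.88e12 miles

print(f"{light_year_km:.2e} km, {light_year_miles:.2e} miles")
# The "10 trillion kilometers" and "6 trillion miles" quoted in the text are
# these values rounded to a single figure.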
| http://www.space.com/412-supernova-400-year-explosion-imaged.html |
4.21875 | Current limiting is the practice in electrical or electronic circuits of imposing an upper limit on the current that may be delivered to a load with the purpose of protecting the circuit generating or transmitting the current from harmful effects due to a short-circuit or similar problem in the load.
The simplest form of current limiting for mains is a fuse. When the current exceeds the fuse's limit, it blows, thereby disconnecting the load from the source. This method is most commonly used for protecting the household mains. A circuit breaker is another device for mains current limiting.
Compared to circuit breakers, fuses attain faster current limitation by means of arc quenching. Since fuses are passive elements, they are inherently secure. Their drawback is that once blown, they need to be replaced.
Inrush current limiting
An inrush current limiter is a device or group of devices used to limit inrush current. Negative temperature coefficient (NTC) thermistors and resistors are two of the simplest options, with cool-down time and power dissipation being their main drawbacks, respectively. More complex solutions can be used when design constraints make simpler options unfeasible.
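To make the idea concrete, here is a rough worked example (Python; the mains voltage, cold thermistor resistance, and wiring resistance are assumed for illustration and are not from the original article). At switch-on a discharged capacitor bank looks almost like a short, so the peak inrush is set mainly by whatever series resistance is in the path:

# Rough inrush estimate for a capacitive load on 120 V mains (illustrative values only).
import math

V_RMS = 120.0                      # assumed mains RMS voltage
V_PEAK = V_RMS * math.sqrt(2)      # about 170 V at the peak of the cycle

R_COLD_NTC = 10.0                  # assumed cold resistance of the NTC thermistor, ohms
R_WIRING = 0.5                     # assumed residual wiring/ESR resistance, ohms

peak_without_limiter = V_PEAK / R_WIRING          # about 340 A worst case
peak_with_ntc = V_PEAK / (R_WIRING + R_COLD_NTC)  # about 16 A with the cold NTC in series

print(round(peak_without_limiter), round(peak_with_ntc))

Once the thermistor warms up, its resistance falls to a small fraction of an ohm, which is why its main drawback is the cool-down time mentioned above.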
In electronic power circuits
Electronic circuits like regulated DC power supplies and power amplifiers employ, in addition to fuses, active current limiting since a fuse alone may not be able to protect the internal devices of the circuit in an over-current or short-circuit situation. A fuse generally is too slow in operation and the time it takes to blow may well be enough to destroy the devices.
A typical short-circuit/overload protection scheme is shown in the image. The schematic is representative of a simple protection mechanism employed in regulated DC supplies and class-AB power amplifiers‡.
Q1 is the pass or output transistor. Rsens is the load current sensing device. Q2 is the protection transistor which turns on as soon as the voltage across Rsens becomes about 0.65 V. This voltage is determined by the value of Rsens and the load current through it (Iload).
When Q2 turns on, it removes base current from Q1 thereby reducing the collector current of Q1. Neglecting the base currents of Q1 and Q2, the collector current of Q1 is also the load current. Thus, Rsens fixes the maximum current to a value given by 0.65/Rsens, for any given output voltage and load resistance.
For example, if Rsens = 0.33 Ω, the current is limited to about 2 A even if Rload becomes a short (and Vo becomes zero). With the absence of Q2, Q1 would attempt to drive a very large current (limited only by Rsens, and dependent on the output voltage Vo if Rload is not zero) and the result would be greater power dissipation in Q1.
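That relationship between the sense resistor and the limiting current can be written out in a couple of lines (a sketch in Python; the 0.65 V turn-on voltage and the 0.33 Ω example value come from the text, the rest is illustrative):

# Sense resistor vs. limiting current for the two-transistor limiter described above.
# Q2 begins to steal base drive from Q1 once the drop across Rsens reaches about 0.65 V.

V_BE_ON = 0.65  # volts, approximate base-emitter turn-on voltage of Q2

def current_limit(r_sens_ohms):
    """Load current at which Q2 starts turning on and limiting begins."""
    return V_BE_ON / r_sens_ohms

def sense_resistor(i_limit_amps):
    """Sense resistor needed for a desired limiting current."""
    return V_BE_ON / i_limit_amps

print(current_limit(0.33))   # about 1.97 A -- the "about 2 A" limit in the example
print(sense_resistor(1.5))   # about 0.43 ohm, if a 1.5 A limit were wanted instead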
If Rload is zero the dissipation will be much greater (enough to destroy Q1). With Q2 in place, the current is limited and the maximum power dissipation in Q1 is also limited to a safe value (though this is also dependent on Vcc, Rload and current-limited Vo).
Further, this power dissipation will remain as long as the overload exists, which means that the devices must be capable of withstanding it for a substantial period. For example, the pass-transistor in a regulated DC power supply system (corresponding to Q1 in the schematic above) rated for 25 V at 1.5 A (with limiting at 2 A) will normally (i.e. with rated load of 1.5 A) dissipate about 7.5 W for a Vcc of 30 V‡‡ (1).
With current limiting, the dissipation will increase to about 60 W if the output is shorted‡‡ (2). Without current limiting the dissipation would be greater than 300 W‡‡ (3) - so limiting does have a benefit, but it turns out that the pass-transistor must now be capable of dissipating at least 60 W.
In short, an 80-100 W device will be needed (for an expected overload and limiting) where a 10-20 W device (with no chance of shorted load) would have been sufficient. In this technique, beyond the current limit the output voltage will decrease to a value depending on the current limit and load resistance.
‡ – For class-AB stages, the circuit will be mirrored vertically and complementary devices will be used for Q1 & Q2.
‡‡ – The following conditions are considered for determining the power dissipation in Q1, with Vo = 25 V, Iload = 1.5 A (limit at 2 A), Rsens = 0.33 Ω (for limiting at 2 A) and Vcc = 30 V; a short numerical check of the three cases is sketched after the list —
- Normal operation: Vo = 25 V at a load current of 1.5 A. So Q1 dissipates a power of (30 - 25) V * 1.5 A = 7.5 W. The transistor used must be a 10-20 W device to account for ambient temperature (i.e., derated) and must be mounted on a heat-sink.
- Output shorted, with limiting at 2A: The dissipation is given by (30 - 0.65) V * 2 A = 58.7 W. The 0.65 V is the drop across Rsens. In practice, if the power supply Vcc is not able to provide the maximum short-circuit current it will collapse thereby reducing dissipation in Q1. However this is dependent on how "stiff" the supply is. A stiffer supply will sustain the voltage for a heavier current draw before collapsing. Further, the transistor used must be a 80-100 W device to account for ambient temperature (i.e., derated) and must be mounted on a heat-sink.
- Output shorted, and no limiting: A shorted load will mean that only Rsens is present as the load. With this, the circuit will attempt to put 25 V across Rsens (0.33 Ω) - here the output voltage has to be measured at the emitter of Q1 since Q1 is connected as an emitter-follower and the lower end of Rsens is effectively grounded due to the short. Thus the load current (and collector current of Q1) becomes nearly 76 A, and the dissipation in Q1 becomes (30 - 25) V * 76 A = 380 W. This is a very large power to dissipate, since in normal circumstances Q1 will only be required to dissipate about 7.5 W (60 W at worst with limiting), and even a 100 W transistor will not withstand a 380 W dissipation. Without Rsens (i.e., Q1 emitter is directly connected to the load) the situation is even worse — Q1 becomes a dead short across 30 V and will draw current limited only by its internal resistance. In practice, the dissipation will be less because the supply (Vcc) will collapse under such a condition. However the dissipation will still be enough to destroy Q1.
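The arithmetic in the three cases above can be checked with a short sketch (Python; the component values are the ones from the text, and the 30 V supply is treated as perfectly stiff, which the article notes is the worst case for Q1):

# Worked check of the three overload cases for the pass transistor Q1.
VCC = 30.0        # supply voltage, volts
V_SENSE = 0.65    # drop across Rsens when the limiter is active, volts
R_SENS = 0.33     # sense resistor, ohms

# 1. Normal operation: Vo = 25 V at the rated 1.5 A load current.
p_normal = (VCC - 25.0) * 1.5                     # 7.5 W

# 2. Output shorted, limiting active: Q1 carries 2 A with nearly the whole supply across it.
p_short_limited = (VCC - V_SENSE) * 2.0           # about 58.7 W

# 3. Output shorted, no limiting: only Rsens restrains the current.
i_unlimited = 25.0 / R_SENS                       # about 76 A
p_short_unlimited = (VCC - 25.0) * i_unlimited    # about 380 W

print(p_normal, round(p_short_limited, 1), round(p_short_unlimited))

The jump from roughly 7.5 W in normal use to about 60 W under a limited short is why the text calls for an 80-100 W device once a sustained overload has to be survived.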
Single power-supply circuits
An issue with the previous circuit is that Q1 will not be saturated unless its base is biased about 0.5 volts above Vcc.
The circuits at right and left operate more efficiently from a single (Vcc) supply.
In both circuits, R1 allows Q1 to turn on and pass voltage and current to the load. When the current through R_sense exceeds the design limit, Q2 begins to turn on, which in turn begins to turn off Q1, thus limiting the load current. The optional component R2 protects Q2 in the event of a short-circuited load. When Vcc is at least a few volts, a MOSFET can be used for Q1 for lower dropout-voltage. Due to its simplicity, this circuit is sometimes used as a current source for high-power LEDs.
Slew rate control
Many electronics designers put a small resistor on IC output pins. This slows the edge rate which improves electromagnetic compatibility. Some devices have this "slew rate limiting" output resistor built in; some devices have programmable slew rate limiting. This provides overall slew rate control. | https://en.wikipedia.org/wiki/Current_limiting |
4.125 | What is sepsis?
Sepsis is a serious medical condition that can result in organ damage or death. It happens when the body’s immune system has a severe response to an infection. Sepsis is a medical emergency. It needs to be treated right away.
Bacteria, viruses, and fungi can invade your body and cause disease. When your body senses one of these, the immune system responds. Your body releases certain chemicals into the blood that can help fight infection.
In some cases, the body has an abnormal and severe response to infection. This can cause inflammation around the body and damage your body’s cells. Blood clots may start to form all over the body. Some blood vessels may start to leak. Blood flow and blood pressure may start to drop. This harms the body’s organs by stopping oxygen and nutrients from reaching them. If this process isn’t stopped, organs in the body can stop working. This can lead to death.
Sepsis can be called different things according to how severe it is. Systemic inflammatory response syndrome (SIRS) is the mildest form. Sepsis, severe sepsis, and septic shock are more severe forms.
Sepsis is a common cause of death in hospital intensive care units. It can affect people of all ages, but children and older adults are at highest risk.
What causes sepsis?
Sepsis never happens on its own. It always starts with an infection somewhere in your body, such as:
- Lung infection
- Urinary tract infection
- Skin infection
- Abdominal infection (like from appendicitis)
Bacteria often cause these infections. Viruses, parasites, and fungi can also cause them and lead to sepsis. In some cases, the bacteria enter the body through a medical device such as a blood vessel catheter. An infection that spreads around the body through the bloodstream is more likely to cause sepsis. An infection in just one part of the body is less likely to lead to sepsis.
Sepsis is sometimes called blood poisoning, but this is misleading. Sepsis isn’t caused by poison.
Who is at risk for sepsis?
Some health problems that impair your ability to fight infection can raise your risk for sepsis, such as:
- Liver disease
- Severe burns
- Conditions that affect the immune system
Careful treatment of these health conditions may help reduce the risk of sepsis.
What are the symptoms of sepsis?
Symptoms and signs of sepsis can include:
- Fever or abnormally low temperature
- Trouble breathing
- Rapid heart rate and breathing rate
- Low blood pressure
- Signs of reduced blood flow to one or more organs
- Less urine
The symptoms may vary depending on the severity of the sepsis. These symptoms may be mild at first and then quickly get worse.
How is sepsis diagnosed?
To diagnose sepsis, a doctor will ask about your medical history and your symptoms. He or she will do a physical exam. Some of the symptoms of early sepsis are the same as other medical conditions. This can make sepsis hard to diagnose in its early stages. An exam of the heart, lungs, and abdomen are needed to help diagnose sepsis.
You may also have tests, such as:
- Urine tests to look for signs of infection in your urine, and check kidney function
- Blood tests to looks for signs of infection in your blood
- Imaging tests such as a chest X-ray, computed tomography (CT) scan, or other tests to look for the site of infection
A doctor will often diagnose SIRS in a person with certain signs. These include an abnormal body temperature, rapid heart and breathing rate, and abnormal white count but no known source of infection. A doctor can make an official diagnosis of sepsis when these symptoms are present and there is a clear source of infection. These problems plus low blood pressure or low blood flow to one or more organs is severe sepsis. And septic shock is when severe sepsis continues even with very active treatment.
How is sepsis treated?
Treatment is often done in a hospital’s intensive care unit (ICU). This is because sepsis needs very active care. Vital signs such as heart rate will be constantly watched. Blood and urine tests will be done often. Your condition will be watched and your treatment adjusted as often as needed.
The source of the sepsis must be treated. To do this, your doctor will likely use medications. The first treatment may be an antibiotic that works on many types of bacteria. When the exact type of bacteria is known, a different medication may be given. Pockets of infection may need to be drained. These are called abscesses. In some cases, an infected part of the body may need to be removed with surgery.
A person with sepsis will also need other types of treatments to help support the body, such as:
- Extra oxygen, to keep up normal oxygen levels
- Intravenous fluids, to help bring blood pressure and blood flow to organs back to normal
- A breathing tube and a ventilator, if the person has trouble breathing
- Dialysis, in case of kidney failure
- Medications to raise the blood pressure
- Other treatments to prevent problems such as deep vein thrombosis and pressure ulcers
Most people with mild sepsis do recover. But even with intense treatment, some people die from sepsis. Up to half of all people with severe sepsis will die from it.
What are the possible complications of sepsis?
Many people survive sepsis without any lasting problems. Other people may have serious problems from sepsis, such as organ damage. Some of possible complications of sepsis include:
- Kidney failure
- Tissue death (gangrene) of fingers or toes that may require amputation
- Permanent lung damage from acute respiratory distress syndrome
- Permanent brain damage, which can cause memory problems or more severe symptoms
- Later impairment of your immune system, which can increase the risk of future infections
- Damage to the heart valves (endocarditis) which can lead to heart failure
When should I call the doctor?
Call or see a doctor right away if you or someone else has symptoms of sepsis. Early diagnosis and treatment can help improve the chances of a good recovery.
Sepsis is a serious medical condition that can result in organ damage or death. It happens when the body’s immune system has a severe response to an infection.
- Sepsis is a medical emergency. It needs to be treated right away.
- Possible signs and symptoms of sepsis include fever, confusion, trouble breathing, rapid heart rate, and very low blood pressure.
- The infection that caused sepsis will be treated first. Health care providers will also treat the symptoms of sepsis with medications, fluids, and breathing support.
- Sepsis can cause serious complications. These include kidney failure, gangrene, and death.
Tips to help you get the most from a visit to your health care provider:
- Before your visit, write down questions you want answered.
- Bring someone with you to help you ask questions and remember what your provider tells you.
- At the visit, write down the names of new medicines, treatments, or tests, and any new instructions your provider gives you.
- If you have a follow-up appointment, write down the date, time, and purpose for that visit.
- Know how you can contact your provider if you have questions.
Online Medical Reviewer:
Finke, Amy, RN, BSN
© 2000-2015 The StayWell Company, LLC. 780 Township Line Road, Yardley, PA 19067. All rights reserved. This information is not intended as a substitute for professional medical care. Always follow your healthcare professional's instructions. | http://healthlibrary.brighamandwomens.org/Library/DiseasesConditions/Pediatric/90,P02410 |
4.125 |
The use of uranium as a nuclear fuel and in weapons increases the risk that people may come into contact with it, and the storage of radioactive uranium waste poses an additional environmental risk. However, radioactivity is not the only problem related to contact with uranium; the toxicity of this metal is generally more dangerous to human health. Researchers are still looking for simple, effective methods for the sensitive detection and effective treatment of uranium poisoning. Researchers led by Chuan He at the University of Chicago and Argonne National Laboratory (USA) have now developed a protein that binds to uranium selectively and tightly. As reported in the journal Angewandte Chemie, it is based on a bacterial nickel-binding protein.
In oxygen-containing, aqueous environments, uranium normally exists in the form of the uranyl cation (UO₂²⁺), a linear ion made of one uranium atom and two terminal oxygen atoms. The uranyl ion also likes to form coordination complexes. It prefers to surround itself with up to six ligands arranged in a plane around the ion's "equator".
As a template, the scientists used the protein NikR (nickel-responsive repressor) from E. coli, a regulator that reacts to nickel ions. When NikR is loaded with nickel ions, it binds to a special DNA sequence. This represses transcription of the neighboring genes, which code for proteins involved in nickel uptake. If no nickel is present in the bacteria, NikR does not bind to the DNA.
The nickel ion is located in a binding cavity in which it is surrounded by a square-planar arrangement of binding groups. By using several mutation steps, the researchers generated a new protein that can bind uranium instead of nickel. Only three amino acids had to be changed. In the specially designed cavity, the uranyl group has six binding partners that surround it equatorially. In addition, there are spaces for the two terminal oxygen atoms of uranyl.
This NikR mutant binds to DNA only in the presence of uranyl, not in the presence of nickel or other metal ions. This confirms its selectivity for uranyl and may make it useful for uranyl detection and for the bioremediation of nuclear waste. It also represents a first step towards developing protein- or peptide-based agents for the treatment of uranium poisoning.
| http://www.bio-medicine.org/biology-news-1/A-pocketful-of-uranium-7056-1/ |
4.09375 | Contributor: C. Peter Chen
The last of the major Allied conferences of WW2 was held at Potsdam, code named Terminal. Since Potsdam lies immediately west of Berlin, President Truman was given a chance to tour the ravaged German capital while he waited for Stalin's arrival (the Russian leader was a day late). The meeting itself was held at the undamaged Cecilienhof Palace. Stalin's late arrival gave Truman's scientists one extra day to work on the Manhattan Project, and that one extra day seemed to be just enough for Oppenheimer's team to give Truman the result he wanted: on the same day that the leaders met at Potsdam, a successful atomic detonation was achieved in the New Mexico desert near Alamogordo under the code name Trinity. By this point, the Americans had learned that Japan wished to end the war, partly through Japan's unrealistic pleas for Moscow to mediate a peace settlement between Japan and the Allied powers. However, the Americans also understood that, if the war could not be stopped, many in Japan were prepared to fight to the bitter end, and the losses on both sides would be tremendous should landings on the home islands become necessary. Understanding this about Japan, Truman made sure at Potsdam that Stalin would hold true to his promise that Russia would declare war on Japan three months after the surrender of Germany, despite the news of the successful atomic test; Truman was keeping his options open.
On 26 July, agreements were reached:
- Reversion of all German annexations in Europe after 1937 and separation of Austria from Germany.
- Statement of aims of the occupation of Germany by the Allies: demilitarization, denazification, democratization and decartelization.
- The Potsdam Agreement, which called for the division of Germany and Austria into four occupation zones (agreed on earlier at the Yalta Conference), and the similar division of Berlin and Vienna into four zones.
- Agreement on prosecution of Nazi war criminals.
- The establishment of the Oder-Neisse line as the provisional border between Germany and Poland.
- The expulsion of the German populations remaining outside the borders of Germany.
- Agreement on war reparations. The Allies estimated their losses and damages at 200 billion dollars. At the insistence of the West, Germany was obliged to pay only 20 billion, in the form of German property, current industrial products, and labor (however, the Cold War prevented full payment).
The Potsdam Declaration was also written (by Truman and Churchill, with input from Chiang Kaishek) and was broadcast to the Japanese people by radio and dropped in pamphlet form, both in the Japanese language. It promised "prompt and utter destruction" unless Japan forever renounced militarism, gave up its war criminals, returned all territories conquered since 1895, and surrendered unconditionally.
Prime Minister Admiral Suzuki, upon hearing the declaration, was purposefully ambiguous in his response while the cabinet debated; Suzuki was buying time for himself before writing up his official response to Truman, Churchill, and Chiang. On the American side, however, this delay was completely misinterpreted as arrogance, as if Japan meant to continue the war by ignoring the declaration. Historian Dan van der Vat commented: "Seldom can a misconstrued adverbial nuance have had such devastating consequences".
Source: The Pacific Campaign.
Potsdam Conference Timeline
|17 Jul 1945||At the Potsdam Conference in Germany, top Allied leadership set up a Control Council to administer occupied Germany.|
|18 Jul 1945||In Germany, the second plenary session of the Potsdam Conference was conducted.|
|20 Jul 1945||At Potsdam, Germany, Harry Truman declared that the Allies would demand no territory upon victory.|
|26 Jul 1945||The Potsdam Ultimatum was issued, threatening Japan with "utter destruction" if it did not surrender unconditionally.|
| http://ww2db.com/battle_spec.php?battle_id=81 |
4.40625 | El dia de los muertos Teacher Resources
Find El dia de los muertos educational ideas and activities
Dia de los Muertos Educator Resource Guide
What are the origins of el Dia de los Muertos, and how is this tradition observed in contemporary celebrations? With a variety of lesson plans and suggested hands-on activities, here is an excellent resource to reference as you prepare...
4th - 7th Social Studies & History CCSS: Adaptable
Dia de los Muertos Sugar Skulls
Students research information about the Day of the Dead (Dia de los Muertos), a major celebration in Mexican culture, and compare it to similar holidays in other cultures. They discover various folk arts and festive traditions associated...
1st - 12th Social Studies & History
Dramatic Day of the Dead Designs
Young scholars research customs and activities associated with the Mexican celebration of Dia de los Muertos (Day of the Dead). Students then analyze their favorite aspects of the holiday and represent them in drawings with bilingual...
1st - 6th Social Studies & History
What is el Dia de los Muertos?
Students explore the Mexican celebration el Dia de los Muertos. In this Mexican celebration lesson, students discuss ways people in the US honor the dead. Students compare and contrast Mexican holidays and American holidays. Students...
4th - 6th Social Studies & History
Dia de los Muertos: Celebrating and Remembering
Help scholars understand the history, geography, traditions, and art of Dia de los Muertos, the Day of the Dead. Find background information for your reference as well as a detailed cross-curricular lesson plan. Learners compare...
K - 2nd Social Studies & History
Claycrete Calaveras - Dia de los Muertos
Students create skeletons to celebrate the Day of the Dead. In this visual arts lesson, students explore the importance of the Day of the Dead celebrations in the Mexican culture. They create skeletons and decorate them with paint,...
3rd - 7th Visual & Performing Arts
Día de los Muertos Teacher Packet
Learn about Dia de los Muertos, the Day of the Dead, through authentic vocabulary activities, creating Papel Picado, creating Calavera masks, and making skeleton puppets. Designed to be adaptable to many grade levels, you'll find...
K - 12th Social Studies & History
Day of the Dead ( Dia de los Muertos)
Students examine information about a previously chosen aspect of Day of the Dead and they evaluate a webpage analysis. They create a project about their aspect and prepare a presentation for the class. They complete a self-evaluation...
8th - 10th Social Studies & History | http://www.lessonplanet.com/lesson-plans/el-dia-de-los-muertos |
4.25 | Relative atomic mass
A relative atomic mass (also called atomic weight; symbol: Ar) is a measure of how heavy atoms are. It is the ratio of the average mass per atom of an element from a given sample to 1/12 the mass of a carbon-12 atom. In other words, a relative atomic mass tells you the number of times an average atom of an element from a given sample is heavier than one-twelfth of an atom of carbon-12. The word relative in relative atomic mass refers to this scaling relative to carbon-12. Relative atomic mass values are ratios expressed as dimensionless numbers, numbers with no units. Relative atomic mass is the same as atomic weight, which is the older term.
The number of protons an atom has defines what element it is. However, most elements in nature consist of atoms with different numbers of neutrons. An atom of an element with a certain number of neutrons is called an isotope. For example, the element thallium has two common isotopes: thallium-203 and thallium-205. Both isotopes of thallium have 81 protons, but thallium-205 has 124 neutrons, 2 more than thallium-203, which has 122. Each isotope has its own mass, called its isotopic mass. A relative isotopic mass is the mass of an isotope relative to 1/12 the mass of a carbon-12 atom. The relative isotopic mass of an isotope is roughly the same as its mass number, which is the number of protons and neutrons in the nucleus. Like relative atomic mass values, relative isotopic mass values are ratios with no units.
We can find the relative atomic mass of a sample of an element by working out the abundance-weighted mean of the relative isotopic masses. For example, if a sample of thallium is made up of 30% thallium-203 and 70% thallium-205, its relative atomic mass is approximately (0.30 × 203) + (0.70 × 205) = 204.4, taking the mass numbers as approximate relative isotopic masses.
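To make the worked example above concrete, here is a minimal sketch of the abundance-weighted mean in Python (not part of the original article); the helper name and data layout are just for illustration, and the thallium figures are the round numbers used above, with mass numbers standing in for the relative isotopic masses.

```python
def relative_atomic_mass(isotopes):
    """Abundance-weighted mean of relative isotopic masses.

    `isotopes` is a list of (relative_isotopic_mass, fractional_abundance)
    pairs whose abundances sum to 1. The result is a dimensionless ratio.
    """
    return sum(mass * abundance for mass, abundance in isotopes)

# Thallium example from the text: 30% thallium-203 and 70% thallium-205,
# using the mass numbers 203 and 205 as approximate relative isotopic masses.
print(relative_atomic_mass([(203, 0.30), (205, 0.70)]))  # 204.4

# A sample with slightly different isotope proportions gives a slightly
# different relative atomic mass, as the next paragraph explains.
print(relative_atomic_mass([(203, 0.32), (205, 0.68)]))  # 204.36
```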
Two samples of an element that consists of more than one isotope, collected from two widely spaced sources on Earth, are expected to have slightly different relative atomic masses. This is because the proportions of each isotope are slightly different at different locations.
A standard atomic weight is the mean value of relative atomic masses of a number of normal samples of the element. Standard atomic weight values are published at regular intervals by the Commission on Isotopic Abundances and Atomic Weights of the International Union of Pure and Applied Chemistry (IUPAC). The standard atomic weight for each element is on the periodic table.
Often, the term relative atomic mass is used to mean standard atomic weight. This is not quite correct, because relative atomic mass is a less specific term that refers to individual samples. Individual samples of an element could have a relative atomic mass different to the standard atomic weight for the element. For example, a sample from another planet could have a relative atomic mass very different to the standard Earth-based value.
Relative atomic mass is not the same as:
- atomic mass (symbol: ma), which is the mass of a single atom, commonly expressed in unified atomic mass units
- mass number (symbol: A), which is the sum of the number of protons and the number of neutrons in the nucleus of an atom
- atomic number (symbol: Z), which is the number of protons in the nucleus of an atom.
References
- "Atomic weight: The Name, its History, Definition, and Units". Commission on Isotopic Abundances and Atomic Weights of the International Union of Pure and Applied Chemistry. https://web.archive.org/web/20131215231731/http://www.ciaaw.org/atomic_weights2.htm. Retrieved 2016-01-07.
- Daintith, John, ed. (2008). A Dictionary of Chemistry (Sixth ed.). Oxford University Press. p. 457. ISBN 978-0-19-920463-2.
- Salters Advanced Chemistry: Chemical Ideas (Third ed.). Heinemann. 2008. ISBN 978-0-435631-49-9.
- Moore, John T. (2010). Chemistry Essentials For Dummies. Wiley. p. 44. ISBN 978-0-470-61836-3. | https://simple.wikipedia.org/wiki/Relative_atomic_mass |
4.46875 | Long Vowel Teacher Resources
Find Long Vowel educational ideas and activities
Phonics Instruction: Long Vowel Sound, Silent E
Students explore language arts by participating in a class word identification game. In this phonics lesson, students read several words in class and identify the differences in sound between short- and long-vowel words. Students complete a...
2nd - 4th Visual & Performing Arts
We Flew With a Baboon
More vowels? Elementary schoolers recognize how vowel patterns change a short vowel sound into a long vowel sound. With an emphasis on the /oo/ that makes the long U sound, kids identify the phoneme and letter combination through...
1st - 2nd English Language Arts CCSS: Adaptable
Practicing Short and Long Vowel Sounds
What are the differences between short and long vowel sounds? The class participates in a teacher-led lesson in which they add letters to words as each evolves from a three-letter, short-vowel word into a longer, long-vowel word. They...
2nd English Language Arts CCSS: Adaptable | http://www.lessonplanet.com/lesson-plans/long-vowel |
4 | The largest single-dish radio telescope in the world. It came into operation in 1963 and is operated by Cornell University for the National Science Foundation. Occupying a large karst sinkhole in the hills south of Arecibo in Puerto Rico, its area of almost 9 hectares is greater than that of all other such instruments in the world combined. The surface of Arecibo's 305-meter (1,000-foot) fixed, spherical dish is made from almost 40,000 perforated aluminum panels, each measuring 1 meter by 2 meters (3 feet by 6 feet), supported by a network of steel cables strung across the underlying depression. Suspended 150 meters (450 feet) above the reflector is a 900-ton platform which houses the receiving equipment. Although the telescope is not steerable, some directionality is obtained by moving the feed antenna (upgraded in 1996). The immense size and accurate configuration of the dish allow extremely faint signals to be detected. For this reason, it has been used extensively in SETI investigations and in the first attempt at CETI (see Arecibo Message). It also featured in the film Contact.

The giant radio telescope dish at the Arecibo Observatory in Puerto Rico is nestled in a natural sinkhole. The Search for Extraterrestrial Intelligence (SETI) has been using the telescope to search for radio signals from space since 1992. The Arecibo radio telescope is located about 10 km south of the town of Arecibo, which lies on the north coast of the island. It is operated by Cornell University under cooperative agreement with the National Science Foundation. Arecibo is one of the most famous such telescopes in the world, distinguished by its enormous size; the main collecting dish is 305 meters in diameter, constructed inside the depression left by a karst sinkhole. It is the largest curved focusing dish on Earth, giving it the largest photon-gathering capacity.

Arecibo's dish surface is made of 38,778 perforated aluminum panels, each measuring about 3 feet by 6 feet, supported by a mesh of steel cables. It is a spherical reflector (as opposed to a parabolic reflector). This form is due to the method used to aim the telescope: Arecibo's dish is fixed in place, but the receiver at its focal point is repositioned to intercept signals reflected from different directions by the spherical dish surface. The receiver is located on a 900-ton platform which is suspended 450 feet in the air above the dish by 18 cables running from three reinforced concrete towers, one of which is 365 feet high and the other two of which are 265 feet high (the tops of the three towers are at the same elevation). The platform has a 93-meter-long rotating bow-shaped track called the azimuth arm, on which receiving antennae and secondary and tertiary reflectors are mounted. This allows the telescope to observe any region of the sky within a forty-degree cone of visibility about the local zenith (between -1 and 38 degrees of declination). Puerto Rico's location near the equator allows Arecibo to view all of the planets in the solar system.

The construction of Arecibo was initiated by Professor William E. Gordon of Cornell University, who originally intended to use it for the study of Earth's ionosphere. Originally, a fixed parabolic reflector was envisioned, pointing in a fixed direction with a 500-foot tower to hold equipment at the focus.
This design would have had very limited use for other potential areas of research, such as planetology and radio astronomy, which require the ability to point at different positions in the sky and to track those positions for an extended period as Earth rotates. Ward Low of ARPA pointed out this flaw and put Gordon in touch with the Air Force Cambridge Research Laboratory (AFCRL) in Boston, Massachusetts, where one group headed by Phil Blacksmith was working on spherical reflectors and another group was studying the propagation of radio waves in and through the upper atmosphere. Cornell University proposed the project to ARPA in the summer of 1958, and a contract was signed between the AFCRL and the university in November 1959. Construction began in the summer of 1960, with the official opening taking place on November 1, 1963.

Arecibo has been instrumental in many significant scientific discoveries. On April 7, 1964, shortly after its inauguration, Gordon H. Pettengill's team used Arecibo to determine that the rotation rate of Mercury was not 88 days, as previously thought, but only 59 days. Arecibo also had military intelligence uses, for example locating Soviet radar installations by detecting their signals bouncing off the Moon. Arecibo has undergone several significant upgrades over its lifespan. | http://structurae.net/structures/arecibo-radio-telescope |
4.125 | One of the smallest dinosaur skulls ever discovered has been identified and described by a team of scientists from London, Cambridge and Chicago. The skull would have been only 45 millimeters (less than two inches) in length. It belonged to a very young Heterodontosaurus, an early dinosaur. This juvenile weighed about 200 grams, less than two sticks of butter.
In the Fall issue of the Journal of Vertebrate Paleontology, the researchers describe important findings from this skull that suggest how and when the ornithischians, the large group of herbivorous dinosaurs that includes Heterodontosaurus, made the transition from eating meat to eating plants.
"It's likely that all dinosaurs evolved from carnivorous ancestors," said study co-author Laura Porro, a post-doctoral student at the University of Chicago. "Since heterodontosaurs are among the earliest dinosaurs adapted to eating plants, they may represent a transition phase between meat-eating ancestors and more sophisticated, fully-herbivorous descendents."
The teeth suggest Heterodontosaurus practiced occasional omnivory: the canines were used for defense or for adding small animals such as insects to a diet composed mainly of plants. Credit: Natural History Museum
"This juvenile skull," she added, "indicates that these dinosaurs were still in the midst of that transition."
Heterodontosaurus lived in what is now South Africa during the Early Jurassic period (about 190 million years ago). Adult Heterodontosaurs were turkey-sized animals, reaching just over three feet in length and weighing around five to six pounds.
Because their fossils are very rare, Heterodontosaurus and its relatives (the heterodontosaurs) are poorly understood compared to later and larger groups of dinosaurs.
"There were only two known fossils of Heterodontosaurus, both in South Africa and both adults," said Porro, who is completing her doctoral dissertation on feeding in Heterodontosaurus under the supervision of David Norman, researcher at the University of Cambridge and co-author of the study. "There were rumors of a juvenile heterodontosaur skull in the collection of the South African Museum," she said, "but no one had ever described it."
Study co-author Laura Porro, a post-doctoral student at the University of Chicago, in the lab with a model skull from a full sized Heterodontosaurus. Credit: University of Chicago Medical Center
As part of her research, Porro visited the Iziko South African Museum, Cape Town, to examine the adult fossils. When she was there, she got permission to "poke around" in the Museum's collections. While going through drawers of material found during excavations in the 1960s, she found two more heterodontosaur fossils, including the partial juvenile skull.
"I didn't recognize it as a dinosaur at first," she said, "but when I turned it over and saw the eye looking straight at me, I knew exactly what it was."
"This discovery is important because for the first time we can examine how Heterodontosaurus changed as it grew," said the study's lead author, Richard Butler of the Natural History Museum, in London. "The juvenile Heterodontosaurus had relatively large eyes and a short snout when compared to an adult," he said, "similar to the differences we see between puppies and fully-grown dogs."
A specialist on the mechanics of feeding, Porro was particularly interested in the new fossil's teeth. Heterodontosaurs, whose name means "different-toothed lizards," have an unusual combination of teeth, with large fang-like canines at the front of their jaws and worn, molar-like grinding teeth at the back. In contrast, most reptiles have teeth which change little in shape along the length of the jaw.
This bizarre suite of teeth has led to debate over what heterodontosaurs ate. Some scientists think heterodontosaurs were omnivores who used their differently-shaped teeth to eat both plants and small animals. Others contend that heterodontosaurs were herbivores who ate only plants and that the canines were sexually dimorphic--present only in males, as in living warthogs. In that scenario, the canines could have been used as weapons by rival males in disputes over mates and territories.
Porro and colleagues found that the juvenile already had a fully-developed set of canines.
"The fact that canines are present at such an early stage of growth strongly suggests that this is not a sexually dimorphic character because such characters tend to appear later in life," said Butler.
Instead, the researchers suspect that the canines were used as defensive weapons against predators, or for adding occasional small animals such as insects, small mammals and reptiles to a diet composed mainly of plants--what the authors refers to as "occasional omnivory."
The study created a new mystery, however. With the aid of X-rays and CT scans, Porro found a complete lack of replacement teeth in the adult and juvenile skulls.
Most reptiles, including living crocodiles and lizards, replace their teeth constantly throughout their lives, so that sharp, unworn teeth are always available. The same was true for dinosaurs. Most mammals, on the other hand, replace their teeth only once during their lives, allowing the upper and lower teeth to develop a tight, precise fit.
Heterodontosaurus was more similar to mammals, not only in the specialized, variable shape of its teeth but also in replacing its teeth slowly, if at all, and developing tight tooth-to-tooth contact. "Tooth replacement must have occurred during growth," the authors conclude, "however, evidence of continuous tooth replacement appears to be absent, in both adult and juvenile specimens."
The research was funded by the Royal Society, Cambridge University and the Gates Cambridge Trust.
| http://www.science20.com/news_releases/heterodontosaurus_tiny_two_inch_dinosaur_has_big_insight_evolution_plant_eaters |
4.1875 | Swedish emigration to the United States
During the Swedish emigration to the United States in the 19th and early 20th centuries, about 1.3 million Swedes left Sweden for the United States. The main pull was the availability of low cost, high quality farm land in the upper Midwest (the area from Illinois to Montana), and high paying jobs in mechanical industries and factories in Chicago, Minneapolis, Worcester and many smaller cities. Religious freedom was also a pull factor for some. Most migration was of the chain form, with early settlers giving reports and recommendations (and travel money) to relatives and friends in Sweden, who followed the same route to new homes. A major push factor inside Sweden was population growth and the growing shortage of good farm lands. Additional factors in the earliest stages of emigration included crop failures, the lack of industrial jobs in urban Sweden, and for some the wish to escape the authority of an established state church. After 1870, transatlantic fares were cheap. By the 1880s, American railroads had agents in Sweden who offered package deals on one-way tickets for entire families. The railroad would ship the family, their house furnishings and farm tools, and provide a financial deal to spread out payments for the farm over a period of years.
Swedish migration peaked 1870-1900. By 1890, the U.S. census reported a Swedish-American population of nearly 800,000. Many of the immigrants became classic pioneers, clearing and cultivating the prairies of the Great Plains, while others remained in the cities, particularly Chicago. Single young women usually went straight from agricultural work in the Swedish countryside to jobs as housemaids. Many established Swedish Americans visited the old country in the later 19th century, their narratives illustrating the difference in customs and manners. Some made the journey with the intention of spending their declining years in Sweden.
After a dip in the 1890s, emigration rose again, causing national alarm in Sweden. At this time, Sweden's economy had developed substantially, but the higher wages prevailing in the United States retained their attractiveness. A broad-based parliamentary emigration commission was instituted in 1907. It recommended social and economic reform in order to reduce emigration by "bringing the best sides of America to Sweden". The commission's major proposals were rapidly implemented: universal male suffrage, better housing, general economic development, and broader popular education, measures which also can be attributed to numerous other factors. The effect of these measures on migration is hard to assess, as World War I (1914–1918) broke out the year after the commission published its last volume, reducing emigration to a mere trickle. From the mid-1920s, there was no longer a Swedish mass emigration.
Early history: the Swedish-American dream
The Swedish West India Company established a colony on the Delaware River in 1638, naming it New Sweden. A small, short-lived colonial settlement, New Sweden contained at its height only some 600 Swedish and Finnish settlers (Finland being part of Sweden). It was lost to the Dutch in New Netherland in 1655. Nevertheless, the descendants of the original colonists maintained spoken Swedish until the late 18th century. Modern day reminders of the history of New Sweden are reflected in the presence of the American Swedish Historical Museum in Philadelphia, Fort Christina State Park in Wilmington, Delaware, and The Printzhof in Essington, Pennsylvania.
The historian H. A. Barton has suggested that the greatest significance of New Sweden was the strong and long-lasting interest in America that the colony generated in Sweden. America was seen as the standard-bearer of liberalism and personal freedom, and became an ideal for liberal Swedes. Their admiration for America was combined with the notion of a past Swedish Golden Age with ancient Nordic ideals. Supposedly corrupted by foreign influences, the timeless "Swedish values" would be recovered by Swedes in the New World. This remained a fundamental theme of Swedish, and later Swedish-American, discussion of America, though the recommended "timeless" values changed over time. In the 17th and 18th centuries, Swedes who called for greater religious freedom would often refer to America as the supreme symbol of it. The emphasis shifted from religion to politics in the 19th century, when liberal citizens of the hierarchic Swedish class society looked with admiration to the American Republicanism and civil rights. In the early 20th century, the Swedish-American dream even embraced the idea of a welfare state responsible for the well-being of all its citizens. Underneath these shifting ideas ran from the start the current which carried all before it in the later 20th century: America as the symbol and dream of unfettered individualism.
Swedish debate about America remained mostly theoretical before the 19th century, since very few Swedes had any personal experience of the nation. Emigration was illegal and population was seen as the wealth of nations. However, the Swedish population doubled between 1750 and 1850, and as population growth outstripped economic development, it gave rise to fears of overpopulation based on the influential population theory of Thomas Malthus. In the 1830s, the laws against emigration were repealed.
Akenson argues that hard times in Sweden before 1867 produced a strong push effect, but that for cultural reasons most Swedes refused to emigrate and clung on at home. Akenson says the state wanted to keep its population high and:
- The upper classes' need for a cheap and plentiful labor force, the instinctive willingness of the clergy of the state church to discourage emigration on both moral and social grounds, and the deference of the lower orders to the arcade of powers that hovered above them—all these things formed an architecture of cultural hesitancy concerning emigration.
A few "countercultural" deviants from the mainstream did leave and showed the way. The severe economic hardship of the "Great Deprivation" of 1867 to 1869, finally overcame the reluctance and the floodgates opened to produce an "emigration culture""
European mass emigration: push and pull
Large-scale European emigration to the United States started in the 1840s in Britain, Ireland and Germany. That was followed by a rising wave after 1850 from most Northern European countries, and in turn by Central and Southern Europe. Research into the forces behind this European mass emigration has relied on sophisticated statistical methods. One theory which has gained wide acceptance is Jerome's analysis in 1926 of the "push and pull" factors—the impulses to emigration generated by conditions in Europe and the U.S. respectively. Jerome found that fluctuations in emigration co-varied more with economic developments in the U.S. than in Europe, and deduced that the pull was stronger than the push. Jerome's conclusions have been challenged, but still form the basis of much work on the subject.
Emigration patterns in the Nordic countries (Finland, Sweden, Norway, Denmark, and Iceland) show striking variation. Nordic mass emigration started in Norway, which also retained the highest rate throughout the century. Swedish emigration got underway in the early 1840s and reached the third-highest rate in all of Europe, after Ireland and Norway. Denmark had a consistently low rate of emigration, while Iceland had a late start but soon reached levels comparable to Norway. Finland, which was at the time part of the Russian Empire and whose mass emigration did not start until the late 1880s, is usually classified as part of the Eastern European wave.
Crossing the Atlantic
The first European emigrants travelled in the holds of sailing cargo ships. With the advent of the age of steam, an efficient transatlantic passenger transport mechanism was established at the end of the 1860s. It was based on huge ocean liners run by international shipping lines, most prominently Cunard, White Star, and Inman. The speed and capacity of the large steamships meant that tickets became cheaper. From the Swedish port towns of Stockholm, Malmö and Gothenburg, transport companies operated various routes, some of them with complex early stages and consequently a long and trying journey on the road and at sea. Thus North German transport agencies relied on the regular Stockholm—Lübeck steamship service to bring Swedish emigrants to Lübeck, and from there on German train services to take them to Hamburg or Bremen. There they would board ships to the British ports of Southampton and Liverpool and change to one of the great transatlantic liners bound for New York. The majority of Swedish emigrants, however, travelled from Gothenburg to Hull, UK, on dedicated boats run by the Wilson Line, then by train across Britain to Liverpool and the big ships.
During the later 19th century, the major shipping lines financed Swedish emigrant agents and paid for the production of large quantities of emigration propaganda. Much of this promotional material, such as leaflets, was produced by immigration promoters in the U.S. Propaganda and advertising by shipping line agents was often blamed for emigration by the conservative Swedish ruling class, which grew increasingly alarmed at seeing the agricultural labor force leave the country. It was a Swedish 19th-century cliché to blame the falling ticket prices and the pro-emigration propaganda of the transport system for the craze of emigration, but modern historians have varying views about the real importance of such factors. Brattne and Åkerman have examined the advertising campaigns and the ticket prices as a possible third force between push and pull. They conclude that neither advertisements nor pricing had any decisive influence on Swedish emigration. While the companies remain unwilling, as of 2007, to open their archives to researchers, the limited sources available suggest that ticket prices did drop in the 1880s, but remained on average artificially high because of cartels and price-fixing. On the other hand, H. A. Barton states that the cost of crossing the Atlantic dropped drastically between 1865 and 1890, encouraging poorer Swedes to emigrate. The research of Brattne and Åkerman has shown that the leaflets sent out by the shipping line agents to prospective emigrants would not so much celebrate conditions in the New World, as simply emphasize the comforts and advantages of the particular company. Descriptions of life in America were unvarnished, and the general advice to emigrants brief and factual. Newspaper advertising, while very common, tended to be repetitive and stereotyped in content.
Swedish mass migration took off in the spring of 1841 with the departure of Uppsala University graduate Gustaf Unonius (1810–1902) together with his wife, a maid, and two students. This small group founded a settlement they named New Upsala in Waukesha County, Wisconsin, and began to clear the wilderness, full of enthusiasm for frontier life in "one of the most beautiful valleys the world can offer". After moving to Chicago, Unonius soon became disillusioned with life in the U.S., but his reports in praise of the simple and virtuous pioneer life, published in the liberal newspaper Aftonbladet, had already begun to draw Swedes westward.
The rising Swedish exodus was caused by economic, political, and religious conditions particularly affecting the rural population. Europe was in the grip of an economic depression. In Sweden, population growth and repeated crop failures were making it increasingly difficult to make a living from the tiny land plots on which at least three quarters of the inhabitants depended. Rural conditions were especially bleak in the stony and unforgiving Småland province, which became the heartland of emigration. The American Midwest was an agricultural antipode to Småland, for it, as Unonius reported in 1842, "more closely than any other country in the world approaches the ideal which nature seems to have intended for the happiness and comfort of humanity." Prairie land in the Midwest was ample, loamy, and government-owned. From 1841 it was sold to squatters for $1.25 per acre (equivalent to about $29 per acre, or $72 per hectare, as of 2016), following the Preemption Act of 1841 (later replaced by the Homestead Act). The inexpensive and fertile land of Illinois, Iowa, Minnesota, and Wisconsin was irresistible to landless and impoverished European peasants. It also attracted more well-established farmers.
The political freedom of the American republic exerted a similar pull. Swedish peasants were some of the most literate in Europe, and consequently had access to the European egalitarian and radical ideas that culminated in the Revolutions of 1848. The clash between Swedish liberalism and a repressive monarchist regime raised political awareness among the disadvantaged, many of whom looked to the U.S. to realize their republican ideals.
Dissenting religious practitioners also widely resented the treatment they received from the Lutheran State Church through the Conventicle Act. Conflicts between local worshipers and the new churches were most explosive in the countryside, where dissenting pietist groups were more active, and were more directly under the eye of local law enforcement and the parish priest. Before non-Lutheran churches were granted toleration in 1809, clampdowns on illegal forms of worship and teaching often provoked whole groups of pietists to leave together, intent on forming their own spiritual communities in the new land. The largest contingent of such dissenters, 1,500 followers of Eric Jansson, left in the late 1840s and founded a community in Bishop Hill, Illinois.
The first Swedish emigrant guidebook was published as early as 1841, the year Unonius left, and nine handbooks were published between 1849 and 1855. Substantial groups of lumberjacks and iron miners were recruited directly by company agents in Sweden. Agents recruiting construction builders for American railroads also appeared, the first in 1854, scouting for the Illinois Central Railroad.
The Swedish establishment disapproved intensely of emigration. Seen as depleting the labor force and as a defiant act among the lower orders, emigration alarmed both the spiritual and the secular authorities. Many emigrant diaries and memoirs feature an emblematic early scene in which the local clergy warns travellers against risking their souls among foreign heretics. The conservative press described emigrants as lacking in patriotism and moral fibre: "No workers are more lazy, immoral and indifferent than those who immigrate to other places." Emigration was denounced as an unreasoning "mania" or "craze", implanted in an ignorant populace by "outside agents". The liberal press retorted that the "lackeys of monarchism" failed to take into account the miserable conditions in the Swedish countryside and the backwardness of Swedish economic and political institutions. "Yes, emigration is indeed a 'mania'", wrote the liberal Göteborgs Handels- och Sjöfartstidning sarcastically, "The mania of wanting to eat one's fill after one has worked oneself hungry! The craze of wanting to support oneself and one's family in an honest manner!"
The great famine of 1866–68, and the distrust and discontent over the way the establishment distributed relief, are estimated to have contributed greatly to the rising Swedish emigration to the United States.
Late 19th century
Swedish emigration to the United States reached its height in the 1870-1900 era. The size of the Swedish-American community in 1865 is estimated at 25,000 people, a figure soon to be surpassed by the yearly Swedish immigration. By 1890 the U.S. census reported a Swedish-American population of nearly 800,000, with immigration peaking in 1869 and again in 1887. Most of this influx settled in the North. The great majority of them had been peasants in the old country, pushed away from Sweden by disastrous crop failures and pulled towards America by the cheap land resulting from the 1862 Homestead Act. Most immigrants became pioneers, clearing and cultivating the virgin land of the Midwest and extending the pre-Civil War settlements further west, into Kansas and Nebraska. Once sizable Swedish farming communities had formed on the prairie, the greatest impetus for further peasant migration came through personal contacts. The iconic "America-letter" to relatives and friends at home spoke directly from a position of trust and shared background, carrying immediate conviction. At the height of migration, familial America-letters could lead to chain reactions which would all but depopulate some Swedish parishes, dissolving tightly knit communities which then re-assembled in the Midwest.
Other forces worked to push the new immigrants towards the cities, particularly Chicago. According to historian H. Arnold Barton, the cost of crossing the Atlantic dropped by more than half between 1865 and 1890, which led to progressively poorer Swedes contributing a growing share of immigration (but compare Brattne and Åkerman, see "Crossing the Atlantic" above). The new immigrants were increasingly younger and unmarried. With the shift from family to individual immigration came a faster and fuller Americanization, as young, single individuals with little money took whatever jobs they could get, often in cities. Large numbers even of those who had been farmers in the old country made straight for American cities and towns, living and working there at least until they had saved enough capital to marry and buy farms of their own. A growing proportion stayed in urban centers, combining emigration with the flight from the countryside which was happening in the homeland and all across Europe.
Single young women, a group Barton considers particularly significant, most commonly moved straight from field work in rural Sweden to jobs as live-in housemaids in urban America. "Literature and tradition have preserved the often tragic image of the pioneer immigrant wife and mother", writes Barton, "bearing her burden of hardship, deprivation and longing on the untamed frontier ... More characteristic among the newer arrivals, however, was the young, unmarried woman ... As domestic servants in America, they ... were treated as members of the families they worked for and like 'ladies' by American men, who showed them a courtesy and consideration to which they were quite unaccustomed at home." They found employment easily, as Scandinavian maids were in high demand, and learned the language and customs quickly. In contrast, newly arrived Swedish men were often employed in all-Swedish work gangs. The young women usually married Swedish men, and brought with them in marriage an enthusiasm for ladylike, American manners and middle-class refinements. Many admiring remarks are recorded from the late 19th century about the sophistication and elegance that simple Swedish farm girls would gain in a few years, and about their unmistakably American demeanor.
As ready workers, the Swedes were generally welcomed by the Americans, who often singled them out as the "best" immigrants. There was no significant anti-Swedish nativism of the sort that attacked Irish, German and, especially, Chinese newcomers. The Swedish style was more familiar: "They are not peddlers, nor organ grinders, nor beggars; they do not sell ready-made clothing nor keep pawn shops", wrote the Congregational missionary M. W. Montgomery in 1885; "they do not seek the shelter of the American flag merely to introduce and foster among us ... socialism, nihilism, communism ... they are more like Americans than are any other foreign peoples."
A number of well-established and longtime Swedish Americans visited Sweden in the 1870s, making comments that give historians a window on the cultural contrasts involved. A group from Chicago made the journey in an effort to remigrate and spend their later years in the country of their birth, but changed their minds when faced with the realities of 19th-century Swedish society. Uncomfortable with what they described as the social snobbery, pervasive drunkenness, and superficial religious life of the old country, they returned promptly to America. The most notable visitor was Hans Mattson (1832–1893), an early Minnesota settler who had served as a colonel in the Union Army and had been Minnesota's secretary of state. He visited Sweden in 1868–69 to recruit settlers on behalf of the Minnesota Immigration Board, and again in the 1870s to recruit for the Northern Pacific Railroad. Viewing Swedish class snobbery with indignation, Mattson wrote in his Reminiscences that this contrast was the key to the greatness of America, where "labor is respected, while in most other countries it is looked down upon with slight". He was sardonically amused by the ancient pageantry of monarchy at the ceremonial opening of the Riksdag: "With all respects for old Swedish customs and manners, I cannot but compare this pageant to a great American circus—minus the menagerie, of course."
Mattson's first recruiting visit came immediately after consecutive seasons of crop failure in 1867 and 1868, and he found himself "besieged by people who wished to accompany me back to America." He noted that:
the laboring and middle classes already at that time had a pretty correct idea of America, and the fate that awaited emigrants there; but the ignorance, prejudice and hatred toward America and everything pertaining to it among the aristocracy, and especially the office holders, was as unpardonable as it was ridiculous. It was claimed by them that all was humbug in America, that it was the paradise of scoundrels, cheats, and rascals, and that nothing good could possibly come out of it.
A more recent American immigrant, Ernst Skarstedt, who visited Sweden in 1885, received the same galling impression of upper-class arrogance and anti-Americanism. The laboring classes, in their turn, appeared to him coarse and degraded, drinking heavily in public, speaking in a stream of curses, making obscene jokes in front of women and children. Skarstedt felt surrounded by "arrogance on one side and obsequiousness on the other, a manifest scorn for menial labor, a desire to appear to be more than one was". This traveller too was incessantly hearing American civilization and culture denigrated from the depths of upper-class Swedish prejudice: "If I, in all modesty, told something about America, it could happen that in reply I was informed that this could not possibly be so or that the matter was better understood in Sweden."
Swedish emigration dropped dramatically after 1890; return migration rose as conditions in Sweden improved. Sweden underwent a rapid industrialization within a few years in the 1890s, and wages rose, principally in the fields of mining, forestry, and agriculture. The pull from the U.S. declined even more sharply than the Swedish "push", as the best farmland was taken. No longer growing but instead settling and consolidating, the Swedish-American community seemed set to become ever more American and less Swedish. The new century, however, saw a new influx.
Parliamentary Emigration Commission 1907–1913
Emigration rose again at the turn of the 20th century, reaching a new peak of about 35,000 Swedes in 1903. Figures remained high until World War I, alarming both conservative Swedes, who saw emigration as a challenge to national solidarity, and liberals, who feared the disappearance of the labor force necessary for economic development. One-fourth of all Swedes had made the United States their home, and a broad national consensus mandated that a Parliamentary Emigration Commission study the problem in 1907. Approaching the task with what Barton calls "characteristic Swedish thoroughness", the Commission published its findings and proposals in 21 large volumes. The Commission rejected conservative proposals for legal restrictions on emigration and in the end supported the liberal line of "bringing the best sides of America to Sweden" through social and economic reform. Topping the list of urgent reforms were universal male suffrage, better housing, and general economic development. The Commission especially hoped that broader popular education would counteract "class and caste differences".
Class inequality in Swedish society was a strong and recurring theme in the Commission's findings. It appeared as a major motivator in the 289 personal narratives included in the report. These documents, of great research value and human interest today, were submitted by Swedes in Canada and the U.S. in response to requests in Swedish-American newspapers. The great majority of replies expressed enthusiasm for their new homeland and criticized conditions in Sweden. Bitter experiences of Swedish class snobbery still rankled after sometimes 40–50 years in America. Writers recalled the hard work, pitiful wages, and grim poverty of life in the Swedish countryside. One woman wrote from North Dakota of how, in her Värmland home parish, she had had to earn her living in peasant households from the age of eight, starting work at four in the morning and living on "rotten herring and potatoes, served out in small amounts so that I would not eat myself sick". She could see "no hope of saving anything in case of illness", but rather could see "the poorhouse waiting for me in the distance". When she was seventeen, her emigrated brothers sent her a prepaid ticket to America, and "the hour of freedom struck".
A year after the Commission published its last volume, World War I began and reduced emigration to a mere trickle. From the 1920s, there was no longer a Swedish mass emigration. The influence of the ambitious Emigration Commission in solving the problem is still a matter of debate. Franklin D. Scott has argued in an influential essay that the American Immigration Act of 1924 was the effective cause. Barton, by contrast, points to the rapid implementation of essentially all the Commission's recommendations, from industrialization to an array of social reforms. He maintains that its findings "must have had a powerful cumulative effect upon Sweden's leadership and broader public opinion".
The Midwest remained the heartland of the Swedish-American community, but its position weakened in the 20th century: in 1910, 54% of the Swedish immigrants and their children lived in the Midwest, 15% in industrial areas in the East, and 10% on the West Coast. Chicago was effectively the Swedish-American capital, accommodating about 10% of all Swedish Americans—more than 100,000 people—making it the second-largest Swedish city in the world (only Stockholm had more Swedish inhabitants).
Defining themselves as both Swedish and American, the Swedish-American community retained a fascination for the old country and their relationship to it. The nostalgic visits to Sweden which had begun in the 1870s continued well into the 20th century, and narratives from these trips formed a staple of the lively Swedish-American publishing companies. The accounts testify to complex feelings, but each contingent of American travellers were freshly indignant at Swedish class pride and Swedish disrespect for women. It was with renewed pride in American culture that they returned to the Midwest.
In the 2000 U.S. Census, about four million Americans claimed to have Swedish roots. Minnesota remains by a wide margin the state with the most inhabitants of Swedish descent—9.6% of the population as of 2005.
The best-known artistic representation of the Swedish mass migration is the epic four-novel suite The Emigrants (1949–1959) by Vilhelm Moberg (1898–1973). Portraying the lives of an emigrant family through several generations, the novels have sold nearly two million copies in Sweden and have been translated into more than twenty languages. The tetralogy has been filmed by Jan Troell as The Emigrants (1971) and The New Land (1972), and forms the basis of Kristina from Duvemåla, a 1995 musical by former ABBA members Benny Andersson and Björn Ulvaeus.
In Sweden, the Småland city of Växjö is home to the Swedish Emigrant Institute (Svenska Emigrantinstitutet), founded in 1965 "to preserve records, interviews, and memorabilia relating to the period of major Swedish emigration between 1846 and 1930". The House of the Emigrants (Emigranternas Hus) was founded in Gothenburg, the main port for Swedish emigrants, in 2004. The centre shows exhibitions on migration and has a research hall for genealogy. In the U.S., there are hundreds of active Swedish-American organizations as of 2007, for which the Swedish Council of America functions as an umbrella group. There are Swedish-American museums in Philadelphia, Chicago, Minneapolis, and Seattle. Rural cemeteries such as the Moline Swedish Lutheran Cemetery in central Texas also serve as a valuable record of the first Swedish people to come to America.
- Nordstjernan (newspaper)
- American Swedish Historical Museum
- American Swedish Institute
- Swedish colonization of the Americas
- Swedish language in the United States
- Swedish-American relations
- Barton, A Folk Divided, 5–7.
- Kälvemark, 94–96.
- See Beijbom, "Review".
- Barton, A Folk Divided, 11.
- Donald Harman Akenson, Ireland, Sweden, and the Great European Migration, 1815-1914 (McGill-Queen's University Press; 2011) p 70
- The pictures originally illustrated a cautionary tale published in 1869 in the Swedish periodical Läsning för folket, the organ of the Society for the Propagation of Useful Knowledge (Sällskapet för nyttiga kunskapers spridande). See Barton, A Folk Divided, 71.
- Akenson, Ireland, Sweden, and the Great European Migration, 1815-1914 pp 37-39
- Åkerman, passim.
- Norman, 150–153.
- Runblom and Norman, 315.
- Norman, passim.
- Brattne and Åkerman, 179–181.
- Brattne and Åkerman, 179–181, 186–189, 199–200.
- Barton, 38.
- Brattne and Åkerman, 187–192.
- Unonius, quoted in Barton, A Folk Divided, 13.
- Quoted in Barton, A Folk Divided, 14.
- Cipollo, 115, estimates adult literacy in Sweden at 90% in 1850, which places it highest among the European countries he has surveyed.
- Gritsch, Eric W. A History of Lutheranism. Minneapolis: Fortress Press, 2002. p. 180.
- Barton, A Folk Divided, 15–16.
- Barton, A Folk Divided, 17.
- Barton, A Folk Divided, 18.
- Proclaimed in an article in the newspaper Nya Wermlandstidningen in April 1855; quoted by Barton, A Folk Divided, 20–22.
- Göteborgs Handels- och Sjöfartstidning, 1849, quoted in Barton, A Folk Divided, 24.
- 1851, quoted and translated by Barton, A Folk Divided, 24.
- Häger, Olle; Torell, Carl; Villius, Hans (1978). Ett satans år: Norrland 1867. Stockholm: Sveriges Radio. Libris 8358120. ISBN 91-522-1529-6 (inb.)
- The exact figure is 776,093 people (Barton, A Folk Divided, 37).
- 1867 and 1868 were the worst years for crop failure, which ruined many smallholders; see Barton, A Folk Divided, 37.
- Swenson Center.
- Beijbom, "Chicago"
- Barton, A Folk Divided, 38–41.
- Barton, A Folk Divided, 41.
- Quoted by Barton, A Folk Divided, 40.
- Private letters by Anders Larsson in the 1870s, summarized by Barton, A Folk Divided, 59.
- Quoted by Barton, A Folk Divided, 60–61.
- Barton, A Folk Divided, 61–62.
- Svensk-amerikanska folket i helg och söcken (Ernst Teofil Skarstedt. Stockholm: Björck & Börjesson. 1917)
- Barton, A Folk Divided, 80.
- 1.4 million first- and second-generation Swedish immigrants lived in the U.S. in 1910, while Sweden's population at the time was 5.5 million; see Beijbom, "Review".
- Barton, A Folk Divided, 149.
- The phrase is from Ernst Beckman's original liberal parliamentary motion for instituting the Commission; quoted by Barton, A Folk Divided, 149.
- Quoted from Volume VII of the Survey by Barton, A Folk Divided, 152.
- Barton, A Folk Divided, 165.
- For Swedish American publishing, see Barton, A Folk Divided, 212–213, 254.
- Barton, A Folk Divided, 103 ff.
- American FactFinder, Fact Sheet "Swedish".
- American FactFinder: Minnesota, Selected Social Characteristics in the United States, 2005.
- Moberg biography by JoAnn Hanson-Stone at the Swedish Emigrant Institute.
- "The Swedish Emigrant Institute". UtvandrarnasHus.se. Svenska Emigrantinstitutet. Archived from the original on October 5, 2013.
- House of the Emigrants.
- Scott, Larry E. "Swedish Texans". University of Texas Institute of Texan Cultures at San Antonio, 2006.
- Akenson, Donald Harman. (2011) Ireland, Sweden and the Great European Migration, 1815-1914 (McGill-Queens University Press)
- Åkerman, Sune (1976). Theories and Methods of Migration Research in Runblom and Norman, From Sweden to America, 19–75.
- American FactFinder, United States Census, 2000. Consulted 30 June 2007.
- Andersson, Benny, and Ulvaeus, Björn. Kristina from Duvemåla (musical), consulted 7 May 2007.
- Barton, H. Arnold (1994). A Folk Divided: Homeland Swedes and Swedish Americans, 1840–1940. Uppsala: Acta Universitatis Upsaliensis.
- Barton, H. Arnold Swedish America in Fifty Years—2050, a paper read to the Swedish American Historical Society on the occasion of the 1996 celebration of the Swedish Immigration Jubilee. Consulted 7 May 2007.
- Beijbom, Ulf. Chicago, the Essence of the Promised Land at the Swedish Emigrant Institute. Click on "History", then "Chicago." Consulted 6 May 2007.
- Beijbom, Ulf (1996). A Review of Swedish Emigration to America at AmericanWest.com, consulted 2 February 2007.
- Brattne, Berit, and Sune Åkerman (1976). The Importance of the Transport Sector for Mass Emigration in Runblom and Norman, From Sweden to America, 176–200.
- Cipolla, Carlo (1966). Literacy and Development in the West. Harmondsworth.
- Elovson, Harald (1930). Amerika i svensk litteratur 1750–1820. Lund.
- Glynn, Irial: Emigration Across the Atlantic: Irish, Italians and Swedes compared, 1800-1950, European History Online, Mainz: Institute of European History, 2011, retrieved: June 16, 2011.
- Kälvemark, Ann-Sofie (1976). Swedish Emigration Policy in an International Perspective, 1840–1925, in Runblom and Norman, From Sweden to America, 94–113.
- Norman, Hans (1976). The Causes of Emigration in Runblom and Norman, From Sweden to America, 149–164.
- Runblom, Harald, and Hans Norman (eds.) (1976). From Sweden to America: A History of the Migration. Minneapolis: University of Minnesota Press.
- Scott, Franklin D. (1965). Sweden's Constructive Opposition to Emigration, Journal of Modern History, Vol. 37, No. 3. (Sep., 1965), 307–335. in JSTOR
- The Swedish Emigrant Institute. Consulted 30 June 2007.
- Swenson Center, a research institute at Augustana College, Illinois. Consulted 7 May 2007.
Media related to Immigration to the United States from Sweden at Wikimedia Commons Press
- The New Sweden Centre — museum, tours and reenactors
- The Swedish-American Historical Society is a non-profit organization founded in 1948 to "Record the Achievements of the Swedish Pioneers." The society publishes the academic journal The Swedish-American Historical Quarterly
- The Swedish Emigration to America
- The Emigrant Routes to the Promised Land in America
- The Journey To America
- Sillgatan: The Emigrant Path through Göteborg
- Story of 3 sisters emigrating to America from Sweden
- From Sweden To America 1996 CD: 23 of the 31 tracks on the vinyl release.
- From Sweden To America 1981 LP: available in digital format at iTunes and Amazon mp3.
- First Swedish Settlers in Wisconsin Wisconsin Historical Markers | https://en.wikipedia.org/wiki/Swedish_emigration_to_the_United_States |
4.0625 | Tornadoes – also known as cyclones or twisters – are rotating columns of air that run between the ground and the clouds above. Weak, short-lived tornadoes can occur when there's a strong updraft within a thunderstorm, though the most powerful and devastating twisters found in a few areas of the world require very specific conditions: a "supercell" thunderstorm with a rotating area called a mesocyclone, and winds that shear, increasing and shifting direction with height.
Although the number of reported tornadoes has increased over the past few decades, scientists believe this is simply because more are being documented (partly thanks to the rise of "storm chasing" as a hobby), rather than because climate change or any other factor has made them more frequent. This fits with the fact that US reports of violent tornadoes – the kind that are hard to miss, even without storm chasing – haven't changed significantly in the entire century-long record, holding firm at around 10–20 per year.
As for the future, there's no compelling reason to expect tornadoes to become much more frequent or intense due to global warming – though climate change could have some impact on when and where they occur. For example, it's possible that "tornado season" (generally early spring in the US South and late spring to summer in the Midwest) may shift a bit earlier, and the secondary autumn season could extend later. But it's also possible, according to recent research, that warming will reduce the frequency with which the required conditions for powerful tornadoes will co-exist. While the atmosphere is generally getting warmer and moister, which can boost the instability that fuels storms, it's also possible that the wind shear that organises tornadic storms will decrease. This could tip the balance away from tornadoes and towards other thunderstorm extremes, such as heavy rain.
The ultimate climate change FAQ
This editorial is free to reproduce under Creative Commons
This post by The Guardian is licensed under a Creative Commons Attribution-No Derivative Works 2.0 UK: England & Wales License.
Based on a work at theguardian.com | http://www.theguardian.com/environment/2011/jun/01/tornadoes-climate-change?view=mobile |
4.03125 | Dead zone (ecology)
Dead zones are hypoxic (low-oxygen) areas in the world's oceans and large lakes, caused by "excessive nutrient pollution from human activities coupled with other factors that deplete the oxygen required to support most marine life in bottom and near-bottom water" (NOAA). In the 1970s oceanographers began noting increased instances of dead zones. These occur near inhabited coastlines, where aquatic life is most concentrated. (The vast middle portions of the oceans, which naturally have little life, are not considered "dead zones".)
In March 2004, when the recently established UN Environment Programme published its first Global Environment Outlook Year Book (GEO Year Book 2003), it reported 146 dead zones in the world's oceans where marine life could not be supported due to depleted oxygen levels. Some of these were as small as a square kilometre (0.4 mi²), but the largest dead zone covered 70,000 square kilometres (27,000 mi²). A 2008 study counted 405 dead zones worldwide.
Aquatic and marine dead zones can be caused by an increase in chemical nutrients (particularly nitrogen and phosphorus) in the water, known as eutrophication. These chemicals are the fundamental building blocks of single-celled, plant-like organisms that live in the water column, and whose growth is limited in part by the availability of these materials. Eutrophication can lead to rapid increases in the density of certain types of these phytoplankton, a phenomenon known as an algal bloom.
"The fish-killing blooms that devastated the Great Lakes in the 1960s and 1970s haven't gone away; they've moved west into an arid world in which people, industry, and agriculture are increasingly taxing the quality of what little freshwater there is to be had here....This isn't just a prairie problem. Global expansion of dead zones caused by algal blooms is rising rapidly...(Schindler and Vallentyne 2008) "
The major groups of algae are cyanobacteria, green algae, dinoflagellates, coccolithophores and diatoms. An increase in the input of nitrogen and phosphorus generally causes cyanobacteria to bloom, and this causes dead zones. Cyanobacteria are not good food for zooplankton and fish and hence accumulate in the water, die, and then decompose. Other algae are consumed and hence do not accumulate to the same extent as cyanobacteria. Dead zones can be caused by natural and by anthropogenic factors. Use of chemical fertilizers is considered the major human-related cause of dead zones around the world. Natural causes include coastal upwelling and changes in wind and water circulation patterns. Runoff from sewage, urban land use, and fertilizers can also contribute to eutrophication.
Notable dead zones in the United States include the northern Gulf of Mexico region, surrounding the outfall of the Mississippi River, and the coastal regions of the Pacific Northwest, and the Elizabeth River in Virginia Beach, all of which have been shown to be recurring events over the last several years.
Additionally, natural oceanographic phenomena can cause deoxygenation of parts of the water column. For example, enclosed bodies of water, such as fjords or the Black Sea, have shallow sills at their entrances, causing water to be stagnant there for a long time. The eastern tropical Pacific Ocean and northern Indian Ocean have lowered oxygen concentrations, which are thought to occur in regions where there is minimal circulation to replace the oxygen that is consumed (e.g. Pickard & Emery 1982, p 47). These areas are also known as oxygen minimum zones (OMZ). In many cases, OMZs are permanent or semipermanent areas.
Remains of organisms found within sediment layers near the mouth of the Mississippi River indicate four hypoxic events before the advent of artificial fertilizer. In these sediment layers, anoxia-tolerant species are the most prevalent remains found. The periods indicated by the sediment record correspond to historic records of high river flow recorded by instruments at Vicksburg, Mississippi.
Changes in ocean circulation triggered by ongoing climate change could also add to or magnify other causes of oxygen reductions in the ocean.
In a study of the Gulf killifish conducted by Southeastern Louisiana University in three bays along the Gulf Coast, fish living in bays where the oxygen levels in the water dropped to 1 to 2 parts per million (ppm) for three or more hours per day were found to have smaller reproductive organs. The male gonads were 34% to 50% as large as those of males of similar size in bays where the oxygen levels were normal (6 to 8 ppm). Females were found to have ovaries that were half as large as those in normal oxygen levels. The number of eggs in females living in hypoxic waters was only one-seventh the number of eggs in fish living in normal oxygen levels. (Landry, et al., 2004)
Fish raised in laboratory-created hypoxic conditions showed extremely low sex hormone concentrations and elevated activity in two genes triggered by the hypoxia-inducible factor (HIF) protein. Under hypoxic conditions, HIF pairs with another protein, ARNT. The two then bind to DNA in cells, activating genes in those cells.
Under normal oxygen conditions, ARNT combines with estrogen to activate genes. Hypoxic cells in vitro did not react to estrogen placed in the tube. HIF appears to render ARNT unavailable to interact with estrogen, providing a mechanism by which hypoxic conditions alter reproduction in fish. (Johanning, et al., 2004)
It might be expected that fish would flee this potential suffocation, but they are often quickly rendered unconscious and doomed. Slow moving bottom-dwelling creatures like clams, lobsters and oysters are unable to escape. All colonial animals are extinguished. The normal re-mineralization and recycling that occurs among benthic life-forms is stifled.
Mora et al. 2013 showed that future changes in oxygen could affect most marine ecosystems and have socio-economic ramifications due to human dependency on marine goods and services.
In the 1970s, marine dead zones were first noted in European settled areas where intensive economic use stimulated scientific scrutiny: in the U.S. East Coast's Chesapeake Bay; in Scandinavia's strait called the Kattegat, which is the mouth of the Baltic Sea, and in other important Baltic Sea fishing grounds; in the Black Sea (which may, however, have been anoxic in its deepest levels for millennia); and in the northern Adriatic.
A dead zone exists in the central part of Lake Erie from east of Point Pelee to Long Point and stretches to the shores of Canada and the United States. The zone has been noticed since the 1950s and 1960s, but since the 1970s Canada and the US have made efforts to reduce runoff pollution into the lake as a means of reversing the dead zone's growth. Overall the lake's oxygen level is poor, with only a small area to the east of Long Point having better levels. The biggest impact of the poor oxygen levels is on lacustrine life and the fisheries industry.
Lower St. Lawrence Estuary
A dead zone exists in the Lower St. Lawrence River area from east of the Saguenay River to east of Baie Comeau, greatest at depths over 275 metres (902 ft), and has been noticed since the 1930s. The main concern for Canadian scientists is the impact on fish found in the area.
Off the coast of Cape Perpetua, Oregon, there is also a dead zone with a 2006 reported size of 300 square miles (780 km²). This dead zone only exists during the summer, perhaps due to wind patterns. Hypoxic water has also been observed moving from the continental shelf into Oregon's coastal embayments, a pattern that appears to be associated with upwelling winds, which draw low-oxygen water from depth toward the coast.
Gulf of Mexico 'Dead Zone'
The area of temporary hypoxic bottom water that occurs most summers off the coast of Louisiana in the Gulf of Mexico is the largest recurring hypoxic zone in the United States. The Mississippi River, which drains 41% of the continental United States, delivers high-nutrient runoff such as nitrogen and phosphorus into the Gulf of Mexico. According to a 2009 fact sheet created by NOAA, "seventy percent of nutrient loads that cause hypoxia are a result of this vast drainage basin", which includes the heart of U.S. agribusiness, the Midwest. The discharge of treated sewage from urban areas (pop. c. 12 million in 2009) combined with agricultural runoff delivers c. 1.7 million tons of phosphorus and nitrogen into the Gulf of Mexico every year.
Size of Gulf of Mexico 'Dead Zone'
The area of hypoxic bottom water that occurs for several weeks each summer in the Gulf of Mexico has been mapped most years from 1985 through 2014. The size varies annually, from a record high in 2002, when it encompassed more than 21,756 square kilometers (8,400 square miles), to a record low in 1988 of 39 square kilometers (15 square miles). Nancy Rabalais of the Louisiana Universities Marine Consortium in Cocodrie predicted that the dead zone, or hypoxic zone, in 2012 would cover an area of 17,353 square kilometers (6,700 square miles), which is larger than Connecticut; however, when the measurements were completed, the area of hypoxic bottom water in 2012 totaled only 7,480 square kilometers (about 2,900 square miles). The models that use the nitrogen flux from the Mississippi River to predict the "dead zone" area have been criticized for being systematically high from 2006 to 2014, having predicted record areas in 2007, 2008, 2009, 2011, and 2013 that were never realized.
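Because the reported extents above mix metric and U.S. units, a quick conversion check can help when comparing figures across years. The following minimal Python sketch only illustrates that arithmetic (the conversion factor is the standard 1 square mile ≈ 2.59 square kilometres; the sample figures are the ones quoted above):

    # Convert reported Gulf of Mexico hypoxic-zone extents between
    # square kilometres and square miles for easy comparison.
    KM2_PER_MI2 = 2.58999  # 1 square mile = 2.58999 square kilometres

    def km2_to_mi2(area_km2: float) -> float:
        """Return an area given in square kilometres as square miles."""
        return area_km2 / KM2_PER_MI2

    # Figures quoted in the paragraph above.
    reported = [(2002, 21_756), (1988, 39), (2012, 7_480)]
    for year, area_km2 in reported:
        print(f"{year}: {area_km2:,} km^2 ~ {km2_to_mi2(area_km2):,.0f} mi^2")
    # Prints roughly: 2002 -> 8,400 mi^2, 1988 -> 15 mi^2, 2012 -> 2,888 mi^2

The printed values match the parenthetical conversions given in the text, which is a simple way to sanity-check figures drawn from different sources.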
In late summer 1988 the dead zone disappeared as the great drought caused the flow of the Mississippi to fall to its lowest level since 1933. During times of heavy flooding in the Mississippi River Basin, as in 1993, "the "dead zone" dramatically increased in size, approximately 5,000 km (3,107 mi) larger than the previous year."
Economic Impact of Gulf of Mexico 'Dead Zone'
Some assert that the dead zone threatens lucrative commercial and recreational fisheries in the Gulf of Mexico. "In 2009, the dockside value of commercial fisheries in the Gulf was $629 million. Nearly three million recreational fishers further contributed about $10 billion to the Gulf economy, taking 22 million fishing trips." Scientists are not in universal agreement that nutrient loading has a negative impact on fisheries. Grimes makes a case that nutrient loading enhances the fisheries in the Gulf of Mexico. Courtney et al. assert that nutrient loading has made significant contributions to the increases in red snapper production in the northern and western Gulf of Mexico.
History of Gulf of Mexico 'Dead Zone'
Shrimp trawlers first reported a 'dead zone' in the Gulf of Mexico in 1950, but it was not until 1970, when the size of the hypoxic zone had increased, that scientists began to investigate.
The conversion of forests and wetlands for agricultural and urban developments accelerated after 1950. "Missouri River Basin has had hundreds of thousands of acres of forests and wetlands (66,000,000 acres) replaced with agriculture activity [. . .] In the Lower Mississippi one third of the valley's forests were converted to agriculture between 1950 and 1976."
Energy Independence and Security Act of 2007
The Energy Independence and Security Act of 2007 calls for the production of 36 billion US gallons (140,000,000 m3) of renewable fuels by 2022, including 15 billion US gallons (57,000,000 m3) of corn-based ethanol, a tripling of current production that would require a similar increase in corn production. The plan, however, poses a new problem: the increase in demand for corn production results in a proportional increase in nitrogen runoff. Although nitrogen, which makes up 78% of the Earth's atmosphere, is an inert gas, it has more reactive forms, one of which is used to make fertilizer.
According to Fred Below, a professor of crop physiology at the University of Illinois at Urbana-Champaign, corn requires more nitrogen-based fertilizer because it produces a higher grain yield per unit area than other crops and, unlike other crops, corn is completely dependent on the nitrogen available in soil. The results, reported 18 March 2008 in Proceedings of the National Academy of Sciences, showed that scaling up corn production to meet the 15-billion-US-gallon (57,000,000 m3) goal would increase nitrogen loading in the Dead Zone by 10–18%. This would boost nitrogen levels to twice the level recommended by the Mississippi Basin/Gulf of Mexico Water Nutrient Task Force (Mississippi River Watershed Conservation Programs), a coalition of federal, state, and tribal agencies that has monitored the dead zone since 1997. The task force says a 30% reduction of nitrogen runoff is needed if the dead zone is to shrink.
Dead zones are reversible, though the extinction of organisms lost when they appear is not. The Black Sea dead zone, previously the largest in the world, largely disappeared between 1991 and 2001 after fertilizers became too costly to use following the collapse of the Soviet Union and the demise of centrally planned economies in Eastern and Central Europe. Fishing has again become a major economic activity in the region.
While the Black Sea "cleanup" was largely unintentional and involved a drop in hard-to-control fertilizer usage, the U.N. has advocated other cleanups by reducing large industrial emissions. From 1985 to 2000, the North Sea dead zone had nitrogen reduced by 37% when policy efforts by countries on the Rhine River reduced sewage and industrial emissions of nitrogen into the water. Other cleanups have taken place along the Hudson River and San Francisco Bay.
- Aquatic Dead Zones NASA Earth Observatory. Revised 17 July 2010. Retrieved 17 January 2010.
- "NOAA: Gulf of Mexico 'dead zone' predictions feature uncertainty". National Oceanic and Atmospheric Administration (NOAA). June 21, 2012. Retrieved June 23, 2012.
- David Perlman, Chronicle Science Editor (2008-08-15). "Scientists alarmed by ocean dead-zone growth". Sfgate.com. Retrieved 2010-08-03.
- Diaz, R. J.; Rosenberg, R. (2008-08-15). "Spreading Dead Zones and Consequences for Marine Ecosystems". Science 321 (5891): 926–9. doi:10.1126/science.1156401. PMID 18703733.
- "Blooming horrible: Nutrient pollution is a growing problem all along the Mississippi". The Economist. Retrieved June 23, 2012.
- David W. Schindler; John R. Vallentyne (2008). The Algal Bowl: Overfertilization of the World's Freshwaters and Estuaries. Edmonton, Alberta: University of Alberta Press. Retrieved June 23, 2012.
- "Whole Lake Experiment, Ford Lake, Prof Lehman"
- Corn boom could expand 'dead zone' in Gulf
- Mora, C.; et al. (2013). "Biotic and Human Vulnerability to Projected Changes in Ocean Biogeochemistry over the 21st Century". PLOS Biology 11: e1001682. doi:10.1371/journal.pbio.1001682. PMC 3797030. PMID 24143135.
- Diaz, R. J.; Rutger Rosenberg (August 15, 2008). "Supporting Online Material for Spreading Dead Zones and Consequences for Marine Ecosystems" (PDF). Science 321 (926): 926–9. doi:10.1126/science.1156401. PMID 18703733. Retrieved 2010-08-13.
- "Dead Zones".
- "Will "Dead Zones" Spread in the St. Lawrence River?".
- Griffis, R. and Howard, J. [Eds.]. 2013. Oceans and Marine Resources in a Changing Climate: A Technical Input to the 2013 National Climate Assessment. Washington, DC: Island Press
- "NOAA: Gulf of Mexico 'Dead Zone' Predictions Feature Uncertainty". U.S. Geological Survey (USGS). June 21, 2012. Retrieved June 23, 2012.
- "What is hypoxia?". Louisiana Universities Marine Consortium (LUMCON). Retrieved May 18, 2013.
- "Dead Zone: Hypoxia in the Gulf of Mexico" (PDF). NOAA. 2009. Retrieved June 23, 2012.
- Lochhead, Carolyn (2010-07-06). "Dead zone in gulf linked to ethanol production". San Francisco Chronicle. Retrieved 2010-07-28.
- Courtney et al. Predictions Wrong Again on Dead Zone Area - Gulf of Mexico Gaining Resistance to Nutrient Loading. http://arxiv.org/ftp/arxiv/papers/1307/1307.8064.pdf
- Lisa M. Fairchild (2005). The influence of stakeholder groups on the decision making process regarding the dead zone associated with the Mississippi river discharge (Master of Science). University of South Florida (USF). p. 14.
- Grimes, C. B. Fishery production and the Mississippi River discharge. Fisheries (2001) 26(8), 17-26.
- Courtney et al. Nutrient Loading Increases Red Snapper Production in the Gulf of Mexico. http://hy-ls.org/index.php/hyls/article/view/100/87
- Jennie Biewald; Annie Rossetti; Joseph Stevens; Wei Cheih Wong. The Gulf of Mexico's Hypoxic Zone (Report).
- Cox, Tony (2007-07-23). "Exclusive". Bloomberg.com. Retrieved 2010-08-03.
- Potera, Carol (June 2008). "Corn Ethanol Goal Revives Dead Zone Concerns". Environmental Health Perspectives.
- "Dead Water". Economist. May 2008.
- Mee, Laurence (November 2006). "Reviving Dead Zones". Scientific American.
- 'Dead Zones' Multiplying In World's Oceans by John Nielsen. 15 Aug 2008, Morning Edition, NPR.
- "Wisconsin Department of Natural Resources" (PDF). Retrieved 2010-08-03.
- Diaz, R.J.; Rosenberg, R. (2008). "Spreading dead zones and consequences for marine ecosystems". Science 321 (5891): 926–929. doi:10.1126/science.1156401. PMID 18703733.
- Osterman, L.E., et al. 2004. Reconstructing a 180-yr record of natural and anthropogenic induced hypoxia from the sediments of the Louisiana Continental Shelf. Geological Society of America meeting. Nov. 7-10. Denver. Abstract.
- Pickard, G.L. and Emery, W.J. 1982. Descriptive Physical Oceanography: An Introduction. Pergamon Press, Oxford, 249 pp.
- Landry, C.A., S. Manning, and A.O. Cheek. 2004. Hypoxia suppresses reproduction in Gulf killifish, Fundulus grandis. e.hormone 2004 conference. Oct. 27-30. New Orleans.
- Johanning, K., et al. 2004. Assessment of molecular interaction between low oxygen and estrogen in fish cell culture. Fourth SETAC World Congress, 25th Annual Meeting in North America. Nov. 14-18. Portland, Ore. Abstract.
- Taylor, F.J.; Taylor, N.J.; Walsby, J.R. (1985). "A bloom of the planktonic diatom Cerataulina pelagica off the coast of northeastern New Zealand in 1983, and its contribution to an associated mortality of fish and benthic fauna". Internationale Revue der gesamten Hydrobiologie 70: 773–795. doi:10.1002/iroh.19850700602.
- Morrisey, D.J. (2000). "Predicting impacts and recovery of marine farm sites in Stewart Island New Zealand, from the Findlay-Watling model". Aquaculture 185: 257–271. doi:10.1016/s0044-8486(99)00360-9.
- Potera, C (2008). "Corn Ethanol Goal Revives Dead Zone Concerns". Environmental Health Perspectives 116 (6): A242–A243. doi:10.1289/ehp.116-a242.
- David Stauth (Oregon State University), "Hypoxic "dead zone" growing off the Oregon Coast" July 31, 2006
- Suzie Greenhalgh and Amanda Sauer (WRI), "Awakening the 'Dead Zone': An investment for agriculture, water quality, and climate change" 2003
- NutrientNet, an online nutrient trading tool developed by the World Resources Institute, designed to address issues of eutrophication. See also the PA NutrientNet website designed for Pennsylvania's nutrient trading program.
- Reyes Tirado (July 2008) Dead Zones: How Agricultural Fertilizers are Killing our Rivers, Lakes and Oceans. Greenpeace publications. See also: "Dead Zones: How Agricultural Fertilizers are Killing our Rivers, Lakes and Oceans". Greenpeace Canada. 2008-07-07. Retrieved 2010-08-03.
- MSNBC report on dead zones, March 29, 2004
- Joel Achenbach, "A 'Dead Zone' in The Gulf of Mexico: Scientists Say Area That Cannot Support Some Marine Life Is Near Record Size", Washington Post, July 31, 2008
- Joel Achenbach, "'Dead Zones' Appear In Waters Worldwide: New Study Estimates More Than 400", Washington Post, August 15, 2008
- Louisiana Universities Marine Consortium
- UN Geo Yearbook 2003 report on nitrogen and dead zones
- NASA on dead zones (Satellite pictures)
- Gulf of Mexico Dead Zone - multimedia
- Gulf of Mexico Hypoxia Watch, NOAA | https://en.wikipedia.org/wiki/Dead_zone_(ecology) |
4.09375 |
Shyness (also called diffidence) is the feeling of apprehension, lack of comfort, or awkwardness especially when a person is in proximity to other people. This commonly occurs in new situations or with unfamiliar people. Shyness can be a characteristic of people who have low self-esteem. Stronger forms of shyness are usually referred to as social anxiety or social phobia.
The primary defining characteristic of shyness is a largely ego-driven fear of what other people will think of a person's behavior. As a result, the person becomes scared of doing or saying what he or she wants to, out of fear of negative reactions: being laughed at or humiliated, criticized, or rejected. A shy person may simply opt to avoid social situations instead.
One important aspect of shyness is social skills development. Schools and parents may implicitly assume children are fully capable of effective social interaction. Social skills training is not given any priority (unlike reading and writing) and as a result, shy students are not given an opportunity to develop their ability to participate in class and interact with peers. Teachers can model social skills and ask questions in a less direct and intimidating manner in order to gently encourage shy students to speak up in class, and make friends with other children.
The initial cause of shyness varies. Scientists believe that they have located genetic data supporting the hypothesis that shyness is, at least, partially genetic. However, there is also evidence that suggests the environment in which a person is raised can also be responsible for his or her shyness. This includes child abuse, particularly emotional abuse such as ridicule. Shyness can originate after a person has experienced a physical anxiety reaction; at other times, shyness seems to develop first and then later causes physical symptoms of anxiety. Shyness differs from social anxiety, which is a broader, often depression-related psychological condition including the experience of fear, apprehension or worrying about being evaluated by others in social situations to the extent of inducing panic.
Shyness may come from genetic traits, the environment in which a person is raised and personal experiences. Shyness may merely be a personality trait or can occur at certain stages of development in children.
Genetics and heredity
Shyness is often seen as a hindrance to people and their development. The cause of shyness is often disputed, but research has found that fear is positively related to shyness, suggesting that fearful children are much more likely to become shy than less fearful children. Shyness can also be seen on a biological level as a result of an excess of cortisol. When cortisol is present in greater quantities it is known to suppress an individual's immune system, making them more susceptible to illness and disease. The genetics of shyness is a relatively small area of research that has been receiving an even smaller amount of attention, although papers on the biological bases of shyness date back to 1988. Some research has indicated that shyness and aggression are related through long and short forms of the gene DRD4, though considerably more research on this is needed. Further, it has been suggested that shyness and social phobia (the distinction between the two is becoming ever more blurred) are related to obsessive-compulsive disorder. As with other studies of behavioral genetics, the study of shyness is complicated by the number of genes involved in, and the confusion in defining, the phenotype. Naming the phenotype, and translating terms between genetics and psychology, also causes problems.
Several genetic links to shyness are current areas of research. One is the serotonin transporter promoter region polymorphism (5-HTTLPR), the long form of which has been shown to be modestly correlated with shyness in grade school children. Previous studies had shown a connection between this form of the gene and both obsessive-compulsive disorder and autism. Mouse models have also been used, to derive genes suitable for further study in humans; one such gene, the glutamic acid decarboxylase gene (which encodes an enzyme that functions in GABA synthesis), has so far been shown to have some association with behavioral inhibition.
Another gene, the dopamine D4 receptor gene (DRD4) exon III polymorphism, had been the subject of studies in both shyness and aggression, and is currently the subject of studies on the "novelty seeking" trait. A 1996 study of anxiety-related traits (shyness being one of these) remarked that, "Although twin studies have indicated that individual variation in measures of anxiety-related personality traits is 40-60% heritable, none of the relevant genes has yet been identified," and that "10 to 15 genes might be predicted to be involved" in the anxiety trait. Progress has been made since then, especially in identifying other potential genes involved in personality traits, but there has been little progress made towards confirming these relationships. The long version of the 5-HTT gene-linked polymorphic region (5-HTTLPR) is now postulated to be correlated with shyness, but in the 1996 study, the short version was shown to be related to anxiety-based traits.
As a symptom of mercury poisoning
Excessive shyness, embarrassment, self-consciousness and timidity, social phobia and lack of self-confidence are also components of erethism, a symptom complex that appears in cases of mercury poisoning. Mercury poisoning was common among hat makers in England in the 18th and 19th centuries, who used mercury to stabilize wool into felt fabric.
The prevalence of shyness in some children can be linked to day length during pregnancy, particularly during the midpoint of prenatal development. An analysis of longitudinal data from children living at specific latitudes in the United States and New Zealand revealed a significant relationship between hours of day length during the midpoint of pregnancy and the prevalence of shyness in children. "The odds of being classified as shy were 1.52 times greater for children exposed to shorter compared to longer daylengths during gestation." In their analysis, scientists assigned conception dates to the children relative to their known birth dates, which allowed them to obtain random samples from children who had a mid-gestation point during the longest hours of the year and the shortest hours of the year (June and December, depending on whether the cohorts were in the United States or New Zealand).
The longitudinal survey data included measurements of shyness on a five-point scale based on interviews with the families being surveyed, and children in the top 25th percentile of shyness scores were identified. The data revealed a significant co-variance between the children who presented as being consistently shy over a two-year period, and shorter day length during their mid-prenatal development period. "Taken together, these estimates indicate that about one out of five cases of extreme shyness in children can be associated with gestation during months of limited daylength."
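To make the reported statistic concrete: an odds ratio of 1.52 means that the odds of being classified as extremely shy among children whose mid-gestation fell in shorter daylengths were about one and a half times the odds among children gestated during longer daylengths. The minimal Python sketch below shows how such a ratio is computed from a 2x2 table; the counts are invented purely to reproduce the reported ratio and are not taken from the study itself.

    # Hypothetical 2x2 table; counts are illustrative only and chosen
    # so that the resulting odds ratio matches the reported 1.52.
    shy_short, not_shy_short = 38, 100   # gestation midpoint in shorter daylengths
    shy_long, not_shy_long = 25, 100     # gestation midpoint in longer daylengths

    odds_short = shy_short / not_shy_short   # 0.38
    odds_long = shy_long / not_shy_long      # 0.25

    odds_ratio = odds_short / odds_long      # 0.38 / 0.25 = 1.52
    print(f"odds ratio = {odds_ratio:.2f}")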
Low birth weights
In recent years correlations between birth weight and shyness have been studied. Findings suggest that those born at low birth weights are more likely to be shy, risk-averse and cautious compared to those born at normal birth weights. These results do not, however, imply a cause-and-effect relationship.
Shyness is most likely to occur during unfamiliar situations, though in severe cases it may hinder an individual in his or her most familiar situations and relationships as well. Shy people avoid the objects of their apprehension in order to keep from feeling uncomfortable and inept; thus, the situations remain unfamiliar and the shyness perpetuates itself. Shyness may fade with time; e.g., a child who is shy towards strangers may eventually lose this trait when older and become more socially adept. This often occurs by adolescence or young adulthood (generally around the age of 13). In some cases, though, it may become an integrated, lifelong character trait. Longitudinal data suggest that the three different personality types evident in infancy (easy, slow-to-warm-up, and difficult) tend to change as children mature. Extreme traits become less pronounced, and personalities evolve in predictable patterns over time. What has been shown to remain constant is the tendency to internalize or externalize problems. This relates to individuals with shy personalities because they tend to internalize their problems, or dwell on their problems internally instead of expressing their concerns, which leads to disorders like depression and anxiety.
Shyness can also be seen as an academic determinant. It has been determined that there is a negative relationship between shyness and classroom performance. As the shyness of an individual increased, classroom performance was seen to decrease.
Shyness may involve the discomfort of difficulty in knowing what to say in social situations, or may include crippling physical manifestations of uneasiness. Shyness usually involves a combination of both symptoms, and may be quite devastating for the sufferer, in many cases leading them to feel that they are boring, or to exhibit bizarre behavior in an attempt to create interest, alienating them further. Behavioral traits in social situations such as smiling, easily producing suitable conversational topics, assuming a relaxed posture and making good eye contact may not be second nature for a shy person. Such people may be able to affect these traits only with great difficulty, or may find them impossible to display.
Those who are shy are perceived more negatively, in cultures that value sociability, because of the way they act towards others. Shy individuals are often distant during conversations, which can result in others forming poor impressions of them. People who are not shy may be up-front, aggressive, or critical towards shy people in an attempt "to get them out of their shell." This can actually make a shy person feel worse, as it draws attention to them, making them more self-conscious and uncomfortable.
Shyness vs. introversion
The term shyness may be implemented as a lay blanket-term for a family of related and partially overlapping afflictions, including timidity (apprehension in meeting new people), bashfulness and diffidence (reluctance in asserting oneself), apprehension and anticipation (general fear of potential interaction), or intimidation (relating to the object of fear rather than one's low confidence). Apparent shyness, as perceived by others, may simply be the manifestation of reservation or introversion, character traits which cause an individual to voluntarily avoid excessive social contact or be terse in communication, but are not motivated or accompanied by discomfort, apprehension, or lack of confidence.
Rather, according to professor of psychology Bernardo J. Carducci, introverts choose to avoid social situations because they derive no reward from them or may find surplus sensory input overwhelming, whereas shy people may fear such situations. Research using the statistical techniques of factor analysis and correlation have found shyness overlaps mildly with both introversion and neuroticism (i.e., negative emotionality). Low societal acceptance of shyness or introversion may reinforce a shy or introverted individual's low self-confidence.
Both shyness and introversion can outwardly manifest with socially withdrawn behaviors, such as tendencies to avoid social situations, especially when they are unfamiliar. A variety of research suggests that shyness and introversion possess clearly distinct motivational forces and lead to uniquely different personal and peer reactions and therefore cannot be described as theoretically the same, with Susan Cain's Quiet (2012) further discerning introversion as involving being differently social (preferring one-on-one or small group interactions) rather than being anti-social altogether.
Research suggests that no unique physiological response, such as an increased heart beat, accompanies socially withdrawn behavior in familiar compared with unfamiliar social situations. But unsociability leads to decreased exposure to unfamiliar social situations and shyness causes a lack of response in such situations, suggesting that shyness and unsociability affect two different aspects of sociability and are distinct personality traits. In addition, different cultures perceive unsociability and shyness in different ways, leading to either positive or negative individual feelings of self-esteem. Collectivist cultures view shyness as a more positive trait related to compliance with group ideals and self-control, while perceiving chosen isolation (introverted behavior) negatively as a threat to group harmony; and because collectivist society accepts shyness and rejects unsociability, shy individuals develop higher self-esteem than introverted individuals. On the other hand, individualistic cultures perceive shyness as a weakness and a character flaw, while unsociable personality traits (preference to spend time alone) are accepted because they uphold the value of autonomy; accordingly, shy individuals tend to develop low self-esteem in Western cultures while unsociable individuals develop high self-esteem.
An extreme case of shyness is identified as a psychiatric illness, which made its debut as social phobia in DSM-III in 1980, but was then described as rare. By 1994, however, when DSM-IV was published, it was given a second, alternative name in parentheses (social anxiety disorder) and was now said to be relatively common, affecting between 3 and 13% of the population at some point during their lifetime. Studies examining shy adolescents and university students found that between 12 and 18% of shy individuals meet criteria for social anxiety disorder.
Shyness affects people mildly in unfamiliar social situations where one feels anxiety about interacting with new people. Social anxiety disorder, on the other hand, is a strong irrational fear of interacting with people, or being in situations which may involve public scrutiny, because one feels overly concerned about being criticized if one embarrasses oneself. Physical symptoms of social phobia can include shortness of breath, trembling, increased heart rate, and sweating; in some cases, these symptoms are intense enough and numerous enough to constitute a panic attack. Shyness, on the other hand, may incorporate many of these symptoms, but at a lower intensity, infrequently, and does not interfere tremendously with normal living.
Social inhibition vs. behavioral inhibition
Those considered shy are also said to be socially inhibited. Social inhibition is the conscious or unconscious constraint by a person of behavior of a social nature. In other words, social inhibition is holding back for social reasons. There are different levels of social inhibition, from mild to severe. Social inhibition can be beneficial, as when it prevents one from harming another, and detrimental, as when it causes one to refrain from participating in class discussions.
Behavioral inhibition is a temperament or personality style that predisposes a person to become fearful, distressed and withdrawn in novel situations. This personality style is associated with the development of anxiety disorders in adulthood, particularly social anxiety disorder.
Misconceptions and negative aspects
Many misconceptions and stereotypes about shy individuals exist in Western culture, and negative peer reactions to "shy" behavior abound. This takes place because individualistic cultures place less value on quietness and meekness in social situations, and more often reward outgoing behaviors. Some misconceptions include viewing introversion and social phobia as synonymous with shyness, and believing that shy people are less intelligent.
No correlation (positive or negative) exists between intelligence and shyness. Research indicates that shy children have a harder time expressing their knowledge in social situations (which most modern curricula utilize) and because they do not engage actively in discussions, teachers view them as less intelligent. In line with social learning theory, an unwillingness to engage with classmates and teachers makes it more difficult for shy students to learn. Test scores, however, indicate that shyness is unrelated to actual academic knowledge, and therefore only academic engagement. Depending on the level of a teacher's own shyness, more indirect (vs. socially oriented) strategies are used with shy individuals to assess knowledge in the classroom, and accommodations are made. Observed peer evaluations of shy people during initial meeting and social interactions thereafter found that peers evaluate shy individuals as less intelligent during the first encounter. During subsequent interactions, however, peers perceived shy individuals' intelligence more positively.
Thomas Benton claims that because shy people "have a tendency toward self-criticism, they are often high achievers, and not just in solitary activities like research and writing. Perhaps even more than the drive toward independent achievement, shy people long to make connections to others often through altruistic behavior." Susan Cain describes the benefits that shy people bring to society that US cultural norms devalue. Without characteristics that shy people bring to social interactions, such as sensitivity to the emotions of others, contemplation of ideas, and valuable listening skills, there would be no balance to society. In earlier generations, such as the 1950s, society perceived shyness as a more socially attractive trait, especially in women, indicating that views on shyness vary by culture.
Sociologist Susie Scott challenged the interpretation and treatment of shyness as being pathological. "By treating shyness as an individual pathology, ... we forget that this is also a socially oriented state of mind that is socially produced and managed." She explores the idea that "shyness is a form of deviance: a problem for society as much as for the individual", and concludes that, to some extent, "we are all impostors, faking our way through social life". One of her interview subjects (self-defined as shy) puts this point of view even more strongly: "Sometimes I want to take my cue from the militant disabled lobbyists and say, 'hey, it's not MY problem, it's society's'. I want to be proud to be shy: on the whole, shys are probably more sensitive, and nicer people, than 'normals'. I shouldn't have to change: society should adapt to meet my needs."
Different cultural views
In cultures that value outspokenness and overt confidence, shyness can be perceived as weakness. To an unsympathetic observer, a shy individual may be mistaken as cold, distant, arrogant or aloof, which can be frustrating for the shy individual. However, in other cultures, shy people may be perceived as being thoughtful, intelligent, as being good listeners, and as being more likely to think before they speak.
In cultures that value autonomy, shyness is often analyzed in the context of being a social dysfunction, and is frequently contemplated as a personality disorder or mental health issue. Some researchers are beginning to study comparisons between individualistic and collectivistic cultures, to examine the role that shyness might play in matters of social etiquette and achieving group-oriented goals. "Shyness is one of the emotions that may serve as behavioral regulators of social relationships in collectivistic cultures. For example, social shyness is evaluated more positively in a collectivistic society, but negatively evaluated in an individualistic society."
In a cross-cultural study of Chinese and Canadian school children, researchers sought to measure several variables related to social reputation and peer relationships, including "shyness-sensitivity." Using a peer nomination questionnaire, students evaluated their fellow students with positive and negative playmate nominations. "Shyness-sensitivity was significantly and negatively correlated with measures of peer acceptance in the Canadian sample. Inconsistent with Western results, it was found that items describing shyness-sensitivity were separated from items assessing isolation in the factor structure for the Chinese sample. Shyness-sensitivity was positively associated with sociability-leadership and with peer acceptance in the Chinese sample."
Perceptions of Western cultures
In some Western cultures shyness-inhibition plays an important role in psychological and social adjustment. It has been found that shyness-inhibition is associated with a variety of maladaptive behaviors. Being shy or inhibited in Western cultures can result in rejection by peers, isolation and being viewed as socially incompetent by adults. However, research suggests that if social withdrawal is seen as a personal choice rather than the result of shyness, there are fewer negative connotations.
British writer Arthur C. Benson felt shyness is not mere self-consciousness but a primitive suspicion of strangers, the primeval belief that their motives are predatory, making shyness a sinister quality which needs to be uprooted. He believed the remedy is for the shy to frequent society, gaining courage from familiarity. He also claimed that too many shy adults take refuge in a critical attitude, engaging in brutal onslaughts on inoffensive persons. He felt a better way is for the shy to be nice: to wonder what others need and like, to take an interest in what they do or talk about, to ask friendly questions, and to show sympathy.
For Charles Darwin shyness was an ‘odd state of mind’ appearing to offer no benefit to our species, and since the 1970s the modern tendency in psychology has been to see shyness as pathology. However, evolutionary survival advantages of careful temperaments over adventurous temperaments in dangerous environments have also been recognized.
Perceptions of Eastern cultures
In Eastern cultures shyness-inhibition in school-aged children is seen as positive, and those who exhibit these traits are viewed well by peers and are accepted. They tend to be seen as competent by their teachers, to perform well in school and to show well-being. Shy individuals are also more likely to attain leadership status in school. Being shy or inhibited does not correlate with loneliness or depression as it does in the West. In Eastern cultures being shy and inhibited is a sign of politeness, respectfulness, and thoughtfulness.
Examples of cultural views on shyness and inhibition
In Hispanic cultures shyness and inhibition with authority figures is common. For instance, Hispanic students may feel shy about being praised by teachers in front of others, because in these cultures students are rewarded in private with a touch, a smile, or a spoken word of praise. Hispanic students may seem shy when they are not. It is considered rude to excel over peers and siblings; therefore it is common for Hispanic students to be reserved in classroom settings. Adults also show reluctance to share personal matters about themselves with authority figures such as nurses and doctors.
Cultures in which the community is closed and based on agriculture (Kenya, India, etc.) experience lower social engagement than more open communities (United States, Okinawa, etc.) where interaction with peers is encouraged. Children in Mayan, Indian, Mexican, and Kenyan cultures are less expressive in social styles during interactions and they spend little time engaged in socio-dramatic activities. They are also less assertive in social situations. Self-expression and assertiveness in social interactions are related to shyness and inhibition in that when one is shy or inhibited he or she exhibits little or no expressive tendencies. Assertiveness is affected in the same way: being shy and inhibited lessens one's chances of being assertive because of a lack of confidence.
In Italian culture emotional expressiveness during interpersonal interaction is encouraged. From a young age children engage in debates or discussions that encourage and strengthen social assertiveness. Independence and social competence during childhood are also promoted. Being inhibited is looked down upon, and those who show this characteristic are viewed negatively by their parents and peers. As in other cultures where shyness and inhibition are viewed negatively, peers of shy and inhibited Italian children reject the socially fearful, cautious and withdrawn. These withdrawn and socially fearful children express loneliness and believe themselves to be lacking the social skills needed in social interactions.
Intervention and treatment
Psychological methods and pharmaceutical drugs are commonly used to treat shyness in individuals who feel crippled because of low self-esteem and psychological symptoms, such as depression or loneliness. According to research, early intervention methods that expose shy children to social interactions involving teamwork, especially team sports, decrease their anxiety in social interactions and increase their all-around self-confidence later on. Implementing such tactics could prove to be an important step in combating the psychological effects of shyness that make living a normal life difficult for anxious individuals.
- People skills
- Social anxiety
- Social phobia
- Selective mutism
- Avoidant personality disorder
- Highly sensitive person
- Medicalization of behaviors as illness
- "Shyness and social phobia". Royal College of Psychiatrists. 2012. Retrieved 17 January 2014.
- Coplan, R. J.; Arbeau, K. A. (2008). "The Stresses of a "Brave New World": Shyness and School Adjustment in Kindergarten". Journal of Research in Childhood Education 22 (4): 377. doi:10.1080/02568540809594634.
- Eggum, Natalie; Eisenberg, Nancy; Spinrad, Tracy; Reiser, Mark; Gaertner, Bridget; Sallquist, Julie; Smith, Cynthia (2009). "Development of Shyness: Relations with Children's Fearfulness, Sex, and Maternal Behavior". Infancy 14 (3): 325–345. doi:10.1080/15250000902839971. PMC 2791465. PMID 20011459.
- Chung, Joanna Y.Y.; Evans, Mary Ann (2000). "Shyness and symptoms of illness in young children". Canadian Journal of Behavioural Science/Revue canadienne des sciences du comportement 32: 49. doi:10.1037/h0087100.
- Arbelle, Shoshana; Benjamin, Jonathan; Golin, Moshe; Kremer, Ilana; Belmaker, Robert H.; Ebstein, Richard P. (April 2003). "Relation of shyness in grade school children to the genotype for the long form of the serotonin transporter promoter region polymorphism". American Journal of Psychiatry 160 (4): 671–676. doi:10.1176/appi.ajp.160.4.671. PMID 12668354.
- Brune, CW; Kim, SJ; Salt, J; Leventhal, BL; Lord, C; Cook Jr, EH (2006). "5-HTTLPR Genotype-Specific Phenotype in Children and Adolescents with Autism". The American Journal of Psychiatry 163 (12): 2148–56. doi:10.1176/appi.ajp.163.12.2148. PMID 17151167.
- Smoller, Jordan W.; Rosenbaum, Jerold F.; Biederman, Joseph; Susswein, Lisa S.; Kennedy, John; Kagan, Jerome; Snidman, Nancy; Laird, Nan; Tsuang, Ming T.; Faraone, Stephen V.; Schwarz, Alysandra; Slaugenhaupt, Susan A. (2001). "Genetic association analysis of behavioral inhibition using candidate loci from mouse models". American Journal of Medical Genetics 105 (3): 226–235. doi:10.1002/ajmg.1328. PMID 11353440.
- Lesch et al. 1996.
- WHO (1976) Environmental Health Criteria 1: Mercury, Geneva, World Health Organization, 131 pp.
- WHO. Inorganic mercury. Environmental Health Criteria 118. World Health Organization, Geneva, 1991.
- Gortmaker, SL. et al. Daylength during pregnancy and shyness in children: results from northern and southern hemispheres. 1997.
- U.S, News Staff (9 July 2008). "Do Underweight Newborns Make for Shy Adult". Retrieved 14 March 2013.
- Janson, H.; Matheisen, K.S. (2008). "Temperament profiles from infancy to middle childhood: Developmentand associations with behavior problems". Developmental Psychology 44 (5): 1314–1328. doi:10.1037/a0012713.
- Coplan, R. J.; Rose-Krasnor, L.; Weeks, M.; Kingsbury, A.; Kingsbury, M.; Bullock, A. (2012). "Alone is a crowd: Social motivations, social withdrawal, and socioemotional functioning in later childhood". Developmental Psychology. doi:10.1037/a0028861.
- Chisti, Saeed-ul-Hasan; Anwar, Saeed; Babar Khan, Shahinshah (2011). "Relationship between shyness and classroom performance at graduation level in Pakistan". Interdisciplinary Journal of Contemporary Research In Business 3 (4): 532–538.
- Paulhus, D.L.; Morgan, K.L. (1997). "Perceptions of intelligence in leaderless groups: The dynamic effects of shyness and acquaintance". Journal of Personality and Social Psychology 72 (3): 581–591. doi:10.1037/0022-3514.72.3.581. PMID 9120785.
- "Shy | Define Shy at Dictionary.com". Dictionary.reference.com. Retrieved 2012-08-13.
- Whitten, Meredith (2001-08-21). "All About Shyness". Psych Central. Retrieved 2012-08-13.
- Crozier, W. R. (1979). "Shyness as a dimension of personality". British Journal of Social and Clinical Psychology 18: 121. doi:10.1111/j.2044-8260.1979.tb00314.x.
- Heiser, N. A.; Turner, S. M.; Beidel, D. C. (2003). "Shyness: Relationship to social phobia and other psychiatric disorders". Behaviour research and therapy 41 (2): 209–21. PMID 12547381.
- Shiner, R.; Caspi, A. (2003). "Personality differences in childhood and adolescence: Measurement, development, and consequences". Journal of Child Psychology and Psychiatry 44: 2–32. doi:10.1111/1469-7610.00101. PMID 12553411.
- Susan Cain's Quiet (2012)
- Asendorpf, J.B.; Meier, G.H. (1993). "Personality effects on children's speech in everyday life: Sociability-mediated exposure and shyness-mediated reactivity to social situations". Journal of Personality and Social Psychology 64 (6): 1072–1083.
- Chen, X.; Wang, L.; Cao, R. (2011). "Shyness-sensitivity and unsociability in rural Chinese children: Relations with social, school, and psychological adjustment". Child Development 82 (5): 1531–1543. doi:10.1111/j.1467-8624.2011.01616.x.
- Cornish, Audie (interviewer) (30 January 2012). "Quiet, Please: Unleashing 'The Power Of Introverts'". NPR. Archived from the original on 3 March 2012.
- Lane, C. Shyness: How Normal Behavior Became a Sickness. 2007.
- American Psychiatric Association. (2000). Anxiety disorders. In Diagnostic and statistical manual of mental disorders (4th ed., text rev., pp. 450–456). Washington, D.C.: American Psychiatric Association.
- R.E. Stone. Is the American Psychiatric Association in Bed with Big Pharma? 2011.
- Chavira, D. A.; Stein, M. B.; Malcarne, V. L. (2002). "Scrutinizing the relationship between shyness and social phobia". Journal of anxiety disorders 16 (6): 585–98. PMID 12405519.
- Burstein, M; Ameli-Grillon, L; Merikangas, K. R. (2011). "Shyness versus social phobia in US youth". PEDIATRICS 128 (5): 917–25. doi:10.1542/peds.2011-1434. PMC 3208958. PMID 22007009.
- Heiser, N. A.; Turner, S. M.; Beidel, D. C. (2003). "Shyness: Relationship to social phobia and other psychiatric disorders". Behaviour research and therapy 41 (2): 209–21. PMID 12547381.
- "Behavioral Inhibition as a childhood predictor of social anxiety, part 1". Andrew Kukes foundation for social anxiety. Retrieved 26 March 2013.
- Ordoñez-Ortega, A.; Espinosa-Fernandez, L.; Garcia-Lopez, LJ; Muela-Martinez, JA (2013). "Behavioral Inhibition and Relationship with Childhood Anxiety Disorders/Inhibición Conductual y su Relación con los Trastornos de Ansiedad Infantil". Terapia Psicologica 31: 355–362. doi:10.4067/s0718-48082013000300010.
- Hughes, K.; Coplan, R.J. (2010). "Exploring processes linking shyness and academic achievement in childhood". School Psychology Quarterly 25 (4): 213–222.
- Coplan, J.R.; Hughes, K.; Bosacki, S.; Rose-Krasnor, L. (2011). "Is silence golden? Elementary school teachers' strategies and beliefs regarding hypothetical shy/quiet and exuberant/talkative children". Journal of Educational Psychology 103 (4): 939–951. doi:10.1037/a0024551.
- "All About Shyness". Psych Central.
- Thomas H. Benton (24 May 2004). "Shyness and Academe". The Chronicle of Higher Education. Retrieved 20 October 2013.
- Cain, Susan (25 June 2011). "Shyness: Evolutionary Tactic?". The New York Times. Archived from the original on 16 August 2013.
- Scott 2007, p. 2.
- Scott 2007, pp. 165, 174.
- Scott 2007, p. 164.
- Frijda, N.H., & Mesquita, B. The social roles and functions of emotions. 1994.
- Chen, X., Rubin, K., Sun, Y. Social Reputation and Peer Relationships in Chinese and Canadian Children: A Cross-Cultural Study. 1992.
- Kenneth H. Rubin and Robert J. Coplan, ed. (2010). "10". The Development of Shyness and Social Withdrawal. New York, NY: The Guilford Press. pp. 213–227. ISBN 978-1-60623-522-5. Retrieved 17 January 2014.
- p. 162, Benson, Arthur C. 1908. Arthur C. Benson At Large Number XI Shyness. Putnam’s Monthly and The Reader, A Magazine of Literature, Art and Life. Volume IV. New Rochelle, New York: G.P. Putnam’s Sons, The Knickerbocker Press.
- pp. 162-165, Benson, Arthur C. 1908. Arthur C. Benson At Large Number XI Shyness. Putnam’s Monthly and The Reader, A Magazine of Literature, Art and Life. Volume IV. New Rochelle, New York: G.P. Putnam’s Sons, The Knickerbocker Press.
- Moran, Joe (17 July 2013). "The crystalline wall". Aeon. Archived from the original on 16 August 2013.
- "How the students' culture effects their behavior". Teaching from a Hispanic perspective a handbook for non-Hispanic adult educators. Retrieved 2 March 2013.
- Rubin, Kenneth; Sheryl A. Hemphill; Xinyin Chen; Paul Hasting (May 2006). "A cross-cultural study of behavioral inhibition in toddlers: East-West-North-South" (PDF). International Journal of Behavioral Development 30 (3): 119–125. doi:10.1177/0165025406066723. Retrieved 22 February 2013.
- Findlay, L.C.; Coplan, R.J. (2008). "Come out and play: Shyness in childhood and the benefits of organized sports participation". Canadian Journal of Behavioural Science 40 (3): 153–161. doi:10.1037/0008-400x.40.3.153.
- Crozier, W. R. (2001). Understanding Shyness: psychological perspectives. Basingstoke: Palgrave. ISBN 0-333-77371-3.
- Keillor, Garrison. "Shy rights: why not pretty soon?". Happy to be Here. London: Faber. pp. 209–216. ISBN 0571146961.
- Kluger, Z.; Siegfried, Z; Ebstein, R. P. (2002). "A meta-analysis of the association between DRD4 polymorphism and novelty seeking". Molecular Psychiatry 7 (7): 712–717. doi:10.1038/sj.mp.4001082. PMID 12192615.
- Lane, Christopher (2008). Shyness: How Normal Behavior Became a Sickness. New Haven: Yale University Press. ISBN 9780300124460.
- Lesch, Klaus-Peter; Bengal, Dietmar; Heils, Armin; Sabol, Sue Z.; Greenberg, Benjamin D.; Petri, Susanne; Benjamin, Jonathan; Muller, Clemens R.; Hamer, Dean H.; Murphy, Dennis L. (1996). "Association of anxiety-related traits with a polymorphism in the serotonin transporter gene regulatory region". Science 274 (5292): 1527–1531. Bibcode:1996Sci...274.1527L. doi:10.1126/science.274.5292.1527. PMID 8929413.
- Miller, Rowland S.; Perlman, Daniel; Brehn, Sharon S. (2007). Intimate Relationships (4th ed.). Boston: McGraw-Hill. p. 430. ISBN 9780072938012.
- Rubin, Kenneth H. (2003). The Friendship Factor. New York: Penguin Paperbacks. ISBN 0142001899.
- Rubin, Kenneth H.; Coplan, Robert J. (2010). The Development of Shyness and Social Withdrawal. New York: Guilford. ISBN 1606235222.
- Scott, Susie (2007). Shyness and Society: the illusion of competence. Basingstoke: Palgrave Macmillan. ISBN 9781403996039.
- Zimbardo, Philip G. (1977). Shyness: what it is, what to do about it. Reading, Mass.: Addison-Wesley. ISBN 9780201550184.
- Media related to Shyness at Wikimedia Commons
- Lynn Henderson and Philip Zimbardo: "Shyness". Entry in Encyclopedia of Mental Health, Academic Press, San Diego, CA (in press)
- Liebowitz Social Anxiety Scale (LSAS-SR)
- SHY United - Information and support site with articles and community forums / chat room for shy people experiencing shyness and social anxiety
- Shyness and Social Phobia - information from mental health charity The Royal College of Psychiatrists
- Social Anxiety Anonymous / Social Phobics Anonymous - International network of 12 Step support groups for people suffering from shyness problems and/or social anxiety disorder/social phobia | https://en.wikipedia.org/wiki/Shyness |
4.15625 | October 19, 2012
Causes and Effects of Gender Inequality
Throughout history, countless acts of gender inequality can be identified, and these discriminatory episodes can be traced back to several causes. The inequity rests largely on a belief that men are superior to women; because of this idea, women have spent generations suffering under their counterparts. Another common expectation is that men tend to be more assertive and absolute because of their biological hormones or instinctive intellect. A further major origin is sexual discrimination; even in the world today, many women are viewed by men as mere sex objects rather than real human beings with standards and morals, and because of such carnal ideas, some men can also suffer from gender stereotyping. However it arises, the main causes of gender inequality can be traced back to a belief in male dominance, assumptions about the biological hormones and intelligence of men and women, and sexual themes.
The presumption of male dominance has existed for a long time; in ancient Greece, men ruled the cities while the women had to support the home. Medieval society, much like Greece, was completely dominated by men; according to Sally Smith’s “Women and Power in the Late Medieval English Village: a reconsideration”, “Women carried out the majority of tasks that took place in the medieval house, such as cooking, cleaning and activities associated with child rearing.” During this era, men set a list of laws that prohibited women from marrying without their parents’ consent, owning businesses, owning property unless they were widows, and taking part in politics. Men, on the other hand, had all of these privileges. Women have only slowly been able to obtain their rights. In the first half of the twentieth century, there were a few instances in which women assumed the same roles as men; for example, during World War I, the men of the world went off to war, leaving their jobs unoccupied. The only source of labor that countries could rely on was women; therefore, women were granted jobs that had been presumed to be meant for males. Although they have been given more rights and equality, women still lack fairness in areas such as education, domestic abuse, crime, and social standing. Cassandra Clifford states in her article “Are Girls still marginalized? Discrimination and Gender Inequality in Today’s Society”, “Woman and girls are abused by their husbands and fathers, young girls are exploited by sex tourism and trafficking, girls in many countries are forced into arranged marriages at early ages. Twice as many women are illiterate as men, due to the large gap in education, and girls are still less likely to get jobs and excel in the work place than boys.” She describes some of the issues that women face today around the world. These issues are what keep society from coming together to form a better world. Today, women have more rights than ever before, but the belief in male dominance has resulted in a never-ending pattern of discrimination toward women. This leads younger girls to the predetermined belief that they must be inferior. Clifford states in her article, “Children look first to their own parents for examples and inspiration, therefore when a child see their mother living a life of inequality, the cycle often continues as girls feel there is no alternative for themselves.” When younger girls see their mother or any woman submitting to this standard, they feel they must do the same. One effect on men is that they have to live up to the standard of the superior gender; if they do not meet the general criteria, their confidence may be destroyed or they may be ridiculed by other males. There are many times when a man feels inferior to a woman who has a more masculine job or a better salary. For example, a man who is a nurse may feel inferior to a woman who is an engineer; society has developed a general routine that keeps the man on... | http://www.studymode.com/essays/Causes-And-Effects-Of-Gender-Inequality-1210229.html
4.0625 | 2012 marked a new record low for the extent of Arctic sea ice, but apparently that's not a problem. We can just refreeze it!
Reducing carbon dioxide emissions is the key to a lasting solution to 'human-enhanced' climate change; however, since governments and industries aren't doing a very good job of meeting reduction goals, strategies to reduce the worst effects of climate change may be needed. Dr. David Keith, a Canadian physicist, climate scientist and public policy expert who teaches at Harvard University, has done extensive research into the field of Solar Radiation Management, which involves different ways of reducing the amount of solar radiation that reaches the Earth's surface.
The concept behind solar radiation management is fairly basic: introduce a substance into the environment that will reflect more sunlight back into space, and the resulting reduction in the amount of sunlight that reaches the surface will cause an immediate temperature drop in the affected region. One method of doing this involves spraying reflective aerosols — tiny drops of liquid about the same size as those that make up clouds, such as sulphur dioxide or titanium dioxide — into the stable stratosphere, where they can persist for years. Similar aerosols injected into any level of the troposphere (the lowest level of the atmosphere, where all weather happens) would quickly get caught up in the turbulent weather that we see every day and would not last long enough to help reduce incoming sunlight.
Would this really work? Studying the effects of volcanic eruptions (which is where they got the idea from in the first place) and using computer model simulations have given scientists plenty of evidence that it will.
Some approaches to solar radiation management have tried to deal with the situation on a global scale, with talk of releasing a million tons of sulphur dioxide into the stratosphere to lower the temperature around the world. However, these ideas have come under criticism, because of the potential for unforeseen consequences. For example, it has been suggested that introducing sulphur dioxide into the stratosphere could destroy the Earth's protective ozone layer, exposing us to dangerous ultraviolet radiation from the Sun.
Dr. Keith and his colleagues suggest that much better results could be achieved, with a minimum of risk, by only using solar radiation management on a regional scale. Therefore, rather than spread the reflective substance across the entire stratosphere, we would only use it over the area that needed it. They used a selected climate model to simulate these regional changes, compared to a uniform global change, and according to CalTech News, "it took five times less solar reduction than in the uniform reflectance models to recover the Arctic sea ice to the extent typical of pre-Industrial years."
Injecting just five metric tons of these reflective aerosols into the Arctic stratosphere could lower solar radiation levels over the Arctic Ocean enough to refreeze it and allow it to remain frozen. Before you get too alarmed by that five metric tons, the latest official figures from the US EPA show that in 1999, industry released over 17 million metric tons of sulphur dioxide into the troposphere.
There are down-sides to the plan, of course.
Likely no surprise to anyone, it is going to cost money. Compared to how much the effects of climate change are projected to cost us, or what the costs of reducing emissions will be, though, it is a drop in the bucket. Dr. Keith, along with Justin McClellan, from the Aurora Flight Science Corporation in Cambridge, Massachusetts, and Jay Apt, from Carnegie Mellon University's Tepper School of Business and Department of Engineering and Public Policy, published a cost-analysis report in the journal Environmental Research Letters, in August of this year.
Their report states that the technology to deliver these materials to the right altitude and location already exist, and by modifying existing aircraft to act as the delivery method, the entire effort of running the program would cost between $5-8 billion per year (depending on the method of delivery), with the majority of that cost going towards buying or producing the sulphur dioxide itself. According to the same report (referencing from the 2007 IPCC report) "the costs of climate damages or of emission mitigation are commonly estimated to be 0.2—2.5% of 2030 global GDP... equivalent to roughly $200B to $2000B per year. Our estimates of the cost of delivering mass to the stratosphere — likely to be the most substantial part of the cost of SRM deployment — are less than 1% of this figure."
[ Related: Arctic's record melt worries scientists ]
So, we can do this, and compared to the alternatives, it is fairly cost effective. However, is this something we should be doing?
Weighing the effects of having sea ice against not having it, keeping the sea ice is clearly the better option. Without it, global temperatures will rise even faster than they are now. When the sea ice is there, it reflects solar radiation back into space and limits the warming of the planet. Take that sea ice away and the darker water absorbs a large percentage of the incoming solar radiation. This will not only contribute to more melting of sea ice, but will also produce a generally warmer atmosphere, and as the water warms it will expand, causing further rises in sea level.
There is the risk of destroying the stratospheric ozone layer, especially if these reflective aerosols get into the Antarctic stratospheric clouds that accumulate during the winter, which are the primary cause of the Antarctic ozone hole. These chemicals, in higher concentrations, would enhance the destruction of ozone and make the ozone hole even larger. However, using a regional scale approach would allow us to limit the concentrations of the aerosols, and thus limit the damage they cause.
There's one other problem with this idea, though — a general tendency towards quick fixes.
Peter Mooney, with Ottawa's Etc Group, which monitors the effects of technology and corporate strategies on society and the environment, told The National Post, "It's naive to think that once [solar radiation management] becomes a political option that governments won't just take it on and interpret it as they wish. They will always find scientists who will give them the spin that they want."
[ Related: 5 ways rapid warming is changing the Arctic ]
"[We shouldn't be] opening up the back door for politicians to creep out of, claiming that, 'Don't worry folks. We don't need to do anything because we have technological fixes that we can deploy on short notice.'" | https://ca.news.yahoo.com/blogs/geekquinox/record-loss-arctic-sea-ice-no-problem-just-155430301.html |
4 | Extracellular Fluid—The “Internal Environment” About 60 per cent of the adult human body is fluid, mainly a water solution of ions and other substances. Although most of this fluid is inside the cells and is called intracellular fluid, about one third is in the spaces outside the cells and is called extracellular fluid. This extracellular fluid is in constant motion throughout the body. It is transported rapidly in the circulating blood and then mixed between the blood and the tissue fluids by diffusion through the capillary walls. In the extracellular fluid are the ions and nutrients needed by the cells to maintain cell life. Thus, all cells live in essentially the same environment—the extracellular fluid. For this reason, the extracellular fluid is also called the internal environment of the body, or the milieu intérieur, a term introduced more than 100 years ago by the great 19th-century French physiologist Claude Bernard. Cells are capable of living, growing, and performing their special functions as long as the proper concentrations of oxygen, glucose, different ions, amino acids, fatty substances, and other constituents are available in this internal environment. Differences Between Extracellular and Intracellular Fluids. The extracellular fluid contains large amounts of sodium, chloride, and bicarbonate ions plus nutrients for the cells, such as oxygen, glucose, fatty acids, and amino acids. It also contains carbon dioxide that is being transported from the cells to the lungs to be excreted, plus other cellular waste products that are being transported to the kidneys for excretion. The intracellular fluid differs significantly from the extracellular fluid; specifically, it contains large amounts of potassium, magnesium, and phosphate ions instead of the sodium and chloride ions found in the extracellular fluid. Special mechanisms for transporting ions through the cell membranes maintain the ion concentration differences between the extracellular and intracellular fluids. Extracellular Fluid Transport and Mixing System—The Blood Circulatory System Extracellular fluid is transported through all parts of the body in two stages. The first stage is movement of blood through the body in the blood vessels, and the second is movement of fluid between the blood capillaries and the intercellular spaces between the tissue cells. Figure 1–1 shows the overall circulation of blood. All the blood in the circulation traverses the entire circulatory circuit an average of once each minute when the body is at rest and as many as six times each minute when a person is extremely active. As blood passes through the blood capillaries, continual exchange of extracellular fluid also occurs between the plasma portion of the blood and the interstitial fluid that fills the intercellular spaces. This process is shown in Figure 1–2. The walls of the capillaries are permeable to most molecules in the plasma of the blood, with the exception of the large plasma protein molecules. Therefore, large amounts of fluid and its dissolved constituents diffuse back and forth between the blood and the tissue spaces, as shown by the arrows. This process of diffusion is caused by kinetic motion of the molecules in both the plasma and the interstitial fluid. That is, the fluid and dissolved molecules are continually moving and bouncing in all directions within the plasma and the fluid in the intercellular spaces, and also through the capillary pores. 
Few cells are located more than 50 micrometers from a capillary, which ensures diffusion of almost any substance from the capillary to the cell within a few seconds.Thus, the extracellular fluid everywhere in the body—both that of the plasma and that of the interstitial fluid—is continually being mixed, thereby maintaining almost complete homogeneity of the extracellular fluid throughout the body. Origin of Nutrients in the Extracellular Fluid Respiratory System. Figure 1–1 shows that each time the blood passes through the body, it also flows through the lungs. The blood picks up oxygen in the alveoli, thus acquiring the oxygen needed by the cells. The membrane between the alveoli and the lumen of the pulmonary capillaries, the alveolar membrane, is only 0.4 to 2.0 micrometers thick, and oxygen diffuses by molecular motion through the pores of this membrane into the blood in the same manner that water and ions diffuse through walls of the tissue capillaries. Gastrointestinal Tract. A large portion of the blood pumped by the heart also passes through the walls of the gastrointestinal tract. Here different dissolved nutrients, including carbohydrates, fatty acids, and amino acids, are absorbed from the ingested food into the extracellular fluid of the blood. Liver and Other Organs That Perform Primarily Metabolic Functions. Not all substances absorbed from the gastrointestinal tract can be used in their absorbed form by the cells. The liver changes the chemical compositions of many of these substances to more usable forms, and other tissues of the body—fat cells, gastrointestinal mucosa, kidneys, and endocrine glands—help modify the absorbed substances or store them until they are needed. Musculoskeletal System. Sometimes the question is asked, How does the musculoskeletal system fit into the homeostatic functions of the body? The answer is obvious and simple: Were it not for the muscles, the body could not move to the appropriate place at the appropriate time to obtain the foods required for nutrition. The musculoskeletal system also provides motility for protection against adverse surroundings, without which the entire body, along with its homeostatic mechanisms, could be destroyed instantaneously. Removal of Metabolic End Products Removal of Carbon Dioxide by the Lungs. At the same time that blood picks up oxygen in the lungs, carbon dioxide is released from the blood into the lung alveoli; the respiratory movement of air into and out of the lungs carries the carbon dioxide to the atmosphere. Carbon dioxide is the most abundant of all the end products of metabolism. Kidneys. Passage of the blood through the kidneys removes from the plasma most of the other substances besides carbon dioxide that are not needed by the cells. These substances include different end products of cellular metabolism, such as urea and uric acid; they also include excesses of ions and water from the food that might have accumulated in the extracellular fluid. The kidneys perform their function by first filtering large quantities of plasma through the glomeruli into the tubules and then reabsorbing into the blood those substances needed by the body, such as glucose, amino acids, appropriate amounts of water, and many of the ions. Most of the other substances that are not needed by the body, especially the metabolic end products such as urea, are reabsorbed poorly and pass through the renal tubules into the urine.
The concept of homeostasis was first articulated by the French scientist Claude Bernard (1813-1878) in his studies of the maintenance of stability in the "milieu interior." He said, "All the vital mechanisms, varied as they are, have only one object, that of preserving constant the conditions of life in the internal environment" (from Leçons sur les Phénonèmes de la Vie Commune aux Animaux et aux Végétaux , 1879). The term itself was coined by American physiologist Walter Cannon, author of The Wisdom of the Body (1932). The word comes from the Greek homoios (same, like, resembling) and stasis (to stand, posture). a schematic of homeostasis. Changes in the environment are transduced to cause a change in the level of a regulated substance. This change is detected through measurement and comparison with a coded set-point value. Disparities between the measured value and the set-point value regulate a response mechanism that directly or indirectly influences effector systems at the exterior–interior interface. Homeostatic systems often require fuel, other support mechanisms and interact with other systems. What is Homeostasis? Homeostasis in a general sense refers to stability, balance or equilibrium. Maintaining a stable internal environment requires constant monitoring and adjustments as conditions change. This adjusting of physiological systems within the body is called homeostatic regulation. Homeostatic regulation involves three parts or mechanisms: 1) the receptor , 2) the control center and 3) the effector . The receptor receives information that something in the environment is changing. The control center or integration center receives and processes information from the receptor . And lastly, the effector responds to the commands of the control center by either opposing or enhancing the stimulus. A metaphor to help us understand this process is the operation of a thermostat. The thermostat monitors and controls room temperature. The thermostat is set at a certain temperature that is considered ideal, the set point . The function of the thermostat is to keep the temperature in the room within a few degrees of the set point . If the room is colder than the set point , the thermostat receives information from the thermometer (the receptor ) that it is too cold. The effectors within the thermostat then will turn on the heat to warm up the room. When the room temperature reaches the set point , the receptor receives the information, and the thermostat "tells" the heater to turn off. This also works when it is too hot in the room. The thermostat receives the information and turns on the air conditioner. When the set point temperature is reached, the thermostat turns off the air conditioner. Our bodies control body temperature in a similar way. The brain is the control center, the receptor is our body's temperature sensors, and the effector is our blood vessels and sweat glands in our skin. When we feel heat, the temperature sensors in our skin send the message to our brain. Our brain then sends the message to the sweat glands to increase sweating and increase blood flow to our skin. When we feel cold, the opposite happens. Our brain sends a message to our sweat glands to decrease sweating, decrease blood flow, and begin shivering. This is an ongoing process that continually works to restore and maintain homeostasis. 
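The receptor–control center–effector loop and the thermostat comparison described above can be expressed as a short simulation. The sketch below is only illustrative: the set point, furnace output, and heat-loss values are invented numbers, and the function names (receptor, control_center, effector) simply mirror the three parts named in the text rather than coming from any source.

```python
# Minimal negative-feedback loop modeled on the thermostat metaphor above.
# All numbers (set point, furnace output, heat loss) are illustrative only.

SET_POINT = 70.0  # desired room temperature, degrees F

def receptor(room_temp):
    """The thermometer simply reports the measured value."""
    return room_temp

def control_center(measured):
    """Compare the measurement with the set point and issue a command."""
    return "heat_on" if measured < SET_POINT else "heat_off"

def effector(room_temp, command):
    """The furnace opposes the change: it adds heat only when commanded."""
    heating = 1.5 if command == "heat_on" else 0.0  # degrees gained per step
    heat_loss = 0.8                                 # degrees lost to the cold outside
    return room_temp + heating - heat_loss

temp = 64.0  # start well below the set point
for step in range(15):
    command = control_center(receptor(temp))
    temp = effector(temp, command)
    print(f"step {step:2d}: {temp:5.1f} F ({command})")
```

Because the effector always acts to oppose the measured deviation, the simulated temperature climbs back up and then stays within a narrow band around the set point instead of drifting with the environment, which is the defining behavior of negative feedback.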
Because the internal and external environment of the body are constantly changing and adjustments must be made continuously to stay at or near the set point, homeostasis can be thought of as a dynamic equilibrium.
Positive and Negative Feedback
When a change of variable occurs, there are two main types of feedback to which the system reacts:
• Negative feedback: a reaction in which the system responds in such a way as to reverse the direction of change. Since this tends to keep things constant, it allows the maintenance of homeostasis. For instance, when the concentration of carbon dioxide in the human body increases, the lungs are signaled to increase their activity and expel more carbon dioxide. Thermoregulation is another example of negative feedback. When body temperature rises (or falls), receptors in the skin and the hypothalamus sense a change, triggering a command from the brain. This command, in turn, effects the correct response, in this case a decrease in body temperature.
• Home heating system vs. negative feedback: When you are home, you set your thermostat to a desired temperature. Let's say today you set it at 70 degrees. The thermometer in the thermostat waits to sense a temperature change either too far above or too far below the 70 degree set point. When this change happens, the thermometer will send a message to the "Control Center", or thermostat, which in turn will then send a message to the furnace to either shut off if the temperature is too high or kick back on if the temperature is too low. In the home-heating example the air temperature is the "NEGATIVE FEEDBACK." When the Control Center receives negative feedback it triggers a chain reaction in order to maintain room temperature.
• Positive feedback: a response that amplifies the change in the variable. This has a destabilizing effect, so it does not result in homeostasis. Positive feedback is less common in naturally occurring systems than negative feedback, but it has its applications. For example, in nerves, a threshold electric potential triggers the generation of a much larger action potential. Blood clotting and events in childbirth are other types of positive feedback.
• Harmful positive feedback: Although positive feedback is needed within homeostasis, it can also be harmful at times. When you have a high fever it causes a metabolic change that can push the fever higher and higher. In rare occurrences, when the body temperature reaches 113 degrees, the cellular proteins stop working and the metabolism stops, resulting in death.
Summary: Sustainable systems require combinations of both kinds of feedback. Generally, with the recognition of divergence from the homeostatic condition, positive feedbacks are called into play, whereas once the homeostatic condition is approached, negative feedback is used for "fine tuning" responses. This creates a situation of "metastability," in which homeostatic conditions are maintained within fixed limits, but once these limits are exceeded, the system can shift wildly to a wholly new (and possibly less desirable) situation of homeostasis.
Homeostatic systems have several properties:
• They are ultra-stable, meaning the system is capable of testing which way its variables should be adjusted.
• Their whole organization (internal, structural, and functional) contributes to the maintenance of homeostasis.
Physiology is largely a study of processes related to homeostasis. Some of the functions you will learn about in this book are not specifically about homeostasis (e.g.
how muscles contract), but in order for all bodily processes to function there must be a suitable internal environment. Homeostasis is, therefore, a fitting framework for the introductory study of physiology. Pathways That Alter Homeostasis A variety of homeostatic mechanisms maintain the internal environment within tolerable limits. Either homeostasis is maintained through a series of control mechanisms, or the body suffers various illnesses or disease. When the cells in your body begin to malfunction, the homeostatic balance becomes disrupted. Eventually this leads to disease or cell malfunction. Disease and cellular malfunction can be caused in two basic ways: either, deficiency (cells not getting all they need) or toxicity (cells being poisoned by things they do not need). When homeostasis is interrupted in your cells, there are pathways to correct or worsen the problem. In addition to the internal control mechanisms, there are external influences based primarily on lifestyle choices and environmental exposures that influence our body's ability to maintain cellular health. • Nutrition: If your diet is lacking in a specific vitamin or mineral your cells will function poorly, possibly resulting in a disease condition. For example, a menstruating woman with inadequate dietary intake of iron will become anemic. Lack of hemoglobin, a molecule that requires iron, will result in reduced oxygen-carrying capacity. In mild cases symptoms may be vague (e.g. fatigue), but if the anemia is severe the body will try to compensate by increasing cardiac output, leading to palpitations and sweatiness, and possibly to heart failure. • Toxins: Any substance that interferes with cellular function, causing cellular malfunction. This is done through a variety of ways; chemical, plant, insecticides, and or bites. A commonly seen example of this is drug overdoses. When a person takes too much of a drug their vital signs begin to waver; either increasing or decreasing, these vital signs can cause problems including coma, brain damage and even death. • Psychological: Your physical health and mental health are inseparable. Our thoughts and emotions cause chemical changes to take place either for better as with meditation, or worse as with stress. • Physical: Physical maintenance is essential for our cells and bodies. Adequate rest, sunlight, and exercise are examples of physical mechanisms for influencing homeostasis. Lack of sleep is related to a number of ailments such as irregular cardiac rhythms, fatigue, anxiety and headaches. • Genetic/Reproductive: Inheriting strengths and weaknesses can be part of our genetic makeup. Genes are sometimes turned off or on due to external factors which we can have some control over, but at other times little can be done to correct or improve genetic diseases. Beginning at the cellular level a variety of diseases come from mutated genes. For example, cancer can be genetically inherited or can be caused due to a mutation from an external source such as radiation or genes altered in a fetus when the mother uses drugs. • Medical: Because of genetic differences some bodies need help in gaining or maintaining homeostasis. Through modern medicine our bodies can be given different aids -from anti-bodies to help fight infections or chemotherapy to kill harmful cancer cells. Traditional and alternative medical practices have many benefits, but the potential for harmful effects is also present. 
Whether by nosocomial infections, or wrong dosage of medication, homeostasis can be altered by that which is trying to fix it. Trial and error with medications can cause potential harmful reactions and possibly death if not caught soon enough. The factors listed above all have their effects at the cellular level, whether harmful or beneficial. Inadequate beneficial pathways (deficiency) will almost always result in a harmful waiver in homeostasis. Too much toxicity also causes homeostatic imbalance, resulting in cellular malfunction. By removing negative health influences, and providing adequate positive health influences, your body is better able to self-regulate and self-repair, thus maintaining homeostasis. Control Systems of the Body The human body has thousands of control systems in it. The most intricate of these are the genetic control systems that operate in all cells to help control intracellular function as well as extracellular function. This subject is discussed in Chapter 3. Many other control systems operate within the organs to control functions of the individual parts of the organs; others operate throughout the entire body to control the interrelations between the organs. For instance, the respiratory system, operating in association with the nervous system, regulates the concentration of carbon dioxide in the extracellular fluid. The liver and pancreas regulate the concentration of glucose in the extracellular fluid, and the kidneys regulate concentrations of hydrogen, sodium, potassium, phosphate, and other ions in the extracellular fluid. Examples of Control Mechanisms Regulation of Oxygen and Carbon Dioxide Concentrations in the Extracellular Fluid. Because oxygen is one of the major substances required for chemical reactions in the cells, it is fortunate that the body has a special control mechanism to maintain an almost exact and constant oxygen concentration in the extracellular fluid. This mechanism depends principally on the chemical characteristics of hemoglobin, which is present in all red blood cells. Hemoglobin combines with oxygen as the blood passes through the lungs. Then, as the blood passes through the tissue capillaries, hemoglobin, because of its own strong chemical affinity for oxygen, does not release oxygen into the tissue fluid if too much oxygen is already there. But if the oxygen concentration in the tissue fluid is too low, sufficient oxygen is released to re-establish an adequate concentration. Thus, regulation of oxygen concentration in the tissues is vested principally in the chemical characteristics of hemoglobin itself. This regulation is called the oxygen-buffering function of hemoglobin. Carbon dioxide concentration in the extracellular fluid is regulated in a much different way. Carbon dioxide is a major end product of the oxidative reactions in cells. If all the carbon dioxide formed in the cells continued to accumulate in the tissue fluids, the mass action of the carbon dioxide itself would soon halt all energy-giving reactions of the cells. Fortunately, a higher than normal carbon dioxide concentration in the blood excites the respiratory center, causing a person to breathe rapidly and deeply. This increases expiration of carbon dioxide and, therefore, removes excess carbon dioxide from the blood and tissue fluids. This process continues until the concentration returns to normal. Regulation of Arterial Blood Pressure. Several systems contribute to the regulation of arterial blood pressure. 
One of these, the baroreceptor system, is a simple and excellent example of a rapidly acting control mechanism. In the walls of the bifurcation region of the carotid arteries in the neck, and also in the arch of the aorta in the thorax, are many nerve receptors called baroreceptors, which are stimulated by stretch of the arterial wall.When the arterial pressure rises too high, the baroreceptors send barrages of nerve impulses to the medulla of the brain. Here these impulses inhibit the vasomotor center, which in turn decreases the number of impulses transmitted from the vasomotor center through the sympathetic nervous system to the heart and blood vessels. Lack of these impulses causes diminished pumping activity by the heart and also dilation of the peripheral blood vessels, allowing increased blood flow through the vessels. Both of these effects decrease the arterial pressure back toward normal. Conversely, a decrease in arterial pressure below normal relaxes the stretch receptors, allowing the vasomotor center to become more active than usual, thereby causing vasoconstriction and increased heart pumping, and raising arterial pressure back toward normal. Normal Ranges and Physical Characteristics of Important Extracellular Fluid Constituents Table 1–1 lists the more important constituents and physical characteristics of extracellular fluid, along with their normal values, normal ranges, and maximum limits without causing death. Note the narrowness of the normal range for each one. Values outside these ranges are usually caused by illness. Most important are the limits beyond which abnormalities can cause death. For example, an increase in the body temperature of only 11°F (7°C) above normal can lead to a vicious cycle of increasing cellular metabolism that destroys the cells. Note also the narrow range for acid-base balance in the body, with a normal pH value of 7.4 and lethal values only about 0.5 on either side of normal. Another important factor is the potassium ion concentration, because whenever it decreases to less than one third normal, a person is likely to be paralyzed as a result of the nerves’ inability to carry signals. Alternatively, if the potassium ion concentration increases to two or more times normal, the heart muscle is likely to be severely depressed. Also, when the calcium ion concentration falls below about one half of normal, a person is likely to experience tetanic contraction of muscles throughout the body because of the spontaneous generation of excess nerve impulses in the peripheral nerves. When the glucose concentration falls below one half of normal, a person frequently develops extreme mental irritability and sometimes even convulsions. These examples should give one an appreciation for the extreme value and even the necessity of the vast numbers of control systems that keep the body operating in health; in the absence of any one of these controls, serious body malfunction or death can result. Characteristics of Control Systems The aforementioned examples of homeostatic control mechanisms are only a few of the many thousands in the body, all of which have certain characteristics in common. These characteristics are explained in this section. Negative Feedback Nature of Most Control Systems Most control systems of the body act by negative feedback, which can best be explained by reviewing some of the homeostatic control systems mentioned previously. 
In the regulation of carbon dioxide concentration, a high concentration of carbon dioxide in the extracellular fluid increases pulmonary ventilation. This, in turn, decreases the extracellular fluid carbon dioxide concentration because the lungs expire greater amounts of carbon dioxide from the body. In other words, the high concentration of carbon dioxide initiates events that decrease the concentration toward normal, which is negative to the initiating stimulus. Conversely, if the carbon dioxide concentration falls too low, this causes feedback to increase the concentration. This response also is negative to the initiating stimulus. In the arterial pressure–regulating mechanisms, a high pressure causes a series of reactions that promote a lowered pressure, or a low pressure causes a series of reactions that promote an elevated pressure. In both instances, these effects are negative with respect to the initiating stimulus. Therefore, in general, if some factor becomes excessive or deficient, a control system initiates negative feedback, which consists of a series of changes that return the factor toward a certain mean value, thus maintaining homeostasis.
“Gain” of a Control System. The degree of effectiveness with which a control system maintains constant conditions is determined by the gain of the negative feedback. For instance, let us assume that a large volume of blood is transfused into a person whose baroreceptor pressure control system is not functioning, and the arterial pressure rises from the normal level of 100 mm Hg up to 175 mm Hg. Then, let us assume that the same volume of blood is injected into the same person when the baroreceptor system is functioning, and this time the pressure increases only 25 mm Hg. Thus, the feedback control system has caused a “correction” of –50 mm Hg—that is, from 175 mm Hg to 125 mm Hg. There remains an increase in pressure of +25 mm Hg, called the “error,” which means that the control system is not 100 per cent effective in preventing change. The gain of the system is then calculated by the following formula:
Gain = Correction / Error
Thus, in the baroreceptor system example, the correction is –50 mm Hg and the error persisting is +25 mm Hg. Therefore, the gain of the person’s baroreceptor system for control of arterial pressure is –50 divided by +25, or –2. That is, a disturbance that increases or decreases the arterial pressure does so only one third as much as would occur if this control system were not present. The gains of some other physiologic control systems are much greater than that of the baroreceptor system. For instance, the gain of the system controlling internal body temperature when a person is exposed to moderately cold weather is about –33. Therefore, one can see that the temperature control system is much more effective than the baroreceptor pressure control system.
Positive Feedback Can Sometimes Cause Vicious Cycles and Death. One might ask the question, Why do essentially all control systems of the body operate by negative feedback rather than positive feedback? If one considers the nature of positive feedback, one immediately sees that positive feedback does not lead to stability but to instability and often death. Figure 1–3 shows an example in which death can ensue from positive feedback. This figure depicts the pumping effectiveness of the heart, showing that the heart of a healthy human being pumps about 5 liters of blood per minute. If the person is suddenly bled 2 liters, the amount of blood in the body is decreased to such a low level that not enough blood is available for the heart to pump effectively. As a result, the arterial pressure falls, and the flow of blood to the heart muscle through the coronary vessels diminishes. This results in weakening of the heart, further diminished pumping, a further decrease in coronary blood flow, and still more weakness of the heart; the cycle repeats itself again and again until death occurs. Note that each cycle in the feedback results in further weakening of the heart. In other words, the initiating stimulus causes more of the same, which is positive feedback.
Positive feedback is better known as a “vicious cycle,” but a mild degree of positive feedback can be overcome by the negative feedback control mechanisms of the body, and the vicious cycle fails to develop. For instance, if the person in the aforementioned example were bled only 1 liter instead of 2 liters, the normal negative feedback mechanisms for controlling cardiac output and arterial pressure would overbalance the positive feedback and the person would recover, as shown by the dashed curve of Figure 1–3.
Positive Feedback Can Sometimes Be Useful. In some instances, the body uses positive feedback to its advantage. Blood clotting is an example of a valuable use of positive feedback. When a blood vessel is ruptured and a clot begins to form, multiple enzymes called clotting factors are activated within the clot itself. Some of these enzymes act on other unactivated enzymes of the immediately adjacent blood, thus causing more blood clotting. This process continues until the hole in the vessel is plugged and bleeding no longer occurs. On occasion, this mechanism can get out of hand and cause the formation of unwanted clots. In fact, this is what initiates most acute heart attacks, which are caused by a clot beginning on the inside surface of an atherosclerotic plaque in a coronary artery and then growing until the artery is blocked. Childbirth is another instance in which positive feedback plays a valuable role. When uterine contractions become strong enough for the baby’s head to begin pushing through the cervix, stretch of the cervix sends signals through the uterine muscle back to the body of the uterus, causing even more powerful contractions. Thus, the uterine contractions stretch the cervix, and the cervical stretch causes stronger contractions. When this process becomes powerful enough, the baby is born. If it is not powerful enough, the contractions usually die out, and a few days pass before they begin again. Another important use of positive feedback is for the generation of nerve signals. That is, when the membrane of a nerve fiber is stimulated, this causes slight leakage of sodium ions through sodium channels in the nerve membrane to the fiber’s interior. The sodium ions entering the fiber then change the membrane potential, which in turn causes more opening of channels, more change of potential, still more opening of channels, and so forth. Thus, a slight leak becomes an explosion of sodium entering the interior of the nerve fiber, which creates the nerve action potential. This action potential in turn causes electrical current to flow along both the outside and the inside of the fiber and initiates additional action potentials. This process continues again and again until the nerve signal goes all the way to the end of the fiber.
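The gain relationship quoted earlier in this section (Gain = Correction / Error) can be checked directly against the numbers in the baroreceptor example. The snippet below is only that arithmetic wrapped in a hypothetical helper function; nothing in it comes from the textbook beyond the figures of –50 mm Hg correction and +25 mm Hg remaining error.

```python
def feedback_gain(correction, error):
    """Gain of a feedback control system, as defined in the text: Correction / Error."""
    return correction / error

# Baroreceptor example: the reflex corrects the pressure by -50 mm Hg,
# leaving a residual error of +25 mm Hg after the transfusion.
baroreceptor_gain = feedback_gain(-50, 25)
print(baroreceptor_gain)  # -2.0

# With a gain of -2, only error / (error + |correction|) = 25 / 75 = 1/3 of the
# original disturbance remains, matching the "one third as much" statement above.
remaining_fraction = 25 / (25 + 50)
print(remaining_fraction)  # 0.333...
```

Applying the same reasoning to the temperature-control gain of about –33 cited in the text leaves only roughly 1/34 of a disturbance uncorrected, which is why that system is described as far more effective than the baroreceptor system.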
In each case in which positive feedback is useful, the positive feedback itself is part of an overall negative feedback process. For example, in the case of blood clotting, the positive feedback clotting process is a negative feedback process for maintenance of normal blood volume. Also, the positive feedback that causes nerve signals allows the nerves to participate in thousands of negative feedback nervous control systems. More Complex Types of Control Systems— Adaptive Control Later in this text, when we study the nervous system, we shall see that this system contains great numbers of interconnected control mechanisms. Some are simple feedback systems similar to those already discussed. Many are not. For instance, some movements of the body occur so rapidly that there is not enough time for nerve signals to travel from the peripheral parts of the body all the way to the brain and then back to the periphery again to control the movement. Therefore, the brain uses a principle called feed-forward control to cause required muscle contractions. That is, sensory nerve signals from the moving parts apprise the brain whether the movement is performed correctly. If not, the brain corrects the feed-forward signals that it sends to the muscles the next time the movement is required. Then, if still further correction is needed, this will be done again for subsequent movements. This is called adaptive control. Adaptive control, in a sense, is delayed negative feedback. Thus, one can see how complex the feedback control systems of the body can be. A person’s life depends on all of them. Therefore, a major share of this text is devoted to discussing these life-giving mechanisms. Summary—Automaticity of the Body The purpose of this chapter has been to point out, first, the overall organization of the body and, second, the means by which the different parts of the body operate in harmony.To summarize, the body is actually a social order of about 100 trillion cells organized into different functional structures, some of which are called organs. Each functional structure contributes its share to the maintenance of homeostatic conditions in the extracellular fluid, which is called the internal environment. As long as normal conditions are maintained in this internal environment, the cells of the body continue to live and function properly. Each cell benefits from homeostasis, and in turn, each cell contributes its share toward the maintenance of homeostasis. This reciprocal interplay provides continuous automaticity of the body until one or more functional systems lose their ability to contribute their share of function.When this happens, all the cells of the body suffer. Extreme dysfunction leads to death; moderate dysfunction leads to sickness.
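The qualitative difference between a self-limiting negative-feedback loop and a runaway positive-feedback "vicious cycle" can also be sketched numerically. The toy model below is not the textbook's Figure 1–3; the step count, disturbance size, and loop strength are arbitrary values chosen only to show how the sign of the loop determines whether a deviation decays back toward normal or grows without bound.

```python
# Toy illustration of the two feedback regimes discussed above.
# Coefficients are arbitrary; only the sign of the loop matters.

def run(loop_sign, steps=8, disturbance=2.0, strength=0.6):
    """Track the deviation of a regulated variable from its normal value.

    loop_sign = -1: each step opposes part of the deviation (negative feedback).
    loop_sign = +1: each step adds to the deviation (positive feedback).
    """
    deviation = disturbance
    history = [round(deviation, 2)]
    for _ in range(steps):
        deviation = deviation + loop_sign * strength * deviation
        history.append(round(deviation, 2))
    return history

print("negative feedback:", run(-1))  # deviation shrinks back toward zero
print("positive feedback:", run(+1))  # deviation grows without bound
```

In the hemorrhage example above, the outcome depends on which of these two curves wins: losing one liter leaves the body's negative-feedback mechanisms strong enough to overbalance the positive loop, while losing two liters tips the system onto the diverging curve.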
Figure 49-3 Sympathetic and parasympathetic divisions of the autonomic nervous system. Sympathetic preganglionic neurons are clustered in ganglia in the sympathetic chain alongside the spinal cord extending from the first thoracic spinal segment to upper lumbar segments. Parasympathetic preganglionic neurons are located within the brain stem and in segments S2-S4 of the spinal cord. The major targets of autonomic control are shown here. The Autonomic Nervous System and the Hypothalamus Susan Iversen Leslie Iversen Clifford B. Saper WHEN WE ARE FRIGHTENED our heart races, our breathing becomes rapid and shallow, our mouth becomes dry, our muscles tense, our palms become sweaty, and we may want to run. These bodily changes are mediated by the autonomic nervous system , which controls heart muscle, smooth muscle, and exocrine glands. The autonomic nervous system is distinct from the somatic nervous system , which controls skeletal muscle. As we shall learn in the next chapter, even though the neural control of emotion involves several regions, including the amygdala and the limbic association areas of the cerebral cortex, they all work through the hypothalamus to control the autonomic nervous system. The hypothalamus coordinates behavioral response to insure bodily homeostasis , the constancy of the internal environment. The hypothalamus, in turn, acts on three major systems: the autonomic nervous system, the endocrine system, and an ill-defined neural system concerned with motivation. In this chapter we shall first examine the autonomic nervous system and then go on to consider the hypothalamus. In the next two chapters, we shall examine emotion and motivation, behavioral states that depend greatly on autonomic and hypothalamic mechanisms. The Autonomic Nervous System Is a Visceral and Largely Involuntary Sensory and Motor System In contrast to the somatic sensory and motor systems, which we considered in Parts IV and V of this book, the autonomic nervous system is a visceral sensory and motor system. Virtually all visceral reflexes are mediated by local circuits in the brain stem or spinal cord. Although these reflexes are regulated by a network of central autonomic control nuclei in the brain stem, hypothalamus, and forebrain, these visceral reflexes are not under voluntary control, nor do they impinge on consciousness, with few exceptions. The autonomic nervous system is thus also referred to as the involuntary motor system, in contrast to the voluntary (somatic) motor system. The autonomic nervous system has three major divisions: sympathetic, parasympathetic, and enteric. The sympathetic and parasympathetic divisions innervate cardiac muscle, smooth muscle, and glandular tissues and mediate a variety of visceral reflexes. These two divisions include the sensory neurons associated with spinal and cranial nerves, the preganglionic and postganglionic motor neurons, and the central nervous system circuitry that connects with and modulates the sensory and motor neurons. The enteric division has greater autonomy than the other two divisions and comprises a largely self-contained system, with only minimal connections to the rest of the nervous system. It consists of sensory and motor neurons in the gastrointestinal tract that mediate digestive reflexes. The American physiologist Walter B. Cannon first proposed that the sympathetic and parasympathetic divisions have distinctly different functions. 
He argued that the parasympathetic nervous system is responsible for rest and digest , maintaining basal heart rate, respiration, and metabolism under normal conditions. The sympathetic nervous system, on the other hand, governs the emergency reaction, or fight-or-flight reaction. In an emergency the body needs to respond to sudden changes in the external or internal environment, be it emotional stress, combat, athletic competition, severe change in temperature, or blood loss. For a person to respond effectively, the sympathetic nervous system increases output to the heart and other viscera, the peripheral vasculature and sweat glands, and the piloerector and certain ocular muscles. An animal whose sympathetic nervous system has been experimentally eliminated can only survive if sheltered, kept warm, and not exposed to stress or emotional stimuli. Such an animal cannot, however, carry out strenuous work or fend for itself; it cannot mobilize blood sugar from the liver quickly and does not react to cold with normal vasoconstriction or elevation of body heat. The relationship between the sympathetic and parasympathetic pathways is not as simple and as independent as suggested by Cannon, however. Both divisions are tonically active and operate in conjunction with each other and with the somatic motor system to regulate most behavior, be it normal or emergency. Although several visceral functions are controlled predominantly by one or the other division, and although both the sympathetic and parasympathetic divisions often exert opposing effects on innervated target tissues, it is the balance of activity between the two that helps maintain an internal stable environment in the face of changing external conditions. The idea of a stable internal environment in the face of changing external conditions was first proposed in the nineteenth century by the French physiologist Claude Bernard. This idea was developed further by Cannon, who put forward the concept of homeostasis as the complex physiological mechanisms that maintain the internal milieu. In his classic book The Wisdom of the Body published in 1932, Cannon introduced the concept of negative feedback regulation as a key homeostatic mechanism and outlined much of our current understanding of the functions of the autonomic nervous system. If a state remains steady, it does so because any change is automatically met by increased effectiveness of the factor or factors that resist the change. Consider, for example, thirst when the body lacks water; the discharge of adrenaline, which liberates sugar from the liver when the concentration of sugar in the blood falls below a critical point; and increased breathing, which reduces carbonic acid when the blood tends to shift toward acidity. Cannon further proposed that the autonomic nervous system, under the control of the hypothalamus, is an important part of this feedback regulation. The hypothalamus regulates many of the neural circuits that mediate the peripheral components of emotional states: changes in heart rate, blood pressure, temperature, and water and food intake. It also controls the pituitary gland and thereby regulates the endocrine system. The Visceral Motor System Overview The visceral (or autonomic) motor system controls involuntary functions mediated by the activity of smooth muscle fibers, cardiac muscle fibers, and glands. 
The system comprises two major divisions, the sympathetic and parasympathetic subsystems (the specialized innervation of the gut provides a further semi-independent component and is usually referred to as the enteric nervous system). Although these divisions are always active at some level, the sympathetic system mobilizes the body's resources for dealing with challenges of one sort or another. Conversely, parasympathetic system activity predominates during states of relative quiescence, so that energy sources previously expended can be restored. This continuous neural regulation of the expenditure and replenishment of the body's resources contributes importantly to the overall physiological balance of bodily functions called homeostasis. Whereas the major controlling centers for somatic motor activity are the primary and secondary motor cortices in the frontal lobes and a variety of related brainstem nuclei, the major locus of central control in the visceral motor system is the hypothalamus and the complex (and ill-defined) circuitry that it controls in the brainstem tegmentum and spinal cord. The status of both divisions of the visceral motor system is modulated by descending pathways from these centers to preganglionic neurons in the brainstem and spinal cord, which in turn determine the activity of the primary visceral motor neurons in autonomic ganglia. The autonomic regulation of several organ systems of particular importance in clinical practice (including cardiovascular function, control of the bladder, and the governance of the reproductive organs) is considered in more detail as specific examples of visceral motor control. Early Studies of the Visceral Motor System Although humans must always have been aware of involuntary motor reactions to stimuli in the environment (e.g., narrowing of the pupil in response to bright light, constriction of superficial blood vessels in response to cold or fear, increased heart rate in response to exertion), it was not until the late nineteenth century that the neural control of these and other visceral functions came to be understood in modern terms. The researchers who first rationalized the workings of the visceral motor system were Walter Gaskell and John Langley, two British physiologists at Cambridge University. Gaskell, whose work preceded that of Langley, established the overall anatomy of the system and carried out early physiological experiments that demonstrated some of its salient functional characteristics (e.g., that the heartbeat of an experimental animal is accelerated by stimulating the outflow of the upper thoracic spinal cord segments). Based on these and other observations, Gaskell concluded in 1886 that "every tissue is innervated by two sets of nerve fibers of opposite characters," and he further surmised that these actions showed "the characteristic signs of opposite chemical processes." Langley went on to establish the function of autonomic ganglia (which harbor the primary visceral motor neurons), defined the terms "preganglionic" and "postganglionic" (see next section), and coined the phrase autonomic nervous system (which is basically a synonym for "visceral motor system"; the terms are used interchangeably). Langley's work on the pharmacology of the autonomic system initiated the classical studies indicating the roles of acetylcholine and the catecholamines in autonomic function, and in neurotransmitter function more generally (see Chapter 6). 
In short, Langley's ingenious physiological and anatomical experiments established in detail the general proposition put forward by Gaskell on circumstantial grounds. The third major figure in the pioneering studies of the visceral motor system was Walter Cannon at Harvard Medical School, who during the early to mid-1900s devoted his career to understanding autonomic functions in relation to homeostatic mechanisms generally, and to the emotions and higher brain functions in particular (see Chapter 29). He also established the effects of denervation in the visceral motor system, laying some of the basis for much further work on what is now referred to as "neuronal plasticity" (see Chapter 25). Summary Sympathetic and parasympathetic ganglia, which contain the primary visceral motor neurons that innervate smooth muscles, cardiac muscle, and glands, are controlled by preganglionic neurons in the spinal cord and brainstem. The preganglionic neurons that govern ganglion cells in the sympathetic division of the visceral motor system arise from neurons in the thoracic and upper lumbar segments of the spinal cord; parasympathetic preganglionic neurons, in contrast, are located in the brainstem and sacral spinal cord. Sympathetic ganglion cells are distributed in the sympathetic chain (paravertebral) and prevertebral ganglia, whereas the parasympathetic motor neurons are more widely distributed in ganglia that lie within or near the organs they control. Most autonomic targets receive inputs from both the sympathetic and parasympathetic systems, which act in a generally antagonistic fashion. The diversity of autonomic functions is achieved primarily by different types of receptors for the two primary classes of postganglionic autonomic neurotransmitters, norepinephrine in the case of the sympathetic division and acetylcholine in the parasympathetic division. The visceral motor system is regulated by sensory feedback provided by dorsal root and cranial nerve sensory ganglion cells that make local reflex connections in the spinal cord or brainstem and project to the nucleus of the solitary tract in the brainstem, and by descending pathways from the hypothalamus and brainstem tegmentum, the major controlling centers of the visceral motor system (and of homeostasis more generally). The importance of the visceral motor control of organs such as the heart, bladder, and reproductive organs—and the many pharmacological means of modulating autonomic function—have made visceral motor control a central theme in clinical medicine.
Figure 49-1 Anatomical organization of the somatic and autonomic motor pathways. A. In the somatic motor system, effector motor neurons in the central nervous system project directly to skeletal muscles. B. In the autonomic motor system, the effector motor neurons are located in ganglia outside the central nervous system and are controlled by preganglionic central neurons. The Motor Neurons of the Autonomic Nervous System Lie Outside the Central Nervous System In the somatic motor system the motor neurons are part of the central nervous system: They are located in the spinal cord and brain stem and project directly to skeletal muscle. In contrast, the motor neurons of the sympathetic and parasympathetic motor systems are located outside the spinal cord in the autonomic ganglia. The autonomic motor neurons (also known as postganglionic neurons ) are activated by the axons of central neurons (the preganglionic neurons ) whose cell bodies are located in the spinal cord or brain stem, much as are the somatic motor neurons. Thus, in the visceral motor system a synapse (in the autonomic ganglion) is interposed between the efferent neuron in the central nervous system and the peripheral target (Figure 49-1). The sympathetic and parasympathetic nervous systems have clearly defined sensory components that provide input to the central nervous system and play an important role in autonomic reflexes. In addition, some sensory fibers that project to the spinal cord also send a branch to autonomic ganglia, thus forming reflex circuits that control some visceral autonomic functions. The innervation of target tissues by autonomic nerves also differs markedly from that of skeletal muscle by somatic motor nerves. Unlike skeletal muscle, which has specialized postsynaptic regions (the end-plates; see Chapter 14), target cells of the autonomic nerve fibers have no specialized postsynaptic sites. Nor do the postganglionic nerve endings have presynaptic specializations such as the active zones of somatic motor neurons. Instead, the nerve endings have several swellings ( varicosities ) where vesicles containing transmitter substances accumulate (see Chapter 15). Synaptic transmission therefore occurs at multiple sites along the highly branched axon terminals of autonomic nerves. The neurotransmitter may diffuse for distances as great as several hundred nanometers to reach its targets. In contrast to the point-to-point contacts made in the somatic motor system, neurons in the autonomic motor system exert a more diffuse control over target tissues, so that a relatively small number of highly branched motor fibers can regulate the function of large masses of smooth muscle or glandular tissue.
Course of preganglionic and postganglionic sympathetic fibers innervating different organs. (A) Organs in the head. (B) Organs in the chest. (C) Organs in the abdomen. (D) Adrenal gland. Also note that, at each level, the axons of the postganglionic neurons in the paravertebral ganglia re-enter the corresponding spinal nerves through gray rami, travel within or along the spinal nerve, and innervate the blood vessels, sweat glands, and erectile muscle of hair follicles. Figure 21.2. Organization of the preganglionic spinal outflow to sympathetic ganglia. (A) General organization of the sympathetic division of the visceral motor system in the spinal cord and the preganglionic outflow to the sympathetic ganglia that contain the primary visceral motor neurons. (B) Cross section of thoracic spinal cord at the level indicated, showing location of the sympathetic preganglionic neurons in the intermediolateral cell column of the lateral horn. Sympathetic Pathways Convey Thoracolumbar Outputs to Ganglia Alongside the Spinal Cord Preganglionic sympathetic neurons form a column in the intermediolateral horn of the spinal cord extending from the first thoracic spinal segment to rostral lumbar segments. The axons of these neurons leave the spinal cord in the ventral root and initially run together in the spinal nerve. They then separate from the somatic motor axons and project (in small bundles called white myelinated rami) to the ganglia of the sympathetic chains, which lie along each side of the spinal cord (Figure 49-2). Axons of preganglionic neurons exit the spinal cord at the level at which their cell bodies are located, but they may innervate sympathetic ganglia situated either more rostrally or more caudally by traveling in the sympathetic nerve trunk that connects the ganglia (Figure 49-2). Most of the preganglionic axons are relatively slow-conducting, small-diameter myelinated fibers. Each preganglionic fiber forms synapses with many postganglionic neurons in different ganglia. Overall, the ratio of preganglionic fibers to postganglionic fibers in the sympathetic nervous system is about 1:10. This divergence permits coordinated activity in sympathetic neurons at several different spinal levels. The axons of postganglionic neurons are largely unmyelinated and exit the ganglia in the gray unmyelinated rami. The postganglionic cells that innervate structures in the head are located in the superior cervical ganglion, which is a rostral extension of the sympathetic chain. The axons of these cells travel along branches of the carotid arteries to their targets in the head. The postganglionic fibers innervating the rest of the body travel in spinal nerves to their targets; in an average spinal nerve about 8% of the fibers are sympathetic postganglionic axons. Some neurons of the cervical and upper thoracic ganglia innervate cranial blood vessels, sweat glands, and hair follicles; others innervate the glands and visceral organs of the head and chest, including the lacrimal and salivary glands, heart, lungs, and blood vessels. Neurons in the lower thoracic and lumbar paravertebral ganglia innervate peripheral blood vessels, sweat glands, and pilomotor smooth muscle (Figure 49-3). Some preganglionic fibers pass through the sympathetic ganglia and branches of the splanchnic nerves to synapse on neurons of the prevertebral ganglia, which include the coeliac ganglion and the superior and inferior mesenteric ganglia (Figure 49-3). 
Neurons in these ganglia innervate the gastrointestinal system and the accessory gastrointestinal organs, including the pancreas and liver, and also provide sympathetic innervation of the kidneys, bladder, and genitalia. Another group of preganglionic axons runs in the thoracic splanchnic nerve into the abdomen and innervates the adrenal medulla, which is an endocrine gland, secreting both epinephrine and norepinephrine into circulation. The cells of the adrenal medulla are developmentally and functionally related to postganglionic sympathetic neurons.
Figure 21.3. Organization of the preganglionic outflow to parasympathetic ganglia. (A) Dorsal view of brainstem showing the location of the nuclei of the cranial part of the parasympathetic division of the visceral motor system. (B) Cross section of the brainstem at the relevant levels [indicated by blue lines in (A)] showing location of these parasympathetic nuclei. (C) Main features of the parasympathetic preganglionics in the sacral segments of the spinal cord. (D) Cross section of the sacral spinal cord showing location of sacral preganglionic neurons. Parasympathetic Pathways Convey Outputs From the Brain Stem Nuclei and Sacral Spinal Cord to Widely Dispersed Ganglia The central, preganglionic cells of the parasympathetic nervous system are located in several brain stem nuclei and in segments S2-S4 of the sacral spinal cord (Figure 49-3). The axons of these cells are quite long because parasympathetic ganglia lie close to or are actually embedded in visceral target organs. In contrast, sympathetic ganglia are located at some distance from their targets. The preganglionic parasympathetic nuclei in the brain stem include the Edinger-Westphal nucleus (associated with cranial nerve III), the superior and inferior salivary nuclei (associated with cranial nerves VII and IX, respectively), and the dorsal vagal nucleus and the nucleus ambiguus (both associated with cranial nerve X). Preganglionic axons exit the brain stem through cranial nerves III, VII, and IX and project to postganglionic neurons in the ciliary, pterygopalatine, submandibular, and otic ganglia (Figure 49-3). Parasympathetic preganglionic fibers from the dorsal vagal nucleus project via nerve X to postganglionic neurons embedded in thoracic and abdominal targets—the stomach, liver, gall bladder, pancreas, and upper intestinal tract (Figure 49-3). Neurons of the ventrolateral nucleus ambiguus provide the principal parasympathetic innervation of the cardiac ganglia, which innervate the heart, esophagus, and respiratory airways. In the sacral spinal cord the parasympathetic preganglionic neurons occupy the intermediolateral column. Axons of spinal parasympathetic neurons leave the spinal cord through the ventral roots and project in the pelvic nerve to the pelvic ganglion plexus. Pelvic ganglion neurons innervate the descending colon, bladder, and external genitalia (Figure 49-3). The sympathetic nervous system innervates tissues throughout the body, but the parasympathetic distribution is more restricted. There is also less divergence, with an average ratio of preganglionic to postganglionic fibers of about 1:3; in some tissues the numbers may be nearly equal.
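The divergence figures quoted above (roughly 1:10 on average for sympathetic pathways and 1:3 for parasympathetic pathways) lend themselves to a crude back-of-the-envelope comparison. The sketch below is illustrative only: the size of the preganglionic pool is an invented placeholder, the ratios are simply the averages cited in the text, and overlap between the targets of different preganglionic axons is ignored.

```python
# Toy illustration of pre- to postganglionic divergence in the two divisions.
# The preganglionic pool size is an arbitrary placeholder; only the approximate
# average ratios (about 1:10 sympathetic, 1:3 parasympathetic) come from the text.

def postganglionic_reach(n_preganglionic: int, divergence: float) -> int:
    """Estimate how many postganglionic neurons a preganglionic pool contacts,
    assuming each preganglionic axon diverges onto `divergence` ganglion cells
    and ignoring overlap between axons (an oversimplification)."""
    return round(n_preganglionic * divergence)

if __name__ == "__main__":
    hypothetical_pool = 10_000  # placeholder number of preganglionic neurons
    print("sympathetic    :", postganglionic_reach(hypothetical_pool, 10))
    print("parasympathetic:", postganglionic_reach(hypothetical_pool, 3))
```

The point of the comparison is simply that a preganglionic pool of a given size can coordinate a much larger postganglionic population in the sympathetic division, consistent with that division's more widespread, body-wide actions.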
Figure 21.5. Organization of sensory input to the visceral motor system. (A) Afferent input from the cranial nerves relevant to visceral sensation (as well as afferent input ascending from the spinal cord not shown here) converges on the nucleus of the solitary tract. (B) Cross section of the brainstem showing the location of the nucleus of the solitary tract, which is so-named because of its association with the tract of the myelinated axons that supply it. Sensory Components of the Visceral Motor System The visceral motor system clearly requires sensory feedback to control and modulate its many functions. As in the case of somatic sensory modalities (see Chapters 9 and 10), the cell bodies of the visceral afferent fibers lie in the dorsal root ganglia or the sensory ganglia associated with cranial nerves (in this case, the vagus, glossopharyngeal, and facial nerves) (Figure 21.5A). The neurons in the dorsal root ganglia send an axon peripherally to end in sensory receptor specializations, and an axon centrally to terminate in a part of the dorsal horn of the spinal cord near the lateral horn, where the preganglionic neurons of both sympathetic and parasympathetic divisions are located. In addition to making local reflex connections, branches of these visceral sensory neurons also travel rostrally to innervate nerve cells in the brainstem; in this case, however, the target is the nucleus of the solitary tract in the upper medulla (Figure 21.5B). The afferents from viscera in the head and neck that enter the brainstem via the cranial nerves also terminate in the nucleus of the solitary tract (see Figure 21.5B). This nucleus, as described in the next section, integrates a wide range of visceral sensory information and transmits it to the hypothalamus and to the relevant motor nuclei in the brainstem tegmentum. Sensory fibers related to the viscera convey only limited information to consciousness—primarily pain. Nonetheless, the visceral afferent information of which we are not aware is essential for the functioning of autonomic reflexes. Specific examples described in more detail later in the chapter include afferent information relevant to cardiovascular control, to the control of the bladder, and to the governance of sexual functions (although sexual reflexes are, exceptionally, not mediated by the nucleus of the solitary tract). Sensory Inputs Produce a Wide Range of Visceral Reflexes To maintain homeostasis the autonomic nervous system responds to many different types of sensory inputs. Some of these are somatosensory. For example, a noxious stimulus activates sympathetic neurons that regulate local vasoconstriction (necessary to reduce bleeding when the skin is broken). At the same time, the stimulus activates nociceptive afferents in the spinothalamic tract with axon collaterals to an area in the rostral ventrolateral medulla that coordinates reflexes. These inputs cause widespread sympathetic activation that increases blood pressure and heart rate to protect arterial perfusion pressure and prepares the individual for vigorous defense. Homeostasis also requires important information about the internal state of the body. Much of this information from the thoracic and abdominal cavities reaches the brain via the vagus nerve. The glossopharyngeal nerve also conveys visceral sensory information from the head and neck. Both of these nerves and the facial nerve relay special visceral sensory information about taste (a visceral chemosensory function) from the oral cavity. 
All of these visceral sensory afferents synapse in a topographic fashion in the nucleus of the solitary tract. Taste information is represented most anteriorly; gastrointestinal information, in an intermediate position; cardiovascular inputs, caudomedially; and respiratory inputs, in the caudolateral part of the nucleus.
Neurotransmission in the Visceral Motor System The neurotransmitter functions of the visceral motor system are of enormous importance in clinical practice, and drugs that act on the autonomic system are among the most important in the clinical armamentarium. Moreover, autonomic transmitters have played a major role in the history of efforts to understand synaptic function. Consequently, neurotransmission in the visceral motor system deserves special comment (see also Chapter 6). Acetylcholine is the primary neurotransmitter of both sympathetic and parasympathetic preganglionic neurons. Nicotinic receptors on autonomic ganglion cells are ligand-gated ion channels that mediate a so-called fast EPSP (much like nicotinic receptors at the neuromuscular junction). In contrast, muscarinic acetylcholine receptors on ganglion cells are members of the 7-transmembrane G protein-linked receptor family, and they mediate slower synaptic responses (see Chapters 7 and 8). The primary action of muscarinic receptors in autonomic ganglion cells is to close K+ channels, making the neurons more excitable and generating a prolonged EPSP. As a result of these two acetylcholine receptor types, ganglionic synapses mediate both rapid excitation and a slower modulation of autonomic ganglion cell activity. The postganglionic effects of autonomic ganglion cells on their smooth muscle, cardiac muscle, or glandular targets are mediated by two primary neurotransmitters: norepinephrine (NE) and acetylcholine (ACh). For the most part, sympathetic ganglion cells release norepinephrine onto their targets (a notable exception is the cholinergic sympathetic innervation of sweat glands), whereas parasympathetic ganglion cells typically release acetylcholine. As expected from the foregoing account, these two neurotransmitters usually have opposing effects on their target tissue—contraction versus relaxation of smooth muscle, for example. As described in Chapters 6 to 8, the specific effects of either ACh or NE are determined by the type of receptor expressed in the target tissue, and the downstream signaling pathways to which these receptors are linked. Peripheral sympathetic targets generally have two subclasses of noradrenergic receptors in their cell membranes, referred to as α and β receptors. Like muscarinic ACh receptors, both α and β receptors and their subtypes belong to the 7-transmembrane G-protein-coupled class of cell surface receptors. The different distribution of these receptors in sympathetic targets allows for a variety of postsynaptic effects mediated by norepinephrine released from postganglionic sympathetic nerve endings (Table 21.2). The effects of acetylcholine released by parasympathetic ganglion cells onto smooth muscles, cardiac muscle, and glandular cells also vary according to the subtypes of muscarinic cholinergic receptors found in the peripheral target (Table 21.3). The two major subtypes are known as M1 and M2 receptors, M1 receptors being found primarily in the gut and M2 receptors in the cardiovascular system (another subclass of muscarinic receptors, M3, occurs in both smooth muscle and glandular tissues). Muscarinic receptors are coupled to a variety of intracellular signal transduction mechanisms that modify K+ and Ca2+ channel conductances. They can also activate nitric oxide synthase, which promotes the local release of NO in some parasympathetic target tissues (see, for example, the section on autonomic control of sexual function). 
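Because the sign of an autonomic effect is set by the receptor expressed in the target rather than by the transmitter itself, it can help to see a few pairings laid out explicitly. The sketch below is a minimal, illustrative subset rather than a reproduction of Tables 21.2 and 21.3; the receptor assignments listed are standard textbook pharmacology examples, and the lookup-table structure is simply a convenient way to organize them.

```python
# A few textbook-typical transmitter/receptor pairings at autonomic targets.
# Illustrative subset only; consult a pharmacology reference for a full table.
AUTONOMIC_EFFECTS = {
    ("sympathetic", "heart (SA node)"): ("norepinephrine", "beta-1", "increases heart rate"),
    ("parasympathetic", "heart (SA node)"): ("acetylcholine", "M2", "decreases heart rate"),
    ("sympathetic", "arterioles (skin, gut)"): ("norepinephrine", "alpha-1", "vasoconstriction"),
    ("sympathetic", "bronchial smooth muscle"): ("norepinephrine/epinephrine", "beta-2", "relaxation (bronchodilation)"),
    ("parasympathetic", "bronchial smooth muscle"): ("acetylcholine", "M3", "contraction (bronchoconstriction)"),
    ("sympathetic", "sweat glands"): ("acetylcholine", "muscarinic", "secretion (the cholinergic sympathetic exception)"),
}

def describe(division: str, target: str) -> str:
    """Return a one-line description of the dominant effect for a division/target pair."""
    transmitter, receptor, effect = AUTONOMIC_EFFECTS[(division, target)]
    return f"{division} input to {target}: {transmitter} on {receptor} receptors -> {effect}"

for key in AUTONOMIC_EFFECTS:
    print(describe(*key))
```

The table makes the general rule concrete: the same transmitter can excite one tissue and inhibit another, depending entirely on which receptor subtype the target expresses.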
In contrast to the relatively restricted responses generated by norepinephrine and acetylcholine released by sympathetic and parasympathetic ganglion cells, respectively, neurons of the enteric nervous system achieve an enormous diversity of target effects by virtue of many different neurotransmitters, most of which are neuropeptides associated with specific cell groups in either the myenteric or submucous plexuses mentioned earlier. The details of these agents and their actions are beyond the scope of this introductory account Box 49-1 First Isolation of a Chemical Transmitter The existence of chemical messengers was first postulated by John Langley and Henry Dale and their students on the basis of their pharmacological studies dating from the beginning of the century. However, convincing evidence for a neurotransmitter was not provided until 1920, when Otto Loewi, in a simple but decisive experiment, examined the autonomic innervation of two isolated, beating frog hearts. In his own words: The night before Easter Sunday of that year I awoke, turned on the light, and jotted down a few notes on a tiny slip of paper. Then I fell asleep again. It occurred to me at six o'clock in the morning that during the night I had written down something most important, but I was unable to decipher the scrawl. The next night, at three o'clock, the idea returned. It was the design of an experiment to determine whether or not the hypothesis of chemical transmission that I had uttered seventeen years ago was correct. I got up immediately, went to the laboratory, and performed a simple experiment on a frog heart according to the nocturnal design. I have to describe briefly this experiment since its results became the foundation of the theory of chemical transmission of the nervous impulse. The hearts of two frogs were isolated, the first with its nerves, the second without. Both hearts were attached to Straub cannulas filled with a little Ringer solution. The vagus nerve of the first heart was stimulated for a few minutes. Then the Ringer solution that had been in the first heart during the stimulation of the vagus was transferred to the second heart. It slowed and its beat diminished just as if its vagus had been stimulated. Similarly, when the accelerator nerve was stimulated and the Ringer from this period transferred, the second heart speeded up and its beat increased. These results unequivocally proved that the nerves do not influence the heart directly but liberate from their terminals specific chemical substances which, in their turn, cause the well-known modifications of the function of the heart characteristic of the stimulation of its nerves. Loewi called this substance Vagusstoff (vagus substance). Soon after, Vagusstoff was identified chemically as acetylcholine. The nucleus of the solitary tract distributes visceral sensory information within the brain along three main pathways. Some neurons in the nucleus of the solitary tract directly innervate preganglionic neurons in the medulla and spinal cord, triggering direct autonomic reflexes. For example, there are direct inputs from the nucleus of the solitary tract to vagal motor neurons controlling esophageal and gastric motility, which are important for ingesting food. Also, projections from the nucleus of the solitary tract to the spinal cord are involved in respiratory reflex responses to lung inflation. 
Other neurons in the nucleus project to the lateral medullary reticular formation, where they engage populations of premotor neurons that organize more complex, patterned autonomic reflexes. For example, groups of neurons in the rostral ventrolateral medulla control blood pressure by regulating both blood flow to different vascular beds and vagal tone in the heart to modulate heart rate. Other groups of neurons control complex responses such as vomiting and respiratory rhythm (a somatic motor response that has an important autonomic component and that depends critically on visceral sensory information). The third main projection from the nucleus of the solitary tract provides visceral sensory input to a network of cell groups that extend from the pons and midbrain up through the hypothalamus, amygdala, and cerebral cortex. This network coordinates autonomic responses and integrates them into ongoing patterns of behavior. These will be described in more detail after we consider more elementary autonomic reflexes. Autonomic Neurons Use a Variety of Chemical Transmitters Autonomic ganglion cells receive and integrate inputs from both the central nervous system (through preganglionic nerve terminals) and the periphery (through branches of sensory nerves that terminate in the ganglia). Most of the sensory fibers are nonmyelinated and may release neuropeptides, such as substance P and calcitonin gene-related peptide (CGRP), onto ganglion cells. Preganglionic fibers themselves primarily use ACh as their transmitter. Ganglionic Transmission Involves Both Fast and Slow Synaptic Potentials Preganglionic activity induces both brief and prolonged responses from postganglionic neurons. ACh released from preganglionic terminals evokes fast excitatory postsynaptic potentials (EPSPs) mediated by nicotinic ACh receptors. The fast EPSP is often large enough to generate an action potential in the postganglionic neuron, and it is thus regarded as the principal synaptic pathway for ganglionic transmission in both the sympathetic and parasympathetic systems. ACh also evokes slow EPSPs and inhibitory postsynaptic potentials (IPSPs) in postganglionic neurons. These slow potentials can modulate the excitability of these cells. They have been most often studied in sympathetic ganglia but are also known to occur in some parasympathetic ganglia. Slow EPSPs or IPSPs are mediated by muscarinic ACh receptors (Figure 49-6). The slow excitatory potential results when Na+ and Ca2+ channels open and M-type K+ channels close. The M-type channels are normally active at the resting membrane potential, so their closure leads to membrane depolarization (Chapter 13). The slow inhibitory potential results from the opening of K+ channels, allowing K+ ions to flow out of the cell, resulting in hyperpolarization. The fast cholinergic EPSP reaches a maximum within 10-20 ms; the slow cholinergic synaptic potentials take up to half a second to reach their maximum and last for a second or more (Figure 49-6). Even slower synaptic potentials, lasting up to a minute, are evoked by neuropeptides, a variety of which are present in the terminals of preganglionic neurons and sensory nerve endings. The actions of one peptide have been studied in detail and reveal important features of peptidergic transmission. In some, but not all, preganglionic nerve terminals in bullfrog sympathetic ganglia, ACh is colocalized with a luteinizing hormone-releasing hormone (LHRH)-like peptide. 
High-frequency stimulation of the preganglionic nerves causes the peptide to be released, evoking a slow, long-lasting EPSP in all postganglionic neurons (Figure 49-6), even those not directly innervated by the peptidergic fibers. The peptide must diffuse over considerable distances to influence distant receptive neurons. The slow peptidergic EPSP, like the slow cholinergic excitatory potential, also results from the closure of M-type channels and the opening of Na+ and Ca2+ channels. The peptidergic excitatory potential alters the excitability of autonomic ganglion cells for long periods after intense activation of preganglionic inputs. No mammalian equivalent of the actions of the LHRH-like peptide in amphibians has yet been identified, but the neuropeptide substance P released from sensory afferent terminals in mammals evokes a similar slow, long-lasting EPSP. Norepinephrine and Acetylcholine Are the Predominant Transmitters in the Autonomic Nervous System Most postganglionic sympathetic neurons release norepinephrine, which acts on a variety of different adrenergic receptors. There are five major types of adrenergic receptors, and these are the target for several medically important drugs (Table 49-1). ATP and Adenosine Have Potent Extracellular Actions Adenosine triphosphate (ATP) is an important cotransmitter with norepinephrine in many postganglionic sympathetic neurons. By acting on ATP-gated ion channels (P2 purinergic receptors), it is responsible for some of the fast responses seen in target tissues (Table 49-1). The proportion of ATP to norepinephrine varies considerably in different sympathetic nerves. The ATP component is relatively minor in nerves to blood vessels in the rat tail and rabbit ear, while the responses of guinea pig submucosal arterioles to sympathetic stimulation appear to be mediated solely by ATP. The nucleotide adenosine is formed from the hydrolysis of ATP and is recognized by P1 purinergic receptors (Table 49-1) located both pre- and postjunctionally. It is thought to play a modulatory role in autonomic transmission, particularly in the sympathetic system. Adenosine may dampen sympathetic function after intense sympathetic activation by activating receptors on sympathetic nerve endings that inhibit further norepinephrine and ATP release. Adenosine also has inhibitory actions in cardiac and smooth muscle that tend to oppose the excitatory actions of norepinephrine. Many Different Neuropeptides Are Present in Autonomic Neurons Neuropeptides are colocalized with norepinephrine and ACh in autonomic neurons. Cholinergic preganglionic neurons in the spinal cord and brain stem and their terminals in autonomic ganglia may contain enkephalins, neurotensin, somatostatin, or substance P. Noradrenergic postganglionic sympathetic neurons may also express a variety of neuropeptides. Neuropeptide Y is present in as many as 90% of the cells and modulates sympathetic transmission. In tissues in which the nerve endings are distant from their targets (more than 60 nm, as for the rabbit ear artery), neuropeptide Y potentiates both the purinergic and adrenergic components of the tissue response, probably by acting postsynaptically. In contrast, in tissues with dense sympathetic innervation and where the target is closer (20 nm, such as the vas deferens), neuropeptide Y acts presynaptically to inhibit release of ATP and norepinephrine, thus dampening the tissue response. 
The peptides galanin and dynorphin are often found with neuropeptide Y in sympathetic neurons, which can contain several neuropeptides. Cholinergic postganglionic sympathetic neurons commonly contain CGRP and vasoactive intestinal polypeptide (VIP) (Figure 49-7).
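The fast nicotinic, slow muscarinic, and still slower peptidergic potentials described earlier in this section differ mainly in time course: tens of milliseconds, hundreds of milliseconds to seconds, and up to a minute, respectively. The sketch below simply generates three alpha-function-shaped potentials whose time constants echo those orders of magnitude; the waveform shape and the specific amplitudes and time constants are illustrative assumptions chosen to match the qualitative description in the text, not measured values.

```python
import numpy as np

def alpha_psp(t_ms, tau_ms, amplitude_mv):
    """Alpha-function synaptic potential: rises from zero and peaks at t = tau."""
    return amplitude_mv * (t_ms / tau_ms) * np.exp(1.0 - t_ms / tau_ms)

t = np.linspace(0.0, 60_000.0, 60_001)  # 60 s of simulated time in 1 ms steps
waveforms = {
    "fast nicotinic EPSP": alpha_psp(t, tau_ms=10.0, amplitude_mv=20.0),    # peaks within ~10-20 ms
    "slow muscarinic EPSP": alpha_psp(t, tau_ms=400.0, amplitude_mv=5.0),   # peaks in a few hundred ms
    "peptidergic EPSP": alpha_psp(t, tau_ms=10_000.0, amplitude_mv=2.0),    # persists for tens of seconds
}
for name, v in waveforms.items():
    print(f"{name:22s} peak {v.max():4.1f} mV at {t[v.argmax()] / 1000:6.2f} s")
```

Plotting or printing the three traces side by side makes the main point of the section visible at a glance: a single ganglionic synapse can signal on time scales that differ by three to four orders of magnitude, depending on which receptor class is engaged.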
Visceral Motor Reflex Functions Many examples of specific autonomic functions could be used to illustrate in more detail how the visceral motor system operates. The three outlined here—control of cardiovascular function, control of the bladder, and control of sexual function—have been chosen primarily because of their importance in human physiology and clinical practice. Autonomic Regulation of Cardiovascular Function The cardiovascular system is subject to precise reflex regulation so that an appropriate supply of oxygenated blood can be reliably provided to different body tissues under a wide range of circumstances. The sensory monitoring for this critical homeostatic process entails primarily mechanical (barosensory) information about pressure in the arterial system and, secondarily, chemical (chemosensory) information about the level of oxygen and carbon dioxide in the blood. The parasympathetic and sympathetic activity relevant to cardiovascular control is determined by the information supplied by these sensors. The mechanoreceptors (called baroreceptors) are located in the heart and major blood vessels; the chemoreceptors are located primarily in the carotid bodies, which are small, highly specialized organs located at the bifurcation of the common carotid arteries (some chemosensory tissue is also found in the aorta). The nerve endings in baroreceptors are activated by deformation as the elastic elements of the vessel walls expand and contract. The chemoreceptors in the carotid bodies and aorta respond directly to the partial pressure of oxygen and carbon dioxide in the blood. Both afferent systems convey their status via the glossopharyngeal and vagus nerves to the nucleus of the solitary tract (Figure 21.7), which relays this information to the hypothalamus and the relevant brainstem tegmental nuclei (see earlier). The afferent information from changes in arterial pressure and blood gas levels reflexively modulates the activity of the relevant visceral motor pathways and, ultimately, of target smooth and cardiac muscles and other more specialized structures. For example, a rise in blood pressure activates baroreceptors that, via the pathway illustrated in Figure 21.7, inhibit the tonic activity of sympathetic preganglionic neurons in the spinal cord. In parallel, the pressure increase stimulates the activity of the parasympathetic preganglionic neurons in the dorsal motor nucleus of the vagus and the nucleus ambiguus that influence heart rate. The carotid chemoreceptors also have some influence, but this is a less important drive than that stemming from the baroreceptors. As a result of this shift in the balance of sympathetic and parasympathetic activity, the stimulatory noradrenergic effects of postganglionic sympathetic innervation on the cardiac pacemaker and cardiac musculature are reduced (an effect abetted by the decreased output of catecholamines from the adrenal medulla and the decreased vasoconstrictive effects of sympathetic innervation on the peripheral blood vessels). At the same time, activation of the cholinergic parasympathetic innervation of the heart decreases the discharge rate of the cardiac pacemaker in the sinoatrial node and slows the ventricular conduction system. These parasympathetic influences are mediated by an extensive series of parasympathetic ganglia in and near the heart, which release acetylcholine onto cardiac pacemaker cells and cardiac muscle fibers. 
As a result of this combination of sympathetic and parasympathetic effects, heart rate and the effectiveness of the atrial and ventricular myocardial contraction are reduced and the peripheral arterioles dilate, thus lowering the blood pressure. In contrast to this sequence of events, a drop in blood pressure, as might occur from blood loss, has the opposite effect, inhibiting parasympathetic activity while increasing sympathetic activity. As a result, norepinephrine is released from sympathetic postganglionic terminals, increasing the rate of cardiac pacemaker activity and enhancing cardiac contractility, at the same time increasing release of catecholamines from the adrenal medulla (which further augments these and many other sympathetic effects that enhance the response to this threatening situation). Norepinephrine released from the terminals of sympathetic ganglion cells also acts on the smooth muscles of the arterioles to increase the tone of the peripheral vessels, particularly those in the skin, subcutaneous tissues, and muscles, thus shunting blood away from these tissues to those organs where oxygen and metabolites are urgently needed to maintain function (e.g., brain, heart, and kidneys in the case of blood loss). If these reflex sympathetic responses fail to raise the blood pressure sufficiently (in which case the patient is said to be in shock), the vital functions of these organs begin to fail, often catastrophically. A more mundane circumstance that requires a reflex autonomic response to a fall in blood pressure is standing up. Rising quickly from a prone position produces a shift of some 300–800 milliliters of blood from the thorax and abdomen to the legs, resulting in a sharp (approximately 40%) decrease in the output of the heart. The adjustment to this normally occurring drop in blood pressure (called orthostatic hypotension) must be rapid and effective, as evidenced by the dizziness sometimes experienced in this situation. Indeed, normal individuals can briefly lose consciousness as a result of blood pooling in the lower extremities, which is the usual cause of fainting among healthy individuals who must stand still for abnormally long periods (the "Beefeaters" who guard the Tower of London, for example). The sympathetic innervation of the heart arises from the preganglionic neurons in the intermediolateral column of the spinal cord, extending from roughly the first through fifth thoracic segments (see Table 21.1). The primary visceral motor neurons are in the adjacent thoracic paravertebral and prevertebral ganglia of the cardiac plexus. The parasympathetic preganglionics, as already mentioned, are in the dorsal motor nucleus of the vagus nerve and the nucleus ambiguus, projecting to parasympathetic ganglia in and around the heart and great vessels.
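At its core, the baroreceptor reflex just described is a negative feedback loop: deviations of arterial pressure from a set point shift the balance of sympathetic and parasympathetic outflow in the direction that opposes the deviation. The following toy simulation captures only that logic; the set point, gain, decay factor, and the size of the "standing up" disturbance are invented purely for illustration, and the model ignores essentially all real cardiovascular dynamics.

```python
# Toy negative-feedback caricature of the baroreflex. All numbers are illustrative.
SET_POINT = 100.0  # assumed target mean arterial pressure (mmHg)
GAIN = 0.08        # assumed strength with which the reflex corrects a pressure error

def simulate(pressure: float, disturbance: float, steps: int = 50) -> list:
    """Apply a decaying disturbance (e.g., standing up -> negative value)
    and let the reflex pull pressure back toward the set point."""
    trace = []
    for _ in range(steps):
        error = SET_POINT - pressure          # deviation sensed by baroreceptors
        correction = GAIN * error             # net shift in sympathetic vs. parasympathetic drive
        pressure += correction + disturbance
        disturbance *= 0.8                    # the initial pooling of blood partially resolves on its own
        trace.append(pressure)
    return trace

trace = simulate(pressure=100.0, disturbance=-8.0)
print(f"initial dip to ~{min(trace):.0f} mmHg, recovered to ~{trace[-1]:.0f} mmHg")
```

Running the sketch shows pressure dipping and then being pulled back toward the set point, the same qualitative behavior as the orthostatic adjustment described above; removing the correction term (setting the gain to zero) leaves the pressure depressed, which is the situation faced when the reflex fails.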
Figure 21.4. Organization of the enteric component of the visceral motor system. (A) Sympathetic and parasympathetic innervation of the enteric nervous system, and the intrinsic neurons of the gut. (B) Detailed organization of nerve cell plexuses in the gut wall. The neurons of the submucous plexus (Meissner's plexus) are concerned with the secretory aspects of gut function, and the myenteric plexus (Auerbach's plexus) with the motor aspects of gut function (e.g., peristalsis). The Enteric Nervous System An enormous number of neurons are specifically associated with the gastrointestinal tract to control its many functions; indeed, more neurons are said to reside in the human gut than in the entire spinal cord. As already noted, the activity of the gut is modulated by both the sympathetic and the parasympathetic divisions of the visceral motor system. However, the gut also has an extensive system of nerve cells in its wall (as do its accessory organs such as the pancreas and gallbladder) that do not fit neatly into the sympathetic or parasympathetic divisions of the visceral motor system (Figure 21.4A). To a surprising degree, these neurons and the complex enteric plexuses in which they are found (plexus means "network") operate more or less independently according to their own reflex rules; as a result, many gut functions continue perfectly well without sympathetic or parasympathetic supervision (peristalsis, for example, occurs in isolated gut segments in vitro). Thus, most investigators prefer to classify the enteric nervous system as a separate component of the visceral motor system. The neurons in the gut wall include local and centrally projecting sensory neurons that monitor mechanical and chemical conditions in the gut, local circuit neurons that integrate this information, and motor neurons that influence the activity of the smooth muscles in the wall of the gut and glandular secretions (e.g., of digestive enzymes, mucus, stomach acid, and bile). This complex arrangement of nerve cells intrinsic to the gut is organized into: (1) the myenteric (or Auerbach's) plexus, which is specifically concerned with regulating the musculature of the gut; and (2) the submucous (or Meissner's) plexus, which is located, as the name implies, just beneath the mucous membranes of the gut and is concerned with chemical monitoring and glandular secretion (Figure 21.4B). As already mentioned, the preganglionic parasympathetic neurons that influence the gut are primarily in the dorsal motor nucleus of the vagus nerve in the brainstem and the intermediate gray zone in the sacral spinal cord segments. The preganglionic sympathetic innervation that modulates the action of the gut plexuses derives from the thoraco-lumbar cord, primarily by way of the celiac, superior, and inferior mesenteric ganglia.
Autonomic Regulation of the Bladder The autonomic regulation of the bladder provides a good example of the interplay between the voluntary motor system (obviously, we have voluntary control over urination), and the sympathetic and parasympathetic divisions of the visceral motor system, which operate involuntarily. The arrangement of afferent and efferent innervation of the bladder is shown in Figure 21.8 . The parasympathetic control of the bladder musculature, the contraction of which causes bladder emptying, originates with neurons in the sacral spinal cord segments (S2–S4) that innervate visceral motor neurons in parasympathetic ganglia in or near the bladder wall. Mechanoreceptors in the bladder wall supply visceral afferent information to the spinal cord and to higher autonomic centers in the brainstem (primarily the nucleus of the solitary tract), which in turn project to the various central coordinating centers for bladder function in the brainstem tegmentum and elsewhere. The sympathetic innervation of the bladder originates in the lower thoracic and upper lumbar spinal cord segments (T10-L2), the preganglionic axons running to sympathetic neurons in the inferior mesenteric ganglion and the ganglia of the pelvic plexus. The postganglionic fibers from these ganglia travel in the hypogastric and pelvic nerves to the bladder, where sympathetic activity causes the internal urethral sphincter to close (postganglionic sympathetic fibers also innervate the blood vessels of the bladder, and in males the smooth muscle fibers of the prostate gland). Stimulation of this pathway in response to a modest increase in bladder pressure from the accumulation of urine thus closes the internal sphincter and inhibits the contraction of the bladder wall musculature, allowing the bladder to fill. At the same time, moderate distension of the bladder inhibits parasympathetic activity (which would otherwise contract the bladder and allow the internal sphincter to open). When the bladder is full, afferent activity conveying this information centrally increases parasympathetic tone and decreases sympathetic activity, causing the internal sphincter muscle to relax and the bladder to contract. In this circumstance, the urine is held in check by the voluntary (somatic) motor innervation of the external urethral sphincter muscle (see Figure 21.8 ). The voluntary control of the external sphincter is mediated by α-motor neurons of the ventral horn in the sacral spinal cord segments (S2–S4), which cause the striated muscle fibers of the sphincter to contract. During bladder filling (and subsequently, until circumstances permit urination) these neurons are active, keeping the external sphincter closed and preventing bladder emptying. During urination (or “voiding,” as clinicians often call this process), this tonic activity is temporarily inhibited, leading to relaxation in the external sphincter muscle. Thus, urination results from the coordinated activity of sacral parasympathetic neurons and temporary inactivity of the α-motor neurons of the voluntary motor system. The central governance of these events stems from the rostral pons, the relevant pontine circuitry being referred to as the micturition center ( micturition is also “medicalese” for urination). This phrase implies more knowledge about the central control of bladder function than is actually available. 
As many as five other central regions have been implicated in the coordination of urinary functions, including the locus coeruleus, the hypothalamus, the septal nuclei, and several cortical regions. The cortical regions primarily concerned with the voluntary control of bladder function include the paracentral lobule, the cingulate gyrus, and the frontal lobes. This functional distribution accords with the motor representation of the perineal musculature in the medial part of the primary motor cortex (see Chapter 17), and the planning functions of the frontal lobes (see Chapter 26), which are equally pertinent to bodily functions (remembering to stop by the bathroom before going on a long trip, for instance). Importantly, paraplegic patients, or patients who have otherwise lost descending control of the sacral spinal cord, continue to exhibit autonomic regulation of bladder function, since urination is eventually stimulated reflexively at the level of the sacral cord by sufficient bladder distension. Unfortunately, this reflex is not efficient in the absence of descending motor control, resulting in a variety of problems in paraplegics and others with diminished or absent central control of bladder function. The major difficulty is incomplete bladder emptying, which often leads to chronic urinary tract infections from the culture medium provided by retained urine, and thus the need for an indwelling catheter to ensure adequate drainage. Urogenital Reflexes The control of bladder emptying is unusual because it involves both involuntary autonomic reflexes and some voluntary control. The excitatory input to the bladder wall that causes contraction and promotes emptying is parasympathetic. Activation of parasympathetic postganglionic neurons in the pelvic ganglion plexus near to and within the bladder wall contracts the bladder's smooth muscle. These neurons are quiet when the bladder begins to fill but are activated reflexly by visceral afferents when the bladder is distended. The sympathetic nervous system relaxes the bladder smooth muscle. Axons of preganglionic sympathetic neurons project from the thoracic and upper lumbar spinal cord to the inferior mesenteric ganglion. From there, postganglionic fibers travel to the bladder in the hypogastric nerve. When the sympathetic system is activated by low-frequency firing in sensory afferents that respond to tension in the bladder wall, the parasympathetic neurons in the pelvic ganglion are inhibited, relaxing bladder smooth muscle and exciting the internal sphincter muscle. Thus, during bladder filling the sympathetic system promotes relaxation of the bladder wall directly while maintaining closure of the internal sphincter. Somatic motor neurons in the ventral horn of the sacral spinal cord innervate striated muscle fibers in the external urethral sphincter, causing it to contract. These motor neurons are stimulated by visceral afferents that are activated when the bladder is partially full. As the bladder fills, spinal sensory afferents relay this information to a region in the pons that coordinates micturition. This pontine area, sometimes called Barrington's nucleus after the British neurophysiologist who first described it, also receives important descending inputs from the forebrain concerning behavioral cues for emptying the bladder. Descending pathways from Barrington's nucleus cause coordinated inhibition of sympathetic and somatic systems, relaxing both sphincters. 
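The reciprocal pattern just described, with sympathetic and somatic activity dominating during storage and parasympathetic activity dominating during voiding, can be summarized as a simple two-state switch. The sketch below is a deliberately crude caricature of that logic; the volume threshold and the single "permission" flag standing in for descending input from Barrington's nucleus are assumptions made purely for illustration.

```python
# Crude two-state caricature of bladder storage vs. voiding. Illustrative only.
FULL_THRESHOLD_ML = 400  # assumed volume at which afferent activity signals "full"

def bladder_mode(volume_ml: float, voiding_permitted: bool) -> dict:
    """Return the dominant autonomic/somatic pattern for a given bladder state."""
    if volume_ml >= FULL_THRESHOLD_ML and voiding_permitted:
        # Voiding: pontine output raises parasympathetic tone and withdraws
        # sympathetic activity and somatic drive to the external sphincter.
        return {"parasympathetic": "high", "sympathetic": "low",
                "external_sphincter_motor_neurons": "inhibited", "result": "bladder empties"}
    # Storage: sympathetic and somatic activity keep both sphincters closed
    # while the bladder wall musculature stays relaxed.
    return {"parasympathetic": "low", "sympathetic": "high",
            "external_sphincter_motor_neurons": "active", "result": "bladder fills"}

print(bladder_mode(150, voiding_permitted=False))
print(bladder_mode(450, voiding_permitted=False))  # full but holding: storage pattern persists
print(bladder_mode(450, voiding_permitted=True))
```

The middle case is the one that distinguishes bladder control from a purely spinal reflex: a full bladder alone is not sufficient for voiding as long as descending input withholds permission, which is exactly the voluntary component lost after spinal cord injury.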
The onset of urinary flow through the urethra causes reflex contraction of the bladder that is under parasympathetic control. In patients with spinal cord injuries at the cervical or thoracic levels, the spinal reflex control of micturition remains intact, but the connections with the pons are severed. As a result, micturition cannot be voluntarily controlled. When it does occur as a spinal reflex resulting from bladder overfilling, urination is incomplete. As a result, urinary tract infections are common, and it may be necessary to empty the bladder mechanically by catheterization. Sexual reflexes are organized in a pattern that is analogous to those controlling bladder function. Erectile tissue is controlled largely by the parasympathetic nervous system, involving neurons that produce nitric oxide as their main mediator. Glandular secretion is also parasympathetically mediated. Emission in males is caused by sympathetic control of the seminal vesicles and vas deferens, and ejaculation involves control of striated muscles in the pelvic floor as well. Supraspinal inputs play an important role in producing the coordinated pattern of sexual response, although some simple sexual reflexes can be activated even after spinal transection (e.g., penile erection can be elicited by local sensory stimuli).
Autonomic Regulation of Sexual Function Much like control of the bladder, sexual responses are mediated by the coordinated activity of sympathetic, parasympathetic, and somatic innervation. Although these reflexes differ in detail in males and females, basic similarities allow the two sexes to be considered together, not only in humans but in mammals generally (see Chapter 30). The relevant autonomic effects include: (1) the mediation of vascular dilation, which causes penile or clitoral erection; (2) stimulation of prostatic or vaginal secretions; (3) smooth muscle contraction of the vas deferens during ejaculation or rhythmic vaginal contractions during orgasm in females; and (4) contractions of the somatic pelvic muscles that accompany orgasm in both sexes. Like the urinary tract, the reproductive organs receive preganglionic parasympathetic innervation from the sacral spinal cord, preganglionic sympathetic innervation from the outflow of the lower thoracic and upper lumbar spinal cord segments, and somatic motor innervation from α-motor neurons in the ventral horn of the lower spinal cord segments (Figure 21.9). The sacral parasympathetic pathway controlling the sexual organs in both males and females originates in the sacral segments S2–S4 and reaches the target organs via the pelvic nerves. Activity of the postganglionic neurons in the relevant parasympathetic ganglia causes dilation of penile or clitoral arteries, and a corresponding relaxation of the smooth muscles of the venous (cavernous) sinusoids, which leads to expansion of the sinusoidal spaces. As a result, the amount of blood in the tissue is increased, leading to a sharp rise in the pressure and an expansion of the cavernous spaces (i.e., erection). The mediator of the smooth muscle relaxation leading to erection is not acetylcholine (as in most postganglionic parasympathetic actions), but nitric oxide (see Chapter 8). The drug sildenafil (Viagra®), for instance, acts by inhibiting the phosphodiesterase (PDE5) that breaks down cyclic GMP, thereby prolonging the effect of NO-stimulated cGMP synthesis, enhancing the relaxation of the venous sinusoids, and promoting erection in males with erectile dysfunction. Parasympathetic activity also provides excitatory input to the vas deferens, seminal vesicles, and prostate in males, or vaginal glands in females. In contrast, sympathetic activity causes vasoconstriction and loss of erection. The lumbar sympathetic pathway to the sexual organs originates in the thoraco-lumbar segments (T11-L2) and reaches the target organs via the corresponding sympathetic chain ganglia and the inferior mesenteric and pelvic ganglia, as in the case of the autonomic bladder control. The afferent effects of genital stimulation are conveyed centrally from somatic sensory endings via the dorsal roots of S2–S4, eventually reaching the somatic sensory cortex (reflex sexual excitation may also occur by local stimulation, as is evident in paraplegics). The reflex effects of such stimulation are increased parasympathetic activity, which, as noted, causes relaxation of the smooth muscles in the wall of the sinusoids and subsequent erection. Finally, the somatic component of reflex sexual function arises from α-motor neurons in the lumbar and sacral spinal cord segments. 
These neurons provide excitatory innervation to the bulbocavernosus and ischiocavernosus muscles, which are active during ejaculation in males and mediate the contractions of the perineal (pelvic floor) muscles that accompany orgasm in both males and females. Sexual functions are governed centrally by the anterior-medial and medial-tuberal zones of the hypothalamus, which contain a variety of nuclei pertinent to visceral motor control and reproductive behavior (see Box A). Although they remain poorly understood, these nuclei act as integrative centers for sexual responses and are also thought to be involved in more complex aspects of sexuality, such as sexual preference and gender identity (see Chapter 30). The relevant hypothalamic nuclei receive inputs from several areas of the brain, including—as one might imagine—the cortical and subcortical structures concerned with emotion and memory (see Chapters 29 and 31).
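Returning to the NO-cGMP mechanism of erection described above, the logic of PDE5 inhibition can be made explicit with a minimal rate-balance sketch: if cGMP is produced at an NO-dependent rate and degraded in proportion to its concentration, then lowering the degradation rate constant (which is what a PDE5 inhibitor such as sildenafil does) raises the steady-state cGMP level. All rate constants below are arbitrary illustrative values, not measured ones.

```python
# Minimal rate-balance sketch of cGMP levels in cavernosal smooth muscle.
# At steady state, d[cGMP]/dt = synthesis - k_pde * [cGMP] = 0, so [cGMP] = synthesis / k_pde.
# All numbers are arbitrary and for illustration only.

def steady_state_cgmp(no_stimulated_synthesis: float, k_pde5: float) -> float:
    """Steady-state cGMP level (arbitrary units) for given synthesis and breakdown rates."""
    return no_stimulated_synthesis / k_pde5

baseline = steady_state_cgmp(no_stimulated_synthesis=1.0, k_pde5=0.5)
with_pde5_inhibition = steady_state_cgmp(no_stimulated_synthesis=1.0, k_pde5=0.1)  # sildenafil-like effect
print(f"relative cGMP: baseline {baseline:.1f}, with PDE5 inhibition {with_pde5_inhibition:.1f}")
```

The sketch also makes clear why such drugs require intact NO signaling: if the NO-stimulated synthesis term is near zero, reducing the breakdown rate still leaves little cGMP to act on the sinusoidal smooth muscle.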
Female Reproductive System Sympathetic Innervation The sympathetic preganglionic neurons innervating the smooth muscle of the uterine wall are located in the IML at the T12-L2 level. Their preganglionic fibers pass through the sympathetic chain, exit in the lumbar splanchnic nerves, and synapse on postganglionic neurons in the inferior mesenteric ganglion. The postganglionic fibers from these neurons pass through the hypogastric plexus and innervate the female sexual organ (vagina) and the uterus (Fig. 22-10). Some preganglionic fibers from L1-L2 spinal segments descend in the sympathetic chain and synapse on postganglionic neurons in the hypogastric plexus. The postganglionic fibers from these neurons then innervate the female erectile tissue (clitoris) (Fig. 22-1). Activation of the sympathetic nervous system results in contraction of the uterus. Parasympathetic Innervation The location of the parasympathetic preganglionic neurons and the pathways they follow to innervate the uterus and female sexual organ are similar to those described for the male sexual organ (Fig. 22-10). The mechanism of vasodilation in the female erectile tissue (clitoris) is similar to that described for the male sexual organ. Parasympathetic stimulation causes stimulation of the female erectile tissue and relaxation of the uterine smooth muscle. The relaxation of the uterine smooth muscle may be variable due to hormonal influences on this muscle. The pain-sensing neurons innervating the uterus are located in the dorsal root ganglia at T12-L2 and S2-S4. Their peripheral axons pass through the hypogastric plexus and terminate in the uterus, while their central terminals synapse in the substantia gelatinosa at the T12-L2 and S2-S4 levels. The secondary pain-sensing neurons then project to the cerebral cortex via the thalamus (see Chapter 15).
Medullary, Pontine, and Mesencephalic Control of the Autonomic Nervous System Many neuronal areas in the brain stem reticular substance and along the course of the tractus solitarius of the medulla, pons, and mesencephalon, as well as in many special nuclei (Figure 60–5), control different autonomic functions such as arterial pressure, heart rate, glandular secretion in the gastrointestinal tract, gastrointestinal peristalsis, and degree of contraction of the urinary bladder. Control of each of these is discussed at appropriate points in this text. Suffice it to point out here that the most important factors controlled in the brain stem are arterial pressure, heart rate, and respiratory rate . Indeed, transection of the brain stem above the midpontine level allows basal control of arterial pressure to continue as before but prevents its modulation by higher nervous centers such as the hypothalamus. Conversely, transection immediately below the medulla causes the arterial pressure to fall to less than one-half normal. Closely associated with the cardiovascular regulatory centers in the brain stem are the medullary and pontine centers for regulation of respiration, which are discussed in Chapter 41. Although this is not considered to be an autonomic function, it is one of the involuntary functions of the body. Control of Brain Stem Autonomic Centers by Higher Areas. Signals from the hypothalamus and even from the cerebrum can affect the activities of almost all the brain stem autonomic control centers. For instance, stimulation in appropriate areas mainly of the posterior hypothalamus can activate the medullary cardiovascular control centers strongly enough to increase arterial pressure to more than twice normal. Likewise, other hypothalamic centers control body temperature, increase or decrease salivation and gastrointestinal activity, and cause bladder emptying. To some extent, therefore, the autonomic centers in the brain stem act as relay stations for control activities initiated at higher levels of the brain, especially in the hypothalamus. In Chapters 58 and 59, it is pointed out also that many of our behavioral responses are mediated through (1) the hypothalamus, (2) the reticular areas of the brain stem, and (3) the autonomic nervous system. Indeed, some higher areas of the brain can alter function of the whole autonomic nervous system or of portions of it strongly enough to cause severe autonomic-induced disease such as peptic ulcer of the stomach or duodenum, constipation, heart palpitation, or even heart attack.
The Hypothalamus The hypothalamus is located at the base of the forebrain, bounded by the optic chiasm rostrally and the midbrain tegmentum caudally. It forms the floor and ventral walls of the third ventricle and is continuous through the infundibular stalk with the posterior pituitary gland, as illustrated in figure A. Because of its central position in the brain and its proximity to the pituitary, it is not surprising that the hypothalamus integrates information from the forebrain, brainstem, spinal cord, and various endocrine systems, being particularly important in the central control of visceral motor functions. The hypothalamus comprises a large number of distinct nuclei, each with its own complex pattern of connections and functions. The nuclei, which are intricately interconnected, can be grouped in three longitudinal regions referred to as periventricular, medial, and lateral. They can also be grouped along the anterior-posterior dimension into the anterior (or preoptic), tuberal, and posterior regions (figure B). The anterior periventricular group contains the suprachiasmatic nucleus, which receives direct retinal input and drives circadian rhythms (see Chapter 28). More scattered neurons in the periventricular region (located along the wall of the third ventricle) manufacture peptides known as releasing or inhibiting factors that control the secretion of a variety of hormones by the anterior pituitary. The axons of these neurons project to the median eminence, a region at the junction of the hypothalamus and pituitary stalk, where the peptides are secreted into the portal circulation that supplies the anterior pituitary. Nuclei in the anterior-medial region include the paraventricular and supraoptic nuclei, which contain the neurosecretory neurons whose axons extend into the posterior pituitary. With appropriate stimulation, these neurons secrete oxytocin or vasopressin (antidiuretic hormone) directly into the bloodstream. Other neurons in the paraventricular nucleus project to the preganglionic neurons of the sympathetic and parasympathetic divisions in the brainstem and spinal cord. It is these cells that are thought to exert hypothalamic control over the visceral motor system and to modulate the activity of the poorly defined nuclei in the brainstem tegmentum that organize specific autonomic reflexes such as respiration and vomiting. The paraventricular nucleus, like other hypothalamic nuclei, receives inputs from the other hypothalamic zones, which are in turn related to the cortex, hippocampus, amygdala, and other central structures that, as noted in the text, are all capable of influencing visceral motor function. The medial-tuberal region nuclei (tuberal refers to the tuber cinereum, the anatomical name given to the middle portion of the inferior surface of the hypothalamus) include the dorsomedial and ventromedial nuclei, which are involved in feeding, reproductive and parenting behavior, thermoregulation, and water balance. These nuclei receive inputs from structures of the limbic system, as well as from visceral sensory nuclei in the brainstem (e.g., the nucleus of the solitary tract). Finally, the lateral region of the hypothalamus is really a rostral continuation of the midbrain reticular formation. Thus, the neurons of the lateral region are not grouped into nuclei, but are scattered among the fibers of the medial forebrain bundle, which runs through the lateral hypothalamus.
These cells control behavioral arousal and shifts of attention, especially as related to reproductive activities. In summary, the hypothalamus regulates an enormous range of physiological and behavioral activities, including control of body temperature, sexual activity, reproductive endocrinology, and attack-and-defense (aggressive) behavior. It is not surprising, then, that this intricate structure is the key controlling center for visceral motor activity and for homeostatic functions generally. Figure 49-11 The structure of the hypothalamus. A. Frontal view of the hypothalamus (section along the plane shown in part B). B. A medial view shows most of the main nuclei. The hypothalamus is often divided analytically into three areas in a rostrocaudal direction: the preoptic area, the tuberal level, and the posterior level. The Hypothalamus Integrates Autonomic and Endocrine Functions With Behavior The hypothalamus plays a particularly important role in regulating the autonomic nervous system and was once referred to as the “head ganglion” of the autonomic nervous system. But recent studies of hypothalamic function have led to a somewhat different view. Whereas early studies found that electrical stimulation or lesions in the hypothalamus can profoundly affect autonomic function, more recent investigations have demonstrated that many of these effects are due to involvement of descending and ascending pathways of the cerebral cortex or the basal forebrain passing through the hypothalamus. Modern studies indicate that the hypothalamus functions to integrate autonomic response and endocrine function with behavior, especially behavior concerned with the basic homeostatic requirements of everyday life. The hypothalamus serves this integrative function by regulating five basic physiological needs: (1) it controls blood pressure and electrolyte composition by a set of regulatory mechanisms that range from control of drinking and salt appetite to the maintenance of blood osmolality and vasomotor tone; (2) it regulates body temperature by means of activities ranging from control of metabolic thermogenesis to behaviors such as seeking a warmer or cooler environment; (3) it controls energy metabolism by regulating feeding, digestion, and metabolic rate; (4) it regulates reproduction through hormonal control of mating, pregnancy, and lactation; and (5) it controls emergency responses to stress, including physical and immunological responses to stress, by regulating blood flow to muscle and other tissues and the secretion of adrenal stress hormones. The hypothalamus regulates these basic life processes by recourse to three main mechanisms. First, the hypothalamus has access to sensory information from virtually the entire body. It receives direct inputs from the visceral sensory system and the olfactory system, as well as the retina. The visual inputs are used by the suprachiasmatic nucleus to synchronize the internal clock mechanism to the day-night cycle in the external world (Chapter 3). Visceral somatosensory inputs carrying information about pain are relayed to the hypothalamus from the spinal and trigeminal dorsal horn (Chapters 23 and 24). In addition, the hypothalamus has internal sensory neurons that are responsive to changes in local temperature, osmolality, glucose, and sodium, to name a few examples.
Finally, circulating hormones such as angiotensin II and leptin enter the hypothalamus at specialized zones along the margins of the third ventricle called circumventricular organs , where they interact directly with hypothalamic neurons. Second, the hypothalamus compares sensory information with biological set points. It compares, for example, local temperature in the preoptic area to the set point of 37°C and, if the hypothalamus is warm, activates mechanisms for heat dissipation. There are set points for a wide variety of physiological processes, including blood sugar, sodium, osmolality, and hormone levels. Finally, when the hypothalamus detects a deviation from a set point, it adjusts an array of autonomic, endocrine, and behavioral responses to restore homeostasis. If the body is too warm, the hypothalamus shifts blood flow from deep to cutaneous vascular beds and increases sweating, to increase heat loss through the skin. It increases vasopressin secretion, to conserve water for sweating. Meanwhile, the hypothalamus activates coordinated behaviors, such as seeking to change the local ambient temperature or seeking a cooler environment. All of these processes must be precisely coordinated. For example, adjustments in blood flow in different vascular beds are important for such diverse activities as thermoregulation, digestion, response to emergency, and sexual intercourse. In order to do this, the hypothalamus contains an array of specialized cell groups with different functional roles. The Hypothalamus Contains Specialized Groups of Neurons Clustered in Nuclei Although the hypothalamus is very small, occupying only about 4 grams of the total 1400 grams of adult human brain weight, it is packed with a complex array of cell groups and fiber pathways (Figure 49-11). The hypothalamus can be divided into three regions: anterior, middle, and posterior. The most anterior part of the hypothalamus, overlying the optic chiasm, is the preoptic area. The preoptic nuclei, which include the circadian pacemaker (suprachiasmatic nucleus), are mainly concerned with integration of different kinds of sensory information needed to judge deviation from physiological set point. The preoptic area controls blood pressure and composition; cycles of activity, body temperature, and many hormones; and reproductive activity. The middle third of the hypothalamus, overlying the pituitary stalk, contains the dorsomedial, ventromedial, paraventricular, supraoptic, and arcuate nuclei. The paraventricular nucleus includes both magnocellular and parvocellular neuroendocrine components controlling the posterior and anterior pituitary gland. In addition, it contains neurons that innervate both the parasympathetic and sympathetic preganglionic neurons in the medulla and the spinal cord, thus playing a major role also in regulating autonomic responses. The arcuate and periventricular nuclei, along the wall of the third ventricle, like the paraventricular nucleus contain parvocellular neuroendocrine neurons, whereas the supraoptic nucleus contains additional magnocellular neuroendocrine cells. The ventromedial and dorsomedial nuclei project mainly locally within the hypothalamus and to the periaqueductal gray matter, to regulate complex integrative functions such as control of growth, feeding, maturation, and reproduction. Finally, the posterior third of the hypothalamus includes the mammillary body and the overlying posterior hypothalamic area. 
In addition to the mammillary nuclei, whose function remains enigmatic, this region includes the tuberomammillary nucleus, a histaminergic cell group that is important in regulating wakefulness and arousal. The major nuclei of the hypothalamus are located for the most part in the medial part of the hypothalamus, sandwiched between two major fiber systems. A massive longitudinal fiber pathway, the medial forebrain bundle , runs through the lateral hypothalamus. The medial forebrain bundle connects the hypothalamus with the brain stem below, and with the basal forebrain, amygdala, and cerebral cortex above. Large neurons scattered among the fibers of the medial forebrain bundle provide long-ranging hypothalamic outputs that reach from the cerebral cortex to the sacral spinal cord. They are involved in organizing behaviors as well as autonomic responses. A second, smaller fiber system is located medial to the major hypothalamic nuclei, in the wall of the third ventricle. This periventricular fiber system contains longitudinal fibers that link the hypothalamus to the periaqueductal gray matter in the midbrain. This pathway is thought to be important in activating simple, stereotyped behavioral patterns, such as posturing during sexual behavior. The periventricular system also conveys the axons of the parvocellular neuroendocrine neurons located in the periventricular region, and including the paraventricular and arcuate nuclei, to the median eminence, for control of the anterior pituitary gland. They are met in the median eminence by the axons from the magnocellular neurons, which continue down the pituitary stalk to the posterior pituitary gland.
Inputs from limbic structures. Projections from hippocampal formation through the fornix (shown in red), amygdala through the stria terminalis (shown in green), and septal area through the medial forebrain bundle (shown in blue) to the hypothalamus. Other inputs, such as those from the brainstem, are omitted from this diagram.
Inputs from cerebral cortex. Diagrams illustrate the pathways by which the prefrontal cortex (green) and anterior cingulate gyrus (red) supply the hypothalamus by virtue of relays in the mediodorsal and midline thalamic nuclear groups (blue).
Efferent projections of the mammillary bodies to the anterior thalamic nucleus via the mammillothalamic tract and to the midbrain tegmentum via the mammillotegmental tract.
Major efferent projections of the hypothalamus. Not shown in this illustration are connections from the hypothalamus to the pituitary gland.
Figure 49-12 The hypothalamus controls the pituitary gland both directly and indirectly through hormone-releasing neurons. Peptidergic neurons (5) release oxytocin or vasopressin into the general circulation through the posterior pituitary. Two general types of neurons are involved in regulation of the anterior pituitary. Peptidergic neurons (3, 4) synthesize and release hormones into the hypophyseal-portal circulation. The second type of neuron is the link between the peptidergic neurons and the rest of the brain. These neurons, some of which are monoaminergic, are believed to form synapses with peptidergic neurons either on the cell body (1) or on the axon terminal (2). The Hypothalamus Controls the Endocrine System The hypothalamus controls the endocrine system directly, by secreting neuroendocrine products into the general circulation from the posterior pituitary gland, and indirectly, by secreting regulatory hormones into the local portal circulation, which drains into the blood vessels of the anterior pituitary (Figure 49-12). These regulatory hormones control the synthesis and release of anterior pituitary hormones into the general circulation. The highly fenestrated (perforated) capillaries of the posterior pituitary and median eminence of the hypothalamus facilitate the entry of hormones into the general circulation or the portal plexus. Direct and indirect control form the basis of our modern understanding of hypothalamic control of endocrine activity. Magnocellular Neurons Secrete Oxytocin and Vasopressin Directly From the Posterior Pituitary Large neurons in the paraventricular and supraoptic nuclei, constituting the magnocellular region of the hypothalamus, project to the posterior pituitary gland (neurohypophysis). Some of the magnocellular neuroendocrine neurons in the paraventricular and supraoptic nuclei release the neurohypophyseal hormone oxytocin, while others release vasopressin into the general circulation by way of the posterior pituitary (Figure 49-13). These peptides circulate to target organs of the body that control water balance and milk release. Oxytocin and vasopressin are peptides that contain nine amino acid residues (Table 49-2). Like other peptide hormones, they are cleaved from larger prohormones (see Chapter 15). The prohormones are synthesized in the cell body and cleaved within transport vesicles as they travel down the axons. The peptide neurophysin is a cleavage product of the processing of vasopressin and oxytocin and is released along with the hormone in the posterior pituitary. The neurophysin formed in neurons that release vasopressin differs somewhat from that produced in neurons that release oxytocin. Parvocellular Neurons Secrete Peptides That Regulate Release of Anterior Pituitary Hormones Geoffrey Harris proposed in the 1950s that the anterior pituitary gland is regulated indirectly by the hypothalamus. He demonstrated that the hypophysial portal veins, which carry blood from the hypothalamus to the anterior pituitary gland, convey important signals that control anterior pituitary secretion. In the 1970s the structure of a series of peptide hormones that carry these signals was elucidated. These hormones fall into two classes: releasing hormones and release-inhibiting hormones (Table 49-3). Of all the anterior pituitary hormones, only prolactin is under predominantly inhibitory control.
Hence transection of the pituitary stalk causes insufficiency of adrenal cortex, thyroid, gonadal, and growth hormones, but increased prolactin secretion. Systematic electrical recordings have not been made from neurons that secrete releasing hormones. However, they are believed to fire in bursts because of the pulsatile nature of secretion of the anterior pituitary hormones, which show periodic surges throughout the day. Episodic firing may be particularly effective for causing hormone release and may limit receptor inactivation. The neurons that make releasing hormones are found mainly along the wall of the third ventricle. The gonadotropin-releasing hormone (GnRH) neurons tend to be located most anteriorly, along the basal part of the third ventricle. Neurons that make somatostatin, corticotropin-releasing hormone (CRH), and dopamine are located more dorsally and are found in the medial part of the paraventricular nucleus. Neurons that make growth hormone-releasing hormone (GRH), thyrotropin-releasing hormone (TRH), GnRH, and dopamine are found in the arcuate nucleus, an expansion of the periventricular gray matter that overlies the median eminence, in the floor of the third ventricle (see Figure 49-10). The median eminence contains a plexus of fine capillary loops. These are fenestrated capillaries, and the terminals of the neurons that contain releasing hormones end on these loops. The blood then flows from the median eminence into a secondary (portal) venous system, which carries it to the anterior pituitary gland (see Figure 49-11). An Overall View The three divisions of the autonomic nervous system comprise an integrated motor system that acts in parallel with the somatic motor system and is responsible for homeostasis. Essential to the functioning of the motor outflow are the visceral sensory afferents that are relayed from the nucleus of the solitary tract through a network of central autonomic control nuclei. The hypothalamus integrates somatic, visceral, and behavioral information from all of these sources, thus coordinating autonomic and endocrine outflow with behavioral state. Several features of the autonomic nervous system permit rapid integrated responses to changes in the environment. The activity of effector organs is finely controlled by coordinated and balanced excitatory and inhibitory inputs from tonically active postganglionic neurons. Moreover, the sympathetic system is greatly divergent, permitting the entire body to respond to extreme conditions. In addition to the small-molecule neurotransmitters (ACh and norepinephrine), a wide variety of peptides are thought to be released by autonomic neurons either onto postganglionic cells or their targets. Many of these peptides act to alter the efficacy of cholinergic or adrenergic transmission. The autonomic nervous system uses a rich variety of chemical mediators, several of which may commonly coexist in single autonomic neurons. The release of different combinations of chemical mediators from autonomic neurons may represent a means of “chemical coding” of information transfer in the different branches of the autonomic nervous system, although we are still only beginning to learn how to read the code. As we shall also see in the following two chapters, the autonomic nervous system is a remarkably adaptable system of homeostatic control.
It can function locally, through branches of primary sensory fibers that terminate in autonomic ganglia, or intrinsically, through the enteric nervous system that governs the functions of the digestive tract. Control centers in the brain stem are involved in several autonomic reflexes, while the hypothalamus integrates behavioral and emotional responses arising from the forebrain with ongoing metabolic need to produce highly coordinated autonomic control and behavior.
Summary of the central control of the visceral motor system. The major organizing center for visceral motor functions is the hypothalamus (see Box A). Central Control of the Visceral Motor Functions The visceral motor system is regulated in part by circuitry in the cerebral cortex: Involuntary visceral reactions such as blushing in response to consciously embarrassing stimuli, vasoconstriction and pallor in response to fear, and autonomic responses to sexual situations make this plain. Indeed, autonomic function is intimately related to emotional experience and expression, as described in Chapter 29. In addition, the hippocampus, thalamus, basal ganglia, cerebellum, and reticular formation all influence the visceral motor system. The major center in the control of the visceral motor system, however, is the hypothalamus (Box A). The hypothalamic nuclei relevant to visceral motor function project to the nuclei in the brainstem that organize many visceral reflexes (e.g., respiration, vomiting, urination), to the cranial nerve nuclei that contain parasympathetic preganglionic neurons, and to the sympathetic and parasympathetic preganglionic neurons in the spinal cord. The general organization of this central autonomic control is summarized in Figure 21.6, and some important clinical manifestations of damage to this descending system are illustrated in Box B. Although the hypothalamus is the key structure in the overall organization of visceral function, and in homeostasis generally, the visceral motor system continues to function independently if disease or injury impedes the influence of this controlling center. The major subcortical centers for the ongoing regulation of autonomic function in the absence of hypothalamic control are a series of poorly understood nuclei in the brainstem tegmentum that organize specific visceral functions such as cardiac reflexes, reflexes that control the bladder, and reflexes related to sexual function, as well as other critical autonomic reflexes such as respiration and vomiting. The afferent information from the viscera that drives these brainstem centers is, as noted already, received by neurons in the nucleus of the solitary tract, which relays these signals to the hypothalamus and to the various autonomic centers in the brainstem tegmentum.
Figure 51-1 Homeostatic processes can be analyzed in terms of control systems. A. A control system regulates a controlled variable. When a feedback signal indicates the controlled variable is below or above the set point an error signal is generated. This signal turns on (or facilitates) appropriate behaviors and physiological responses, and turns off (or suppresses) incompatible responses. An error signal also can be generated by external (incentive) stimuli. B. A negative feedback system without a set point controls fat stores. (Based on data of DiGirolamo and Rudman 1968.) SO FAR IN THIS BOOK OUR discussion of the neural control of behavior has focused on how the brain translates external sensory information about events in the environment into coherent perceptions and motor action. In the final two parts of the book we examine how development and learning profoundly shape the brain's ability to do this. These parts of the book are to a large degree concerned with the cognitive aspects of behavior—what a person knows about the outside world. However, behavior also has non cognitive aspects that reflect not what the individual knows but what he or she needs or wants. Here we are concerned with how individuals respond to internal rather than external stimuli. This is the domain of motivation. Motivation is a catch-all term that refers to a variety of neuronal and physiological factors that initiate, sustain, and direct behavior. These internal factors are thought to explain, in part, variation in the behavior of an individual over time. As discussed earlier in this book, the behaviorists who dominated the study of behavior in the first half of this century largely ignored internal factors in their attempts to explain behavior. With the rise of cognitive psychology a few decades ago the behaviorist paradigm has receded and motivation, with all of its complexity, has become the subject of serious scientific study once again. The biological study of motivation has until quite recently been confined to studies of simple physiological or homeostatic instances of motivation called drive states. For this reason our discussion here focuses primarily on drive states, which are the outcome of homeostatic processes related to hunger, thirst, and temperature regulation. Drive states are characterized by tension and discomfort due to a physiological need followed by relief when the need is satisfied. It is important to recognize, however, that drive states are merely one subtype, perhaps the simplest examples, of the motivational states that direct behavior. In general, motivational states may be broadly classified into two types: (1) elementary drive states and more complex physiological regulatory forces brought into play by alterations in internal physical conditions such as hunger, thirst, and temperature, and (2) personal or social aspirations acquired by experience. Freud and contemporary cognitive psychologists have suggested that both forms, but especially personal and social aspirations, represent a complex interplay between physiological and social forces, and between conscious and unconscious mental processes. The neurobiological study of the second type of motivational states is in its infancy. The issues that surround drive states relate to survival. Activities that enhance immediate survival, such as eating or drinking, or those that ensure long-term survival, such as sexual behavior or caring for offspring, are pleasurable and there is a great natural urge to repeat these behaviors. 
Drive states steer behavior toward specific positive goals and away from negative ones. In addition, drive states require organization of individual behaviors into a goal-oriented sequence. Attainment of the goal decreases the intensity of the drive state and thus the motivated behavior ceases. A hungry cat is ever alert for the occasional mouse, ready to pounce when it comes into sight. Once satiated, the cat will not pounce again for some time. Finally, drive states have general effects; they increase our general level of arousal and thereby enhance our ability to act. Drive states therefore serve three functions: they direct behavior toward or away from a specific goal; they organize individual behaviors into a coherent, goal-oriented sequence; and they increase general alertness, energizing the individual to act. The drive states that neurobiologists have studied most effectively are those related to temperature regulation, hunger, and thirst. Until recently, these drive states were inferred from behavior alone. But as we learn more about the physiological correlates of drive states, we rely less on traditional psychological concepts of motivation and more on concepts derived from servo-control models applied to living organisms. Admittedly, such an approach reduces drive states to a complex homeostatic reflex that is responsive to multiple stimuli. Some of these stimuli are internal in response to tissue deficits; others are external (eg, the sight or smell of food) and are regulated by excitatory and inhibitory systems. Since regulation of internal states involves the autonomic nervous system and the endocrine system, we shall consider the relationship of motivational states to autonomic and neuroendocrine responses. We first examine how servo-control models have made the study of drive states amenable to biological experimentation. We then examine the regulation of these simple motivational states by factors other than tissue deficits, such as circadian rhythms, ecological constraints, and pleasure. Finally, we discuss the neural systems of the brain concerned with reward or reinforcement, an important component of motivation. These neural systems have been well delineated. Most addictive drugs, such as nicotine, alcohol, opiates, and cocaine, produce their actions by acting on or co-opting the same neural pathways that mediate positively motivated behaviors essential for survival. Drive States Are Simple Cases of Motivational States That Can Be Modeled as Servo-Control Systems Drive states can be understood by analogy with control systems, or servomechanisms , that regulate machines. While specific physiological servomechanisms have not yet been demonstrated directly, the servomechanism model permits us to organize our thinking about the complex operation of homeostasis, and makes it possible to define experimentally the physiological control of homeostasis. This approach has been most successfully applied to temperature regulation. Because body temperature can be readily measured, the mechanism regulating temperature has been studied by examining the relationship between the internal stimulus (temperature) and various external stimuli. This control system approach has been less successful when applied to more complex regulatory behaviors, such as feeding, drinking, and sex, in which the relevant internal stimuli are difficult to identify and measure. 
Nevertheless, at present, the control systems model is probably the best approach to analyzing even these more complex internal states. Servomechanisms maintain a controlled variable within a certain range. One way of regulating the controlled variable is to measure it by means of a feedback detector and compare the measured value with a desired value, or set point. The comparison is accomplished by an error detector, or integrator, that generates an error signal when the value of the controlled variable does not match the set point. The error signal then drives controlling elements that adjust the controlled system in the desired direction. The error signal is controlled not only by internal feedback stimuli but also by external stimuli. All examples of physiological control seem to involve both inhibitory and excitatory effects, which function together to adjust the control system (Figure 51-1). The control system used to heat a home illustrates these principles. The furnace system is the controlling element. The room temperature is the controlled variable. The home thermostat is the error detector. The setting on the thermostat is the set point. Finally, the output of the thermostat is the error signal that turns the control element on or off.
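To make the home-heating analogy concrete, the sketch below casts it as a minimal control loop in Python. It is illustrative only; the temperatures, deadband, heating rate, and function names are assumed values invented for the example, not figures from the text.

```python
# Minimal sketch of the servo-control loop described above (home heating).
# All names and numbers are illustrative assumptions, not from the source.

def thermostat_step(room_temp, set_point=20.0, deadband=0.5):
    """Error detector: compare the controlled variable with the set point
    and return a command that switches the controlling element (furnace)."""
    error = set_point - room_temp            # error signal
    if error > deadband:                     # too cold -> turn furnace on
        return "furnace_on"
    elif error < -deadband:                  # too warm -> turn furnace off
        return "furnace_off"
    return "no_change"                       # within tolerance

# Simple simulation: the furnace (controlling element) heats the room,
# while passive heat loss to the environment acts as a disturbance.
room_temp, furnace_on = 15.0, False
for minute in range(60):
    command = thermostat_step(room_temp)
    if command == "furnace_on":
        furnace_on = True
    elif command == "furnace_off":
        furnace_on = False
    room_temp += (0.3 if furnace_on else 0.0) - 0.1   # heating vs. passive loss

print(round(room_temp, 1))   # settles near the 20-degree set point
```

The same loop structure (feedback detector, error detector, set point, controlling element) is what the chapter reuses for temperature regulation, feeding, and drinking.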
Figure 51-2 This sagittal section of the human brain illustrates the hypothalamic regions concerned with heat conservation and heat dissipation. Temperature Regulation Involves Integration of Autonomic, Endocrine, and Skeletomotor Responses Temperature regulation nicely fits the model of a control system. Normal body temperature is the set point in the system of temperature regulation. The integrator and many controlling elements for temperature regulation appear to be located in the hypothalamus. Because temperature regulation requires integrated autonomic, endocrine, and skeletomotor responses, the anatomical connections of the hypothalamus make this structure well suited for this task. The feedback detectors collect information about body temperature from two main sources: peripheral temperature receptors located throughout the body (in the skin, spinal cord, and viscera) and central receptors located mainly in the hypothalamus. The detectors of temperature, both low and high, are located only in the anterior hypothalamus. The hypothalamic receptors are probably neurons whose firing rate is highly dependent on local temperature, which in turn is importantly affected by the temperature of the blood. Although the anterior hypothalamic area is involved in temperature sensing, control of body temperature appears to be regulated by separate regions of the hypothalamus. The anterior hypothalamus (which includes the preoptic area) mediates decreases and the posterior hypothalamus mediates increases in body temperature. Thus, electrical stimulation of the anterior hypothalamus causes dilation of blood vessels in the skin, panting, and a suppression of shivering, responses that decrease body temperature. In contrast, electrical stimulation of the posterior hypothalamus produces an opposing set of responses that generate or conserve heat (Figure 51-2). As with fear responses, which are evoked by electrical stimulation of the hypothalamus (Chapter 50), temperature regulatory responses evoked by electrical stimulation also include appropriate nonvoluntary responses involving the skeletomotor system. For example, stimulation of the anterior hypothalamus (preoptic area) produces panting, while stimulation of the posterior hypothalamus produces shivering. Ablation experiments corroborate the critical role of the hypothalamus in regulating temperature. Lesions of the anterior hypothalamus cause chronic hyperthermia and eliminate the major responses that normally dissipate excess heat. Lesions in the posterior hypothalamus have relatively little effect if the animal is kept at room temperature (approximately 22°C). If the animal is exposed to cold, however, it quickly becomes hypothermic because the homeostatic mechanisms fail to generate and conserve heat.
Figure 51-3 Peripheral and central information on temperature is summated in the hypothalamus. Changes in either room temperature or local hypothalamic temperature alter the response rate of rats trained to press a button to receive a brief burst of cool air. When the room temperature is increased, thus presumably increasing skin temperature, the response rate increases roughly in proportion to the temperature increase (points a and b). If the temperature of the hypothalamus is also increased (by perfusing warm water through a hollow probe), the response rate reflects a summation of information on skin temperature and hypothalamic temperature (points c and d). If the skin temperature remains high enough but the hypothalamus is cooled, the response rate decreases or is suppressed altogether (point e). (From data of Corbit 1973 and Satinoff 1964.) The hypothalamus also controls endocrine responses to temperature challenges. Thus, long-term exposure to cold can enhance the release of thyroxine, which increases body heat by increasing tissue metabolism. In addition to driving appropriate autonomic, endocrine, and nonvoluntary skeletal responses, the error signal of the temperature control system can also drive voluntary behaviors that minimize the error signal. For example, a rat can be taught to press a button to receive puffs of cool air in a hot environment. After training, when the chamber is at normal room temperature, the rat will not press the button for cool air. If the anterior hypothalamus is locally warmed by perfusing it with warm water through a hollow probe, the rat will run to the cool-air button and press it repeatedly. Hypothalamic integration of peripheral and central inputs can be demonstrated by heating the environment (and thereby the skin of the animal) and concurrently cooling or heating the hypothalamus. When both the environment and hypothalamus are heated, the rat presses the cool-air button faster than when either one is heated alone. However, even in a hot environment the pressing of the button for cool air can be suppressed completely by cooling the hypothalamus (Figure 51-3). Recordings from neurons in the preoptic area and the anterior hypothalamus support the idea that the hypothalamus integrates peripheral and central information relevant to temperature regulation. Neurons in this region, called warm-sensitive neurons , increase their firing when the local hypothalamic tissue is warmed. Other neurons, called cold-sensitive neurons , respond to local cooling. The warm-sensitive neurons, in addition to responding to local warming of the brain, are usually excited by warming the skin or spinal cord and are inhibited by cooling the skin or spinal cord. The cold-sensitive neurons exhibit the opposite behavior. Thus, these neurons could integrate the thermal information from peripheral receptors with that from neurons within the brain. Furthermore, many temperature-sensitive neurons in the hypothalamus also respond to nonthermal stimuli, such as osmolarity, glucose, sex steroids, and blood pressure. In humans the set point of the temperature control system is approximately 98.6°F (37°C), although it normally varies somewhat diurnally, decreasing to a minimum during sleep. The set point can be altered by pathological states, for example by the action of pyrogens, which induce fever. 
Systemic pyrogens, such as the macrophage product interleukin-1, enter the brain at regions in which the blood-brain barrier is incomplete, such as the preoptic area, and act there to increase the set point. The body temperature then rises until the new set point is reached. When this occurs a part of the brain known as the antipyretic area is activated and limits the magnitude of the fever. The antipyretic area includes the septal nuclei, which are located anterior to the hypothalamic preoptic areas, near the anterior commissure. The antipyretic area is innervated by neurons that use the peptide vasopressin as transmitter. Injection of vasopressin into the septal area counteracts fever in a manner similar to that of antipyretic drugs, such as aspirin and indomethacin, suggesting that some of the effects of these drugs are mediated by the central release of vasopressin. The antipyretic action of aspirin and indomethacin is blocked by injection into the septal nuclei of a vasopressin antagonist. In fact, convulsions brought on by high fevers may in part be evoked by vasopressin released in the brain as part of the antipyretic response. The control of body temperature is a clear example of the integrative action of the hypothalamus in regulating autonomic, endocrine, and drive-state control. It illustrates how the hypothalamus operates both directly on the internal environment and indirectly, by providing information about the internal environment to higher neural systems.
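The button-pressing experiment summarized in Figure 51-3 can be rendered as a toy calculation: the behavioral drive for cooling behaves roughly like a weighted sum of skin and hypothalamic deviations from a reference temperature, with hypothalamic cooling able to suppress responding entirely. The weights, reference value, and function name below are illustrative assumptions, not parameters reported in the text.

```python
# Toy rendering of the peripheral/central summation experiment (Figure 51-3).
# Weights and temperatures are arbitrary illustrative assumptions.

def cool_seeking_drive(skin_temp, hypothalamic_temp,
                       ref=37.0, w_skin=1.0, w_hypo=2.0):
    """Return a non-negative 'response rate' for pressing the cool-air button.
    A net negative drive (e.g., hypothalamic cooling) suppresses responding."""
    drive = w_skin * (skin_temp - ref) + w_hypo * (hypothalamic_temp - ref)
    return max(drive, 0.0)

print(cool_seeking_drive(40.0, 37.0))  # warm room only        -> moderate responding
print(cool_seeking_drive(40.0, 38.0))  # room + hypothalamus   -> summation, higher rate
print(cool_seeking_drive(40.0, 35.0))  # hypothalamus cooled   -> responding suppressed (0.0)
```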
Figure 51-4 Animals tend to adjust their food intake to achieve a normal body weight. The plots show a schematized growth curve for a group of rats. At arrow 1 one-third of the animals were maintained on their normal diet (curve b), one-third were force-fed (curve a), and one-third were placed on a restricted diet (curve c). At arrow 2 all rats were placed on a normal (ad libitum) diet. The force-fed animals lost weight and the starved animals gained weight until the mean weight of the two groups approached that of the normal growth curve (b). (Adapted from Keesey et al. 1976.) Feeding Behavior Is Regulated by a Variety of Mechanisms Like temperature regulation, feeding behavior may also be analyzed as a control system, although at every level of analysis the understanding of feeding is less complete. One reason for thinking that feeding behavior is subject to a control system is that body weight seems to be regulated by a set point. Humans often maintain the same body weight for many years. Since even a small increase or decrease of daily caloric intake could eventually result in a substantial weight change, the body must be governed by feedback signals that control nutrient intake and metabolism. Control of nutrient intake is seen most clearly in animals in which body weight is altered from the set point by food deprivation or force-feeding. In both instances the animal will adjust its subsequent food intake (either up or down) until it regains a weight appropriate for its age (Figure 51-4). Animals are thus said to defend their body weight against perturbations. Whereas body temperature is remarkably similar from one individual to another, body weight varies greatly. Furthermore, the apparent set point of an individual can vary with stress, palatability of the food, exercise, and many other environmental and genetic factors. One possible explanation for this difference between regulation of temperature and body weight is that the set point for body weight can itself be changed by a variety of factors. Another possibility is that body weight is regulated by a control system that has no formal set-point mechanism but which nevertheless functions as if there were a set point (Figure 51-1B).
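The idea of regulation "as if" there were a set point, without any explicit set-point mechanism (Figure 51-1B), can be illustrated with a toy energy-balance loop: if daily expenditure rises with fat mass while intake stays constant, fat mass drifts to a stable equilibrium on its own. All numbers below are arbitrary illustrative assumptions, not physiological measurements.

```python
# Sketch of a "settling point" with no explicit set point: nothing in the loop
# stores a target weight, yet fat mass converges to a stable value.
# Coefficients are illustrative assumptions only.

fat_mass = 10.0          # kg, arbitrary starting value
intake_kcal = 2500.0     # constant daily energy intake
kcal_per_kg_fat = 7700.0

for day in range(2000):
    expenditure = 1500.0 + 80.0 * fat_mass             # expenditure rises with fat mass
    fat_mass += (intake_kcal - expenditure) / kcal_per_kg_fat
    fat_mass = max(fat_mass, 0.0)

print(round(fat_mass, 1))   # converges near (2500 - 1500) / 80 = 12.5 kg
```

In such a scheme a change in intake, activity, or the expenditure coefficient shifts the equilibrium weight, which is one way to reconcile "defended" body weight with its sensitivity to stress, palatability, and exercise.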
Figure 51-5 The set point for body weight appears to be altered by lesions of the lateral hypothalamus. Three groups of rats were used in this experiment. The control group was maintained on a normal diet. On day 0 the animals of the other two groups received small lesions in the lateral hypothalamus. One of these groups had been maintained on a normal diet; the other group had been starved before the lesion and consequently had lost body weight. After the lesion all animals were given free access to food. The lesioned animals that had not been prestarved initially decreased their food intake and lost body weight, while those that were prestarved rapidly gained weight until they reached the level of the other lesioned animals. (Adapted from Keesey et al. 1976.) Dual Controlling Elements in the Hypothalamus Contribute to the Control of Food Intake Food intake is thought to be under the control of two regions in the hypothalamus: a ventromedial region and a lateral region. In 1942 Albert W. Hetherington and Stephen Walter Ranson discovered that destruction of the ventromedial nuclei of the hypothalamus produces overeating ( hyperphagia ) and severe obesity. In contrast, bilateral lesions of the lateral hypothalamus produce severe neglect of eating ( aphagia ) so that the animal dies unless force-fed. Electrical stimulation produces the opposite effects of lesions. Whereas stimulation of the ventromedial region suppresses feeding, stimulation of the lateral hypothalamus elicits feeding. These observations were originally interpreted to mean that the lateral hypothalamus contains a feeding center and the medial hypothalamus a satiety center. This conclusion was reinforced by studies showing that chemical stimulation of these parts of the hypothalamus can also alter feeding behavior. This conceptually attractive conclusion proved faulty, however, as it became clear that the brain is not organized into discrete centers that by themselves control specific functions. Rather, as with perception and action, the neural circuits mediating homeostatic functions such as feeding are distributed among several structures in the brain. The effects of lateral or medial hypothalamic lesions on feeding are thought to be due in part to dysfunctions that result from damage to other structures. Three factors are particularly important: (1) alteration of sensory information, (2) alteration of set point, and (3) interference with behavioral arousal because of damage to dopaminergic fibers of passage. First, lateral hypothalamic lesions sometimes result in sensory and motor deficits as a result of the destruction of fibers of the trigeminal system and the dopaminergic fibers of the medial forebrain bundle. The sensory loss can contribute to the loss of feeding as well as to the so-called sensory neglect seen after lateral hypothalamic lesions. Thus, a unilateral lesion of the lateral hypothalamus results in loss of orienting responses to visual, olfactory, and somatic sensory stimuli presented contralateral to the lesion. Similarly, feeding responses to food presented contralaterally are also diminished. It is not clear whether this sensory neglect is due to disruption of sensory systems or to interference with motor systems directing responses contralateral to the lesion. Altered sensory responses are also seen in animals with lesions in the region of the ventromedial nucleus. These animals have heightened responses to the aversive or attractive properties of food and other stimuli. 
On a normal diet they eat more than do animals without lesions, but when the food is made unpalatable they curtail their intake more sharply than unlesioned animals do. Since this reduction in eating is similar to that seen in normal animals that are made obese by force-feeding, the enhanced sensory responsiveness to food of animals with ventromedial hypothalamic lesions is, at least in part, a consequence rather than a cause of the obesity. This interpretation is supported by Stanley Schachter's finding that some obese humans with no evidence of damage to the region of the ventromedial hypothalamus are also unusually responsive to the taste of food. Second, hypothalamic lesions may alter the set point for regulating body weight. Rats that were starved to reduce their weight before a small lesion was made in the lateral hypothalamus ate more than normal amounts and gained weight when they resumed eating, whereas the controls (nonstarved) lost weight (Figure 51-5). The starvation apparently brings the weight of these animals below the set point determined by the lateral lesion. Conversely, animals that were force-fed before ventromedial hypothalamic lesions did not overeat, which they would have done if they had not been previously force-fed. Third, lesions of the lateral hypothalamus can damage dopaminergic fibers that course from the substantia nigra to the striatum via the medial forebrain bundle as well as those that emanate from the ventral tegmental area (the mesolimbic projections) and innervate structures associated with the limbic system (the prefrontal cortex, amygdala, and nucleus accumbens; see Chapter 45). When nigrostriatal dopaminergic fibers are experimentally sectioned bilaterally below or above the level of the hypothalamus or are destroyed by the specific toxin 6-hydroxydopamine, animals exhibit a hypoarousal state and life-threatening aphagia similar to that observed after lateral hypothalamic lesions. The loss of dopamine does not account entirely for the lateral hypothalamic syndrome. The physiological profile and recovery of eating patterns are different after lesions of the lateral hypothalamus and depletion of dopamine, demonstrating that both the dopamine system and hypothalamic substrates contribute to the control of feeding. Lesioning of dopaminergic neurons alone or loss of the neurons of the lateral hypothalamus alone (using the excitotoxins kainic or ibotenic acid) produces less severe behavioral deficits than those seen after the classical lateral hypothalamic lesions. The combined loss of lateral hypothalamic neurons and dopaminergic fibers results in the classical syndrome by impairing both the substrate for monitoring physiological feedback and the neural systems that generate appropriate behavior. In fact, the dopamine agonist apomorphine restores eating and drinking responses to physiological challenges in rats after depletion of dopamine, but not in rats with lateral hypothalamic lesions. Below we shall examine the role of dopamine in food reward and reinforcement more generally when we consider studies of intracranial self-stimulation, the effect of dopamine-blocking drugs on learned behavior to obtain food, and the reinforcing effects of drugs of addiction. Some of the strongest evidence implicating the hypothalamus in the control of feeding comes from studies showing that a wide spectrum of transmitters produces profound alterations of feeding behavior when injected into the lateral hypothalamus and the area of the paraventricular nuclei.
These studies also illustrate that different chemical systems are involved in the control of different classes of nutrients. Application of norepinephrine to the paraventricular nucleus greatly stimulates feeding; but, if given a choice, animals will eat more carbohydrate than protein or fat. In contrast, application of the peptide galanin selectively increases ingestion of fat whereas opiates enhance consumption of protein.
Figure 51-6 Hypothetical model of the mechanisms that regulate energy balance in mammals. (Adapted from Hervey 1969.) Food Intake Is Controlled by Short-Term and Long-Term Cues What cues does an organism use to regulate feeding? Two main cues for hunger have been identified: short-term cues that regulate the size of individual meals and long-term cues that regulate overall body weight (Figure 51-6). Short-term cues consist primarily of chemical properties of the food that act in the mouth to stimulate feeding behavior and in the gastrointestinal system and liver to inhibit feeding. The short-term satiety signals impinge on the hypothalamus through visceral afferent pathways, communicating primarily with the lateral hypothalamic regions. The effectiveness of short-term cues is modulated by long-term signals that reflect body weight. As we shall discuss in greater detail below, one such important signal is the peptide leptin, which is secreted from fat storage cells (adipocytes). By means of this signal, body weight is kept reasonably constant over a broad range of activity and diet. Daily energy expenditure is remarkably consistent when expressed as a function of body size (Figure 51-7A). Body weight is also maintained at a set level by self-regulating feedback mechanisms that adjust metabolic rate when the organism drifts away from its characteristic set point (Figure 51-7B). An animal maintained on a reduced-calorie diet eventually needs less food to maintain its weight because its metabolic rate decreases. Several humoral signals are thought to be important for short-term regulation of feeding behavior. The hypothalamus has glucoreceptors that respond to blood glucose levels. This system probably stimulates feeding behavior (in contrast to autonomic responses to blood glucose) primarily during emergency states in which blood glucose falls drastically. In addition, gut hormones released during a meal may contribute to satiety. Considerable evidence for such a humoral short-term signal comes from studies of the peptide cholecystokinin. Cholecystokinin is released from the duodenum and upper intestine when amino acids and fatty acids are present in the tract. Cholecystokinin released in the gut acts on visceral afferents that affect brain stem and hypothalamic areas, which are themselves sensitive to cholecystokinin. Injection into the ventricles or specifically into the paraventricular nucleus of small quantities of cholecystokinin and several other peptides (including neurotensin, calcitonin, and glucagon) also inhibits feeding. Therefore, cholecystokinin released as a neuropeptide in the brain may also inhibit feeding, independently of its release from the gut. Cholecystokinin is an example of a hormone or neuromodulator that has independent central and peripheral actions that are functionally related. Other examples include luteinizing hormone-releasing hormone (sexual behavior), adrenocorticotropin (stress and avoidance behavior), and angiotensin (responses to hemorrhage). The use of the same chemical signal for related central and peripheral functions is widespread in both vertebrates and invertebrates. Certain invertebrates, such as the sea slug Aplysia, have specific serotonergic neurons that both enhance feeding responses (by acting directly on muscles involved in consuming food) and promote arousal (by enhancing the excitability of central motor neurons that innervate these same muscles).
The brain integrates multiple peripheral and neural signals to control the regulation of energy homeostasis, maintaining a balance between food intake and energy expenditure. Peripheral factors indicative of long-term energy status are produced by adipose tissue (leptin, adiponectin) and the pancreas (insulin), whereas the acute hunger signal ghrelin (produced in the stomach), and satiety signals such as the gut hormones peptide YY(3–36) (PYY(3–36)), pancreatic polypeptide (PP), amylin and oxyntomodulin (OXM) indicate near-term energy status. The incretin hormones glucagon-like peptide-1 (GLP-1), glucose-dependent insulinotropic peptide (GIP), and potentially OXM improve the response of the endocrine pancreas to absorbed nutrients. Further feedback is provided by nutrient receptors in the upper small bowel, and neural signals indicating distention of the stomach's stretch receptors, which are primarily conveyed by the vagal afferent and sympathetic nerves to the nucleus of the solitary tract (NTS) in the brain stem. The arcuate nucleus (ARC) of the hypothalamus, which is located between the third ventricle and the median eminence, integrates these energy homeostatic feedback mechanisms. It accesses the short- and long-term hormonal and nutrient signals from the periphery via semi-permeable capillaries in the underlying median eminence, and receives neuronal feedback from the NTS. These collated signals act on two distinct subsets of neurons that control food intake in the ARC, which act as an accelerator and a brake respectively. The first subset co-expresses the orexigenic (appetite stimulating) agouti-related peptide (AgRP) and neuropeptide Y (NPY) neurotransmitters, acting as an accelerator in the brain to stimulate feeding. The other neuronal population releases the anorexigenic cocaine- and amphetamine -regulated transcript (CART) and pro-opiomelanocortin (POMC) neurotransmitters, both of which inhibit feeding. Both neuronal populations innervate the paraventricular nucleus (PVN), which, in turn, sends signals to other areas of the brain. These include hypothalamic areas such as the ventromedial nucleus, dorsomedial nucleus and the lateral hypothalamic area, which modulate this control system. Neural brain circuits integrate information from the NTS and multiple hypothalamic nuclei to regulate overall body homeostasis.
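A rough way to picture the arcuate "accelerator and brake" arrangement described above is as a push-pull sum: orexigenic AgRP/NPY activity rises with hunger signals such as ghrelin and falls with leptin, while anorexigenic POMC/CART activity does the opposite, and the resulting balance is passed on to the PVN. The sketch below is only a caricature under those assumptions; the scaling, linear combination, and variable names are invented for illustration and do not come from the text.

```python
# Toy push-pull sketch of the arcuate accelerator/brake arrangement.
# Signal scaling and linearity are illustrative assumptions; the real
# circuitry is nonlinear and far more elaborate.

def arcuate_feeding_drive(leptin, insulin, ghrelin, pyy):
    """All inputs normalized to 0..1. Returns a signed drive passed to the PVN:
    positive favors feeding, negative favors satiety."""
    orexigenic   = ghrelin + (1.0 - leptin)           # AgRP/NPY: driven by hunger signals
    anorexigenic = pyy + leptin + 0.5 * insulin       # POMC/CART: driven by satiety signals
    return orexigenic - anorexigenic

print(arcuate_feeding_drive(leptin=0.2, insulin=0.3, ghrelin=0.9, pyy=0.1))  # fasted state: positive
print(arcuate_feeding_drive(leptin=0.8, insulin=0.7, ghrelin=0.1, pyy=0.8))  # fed state: negative
```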
Drinking Is Regulated by Tissue Osmolality and Vascular Volume The hypothalamus regulates water balance by its control of hormones, such as antidiuretic hormone. The hypothalamus also regulates aspects of drinking behavior. Unlike feeding, where intake is critical, the amount of water taken in is relatively unimportant as long as the minimum requirement is met. Within broad limits, excess intake is readily eliminated by the kidney. Nevertheless, a set point, or ideal amount of water intake, appears to exist, since too much or too little drinking represents inefficient behavior. If an animal takes in too little liquid at one time, it must soon interrupt other activities and resume its liquid intake to avoid underhydration. Likewise, drinking a large amount at one time results in unneeded time spent drinking as well as urinating to eliminate the excess fluid. Drinking is controlled by two main physiological variables: tissue osmolality and vascular (fluid) volume. This has led Alan Epstein to propose that the principal inputs controlling thirst arise when both physiological variables are depleted (the double depletion hypothesis). Signals related to the variables reach mechanisms in the brain that control drinking either through afferent fibers from peripheral receptors or by humoral actions on receptors in the brain itself. These inputs control the physiological mechanisms of water conservation in such a way that fluid intake is coordinated with the control of fluid loss so as to maintain water balance. Thus, the hypothalamus integrates hormonal and osmotic cues sensing cell volume and the state of the extracellular space. The volume of water in the intracellular compartment is normally approximately double that of the extracellular space. This delicate balance is determined by the osmotic equilibrium between the compartments, which in turn is determined by extracellular sodium. The control of sodium is therefore a key element in the homeostatic mechanism regulating thirst. The two drives, thirst and salt appetite, appear to be handled by separate but interrelated mechanisms. Drinking also can be controlled by dryness of the tongue. Hyperthermia, detected at least in part by thermosensitive neurons in the anterior hypothalamus, may also contribute to thirst. The feedback signals for water regulation derive from many sources. Osmotic stimuli can act directly on osmoreceptor cells (or receptors that sense the level of Na+), probably neurons, in the hypothalamus. The feedback signals for vascular volume are located in the low-pressure side of the circulation—the right atrium and adjacent walls of the great veins—and large volume changes may also affect arterial baroreceptors in the aortic arch and carotid sinus. Signals from these sources can initiate drinking. Low blood volume, as well as other conditions that decrease body sodium, also results in increased renin secretion from the kidney. Renin, a proteolytic enzyme, cleaves plasma angiotensinogen into angiotensin I, which is then hydrolyzed to the highly active octapeptide angiotensin II. Angiotensin II elicits drinking as well as three other physiological actions that compensate for water loss: vasoconstriction, increased release of aldosterone, and increased release of vasopressin. For blood-borne angiotensin to affect behavior it must pass through the blood-brain barrier at specialized regions of the brain.
The subfornical organ is a small neuronal structure that extends into the third ventricle and has fenestrated capillaries that readily permit the passage of blood-borne molecules (see Appendix B on the blood-brain barrier). The subfornical organ is sensitive to low concentrations of angiotensin II in the blood, and this information is conveyed to the hypothalamus by a neural pathway between the subfornical organ and the preoptic area. Neurons in this pathway in turn use an angiotensin-like molecule as a transmitter. Thus the same molecule regulates drinking by functioning as a hormone and a neurotransmitter. The preoptic area also receives information from baroreceptors throughout the body. This information is conveyed to various brain structures that initiate a search for water and drinking. Information from baroreceptors is also sent to the paraventricular nucleus, which mediates the release of vasopressin, which in turn regulates water retention. The signals that terminate drinking are less well understood than those that initiate drinking. It is clear, however, that the termination signal is not always merely the absence of the initiating signal. This principle holds for many examples of physiological and behavioral regulation, including feeding. Thus, for example, drinking initiated by low vascular fluid volume (eg, after severe hemorrhage) terminates well before the deficit is rectified. This is highly adaptive since it prevents water intoxication from excessive dilution of extracellular fluids and seems to prevent overhydration that could result from absorption of fluid in the alimentary system long after the cessation of drinking.
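The two inputs discussed above can be summarized as a simple rule: drinking is initiated when either cellular dehydration (raised osmolality sensed by hypothalamic osmoreceptors) or hypovolemia (reduced vascular volume, signaled via baroreceptors and angiotensin II) crosses a threshold. The thresholds and function in the sketch below are illustrative assumptions rather than measured physiological values.

```python
# Hedged sketch of thirst initiation by the two variables described above.
# Threshold values are arbitrary illustrative numbers, not measured constants.

def drinking_initiated(plasma_osmolality, blood_volume_fraction,
                       osmotic_threshold=295.0, volume_threshold=0.90):
    """plasma_osmolality in mOsm/kg; blood_volume_fraction as a fraction of normal.
    Either signal alone is sufficient to initiate drinking in this toy rule."""
    osmotic_thirst    = plasma_osmolality > osmotic_threshold       # cellular dehydration
    volumetric_thirst = blood_volume_fraction < volume_threshold    # hypovolemia
    return osmotic_thirst or volumetric_thirst

print(drinking_initiated(300.0, 1.00))  # osmotic depletion alone  -> True
print(drinking_initiated(285.0, 0.80))  # volume depletion alone   -> True
print(drinking_initiated(285.0, 1.00))  # neither depleted         -> False
```

Note that, as the text emphasizes, the signal that terminates drinking is not simply the disappearance of the initiating signal; drinking after hemorrhage stops well before blood volume is restored.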
Dual Controlling Elements in the Hypothalamus Contribute to the Control of Food Intake
Lateral hypothalamus: feeding center
Medial hypothalamus: satiety center