score | text | url
---|---|---|
4.03125 | AUSTIN, Texas - A dynamic way to alter the shape and size of microscopic three-dimensional structures built out of proteins has been developed by biological chemist Jason Shear and his former graduate student Bryan Kaehr at The University of Texas at Austin.
Shear and Kaehr fabricated a variety of detailed three-dimensional microstructures, known as hydrogels, and have shown that they can expand and bend the hydrogels by altering the chemistry of the environment in which they were built.
Hydrogels have been in development over the last couple of decades and are being used as parts in biology-based microdevices and medical diagnostic technologies, for drug delivery, and in tissue engineering. But the future utility of these "smart materials" relies on finding better ways to control their conformation.
Shear and Kaehr's work lays the foundation for more precise control of hydrogels. Among many applications, Shear says they will have the ability to better grow bacteria with the aim of understanding disease.
"This provides a significant new way of interacting with cultured cells," says Shear, an associate professor of chemistry and biochemistry. "The microstructures can be used to capture individual cells, and once isolated, clonal colonies of those cells can be grown and studied."
Their research appears in a paper published July 1 in Proceedings of the National Academy of Sciences.
As a proof of concept, the researchers built a rectangular house-like structure with a roof in which they trapped and then released E. coli bacteria. The bacteria blundered into the house through a funnel-shaped door, where they found themselves trapped in a ring-shaped chamber. The funnel made it difficult to get out of the house.
Once inside, "they moved around the space like they were running around a racetrack," says Shear.
When the researchers increased the pH of the cell culture, the chamber changed volume, causing the house to pop off its foundation and release the bacteria.
By increasing or decreasing the volume of microstructures dynamically, Shear hopes to be able to better understand a phenomenon known as quorum sensing, where bacteria coordinate their gene expression according to the density of their population. Quorum sensing is important in the pathology of some disease-causing bacteria, such as Pseudomonas aeruginosa.
The hydrogels created by Shear and Kaehr are made of protein molecules that have been chemically bound together using a focused laser beam, a process known as photofabrication.
The laser causes amino acid side chains to link en masse and this builds a solid protein matrix. The protein scaffold is built layer by layer, much like a raster scanner.
"It's a little bit like a three-dimensional Etch-a-Sketch," says Shear.
Other high resolution structures the researchers developed include tethers that connect microspheres to surfaces, flower- and fern-like structures, and micro-hands that are less than a quarter the diameter of a hair, pinky to thumb.
Experimenting with various chemical changes, Shear and Kaehr show that changing pH caused hydrogel bands to bow out at specific points along their length and caused other shapes, like the micro-hands and bacterial chamber, to expand.
Altering ion concentrations caused the fern-like structures to coil and unfurl like fiddleheads emerging from the ground in spring. Adding ions caused contraction of the tether holding the microsphere.
Structures such as these could be used to create better micro- and nano-valves, motors and optics.
Shear says a great advantage of the hydrogels is that they are well suited for controlling and growing cells dynamically and in the environments in which they live.
Waste from the cells can move out of the structures and nutrients and other chemicals, including those added by the researchers to manipulate the cells' biology, can move in. Other microfabrication materials, such as glass, do not have such permeability.
|Contact: Jason Shear|
University of Texas at Austin | http://www.bio-medicine.org/biology-news-1/Smart-materials-get-smarter-with-ability-to-better-control-shape-and-size-3886-1/ |
4.34375 | - Census Materials for Schools - Lesson plans for various grades using census data.
- Get the Math - A series of video segments and tools support student learning of algebra concepts related to music, fashion, and videogames.
- Imagine the Universe Lesson Plans - This site offers math/science based lesson plans.
- Math and Reading Help - Articles on creating good lesson plans.
- Microgravity Lesson Plans - These lesson plans use math concepts to explore microgravity.
- Money Math: Lessons for Life - Lesson plans for middle school math classes, using real-life examples from personal finance.
- NOAA Lesson Plan Library - These lessons are correlated to National Science Education Standards and the Ocean Literacy Essential Principles and Fundamental Concepts. The lessons are designed to supplement existing curricula at the middle and high school levels.
- Teach K-12 Engineering - Find a variety of tools to boost your students' math and science skills and enliven the classroom with engineering projects.
- Youth Education: Hitting the Fundamentals - Math Programs - Empower your students in a fun, engaging way to think of mathematics. | https://kids.usa.gov/teachers/lesson-plans/math/index.shtml |
4.25 | Salt in the ocean comes from rocks on land.
The rain that falls on the land contains some dissolved carbon dioxide from the surrounding air. This causes the rainwater to be slightly acidic due to carbonic acid (which forms from carbon dioxide and water).
As the rain erodes the rock, acids in the rainwater break down the rock. This process creates ions, or electrically charged atomic particles. These ions are carried away in runoff to streams and rivers and, ultimately, to the ocean. Many of the dissolved ions are used by organisms in the ocean and are removed from the water. Others are not used up and are left for long periods of time where their concentrations increase over time.
Two of the most prevalent ions in seawater are chloride and sodium. Together, they make up over 90 percent of all dissolved ions in the ocean. Sodium and chloride are 'salty.'
The concentration of salt in seawater (salinity) is about 35 parts per thousand. Stated in another way, about 3.5 percent of the weight of seawater comes from the dissolved salts; in a cubic mile of seawater, the weight of the salt (in the form of sodium chloride) would be about 120 million tons.
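The arithmetic behind these figures is easy to check. The sketch below is a rough back-of-the-envelope verification; the seawater density and the sodium chloride share of total salt are assumed round values, not figures from the article:

```python
MILE_M = 1609.344        # meters per mile
RHO_SEAWATER = 1025.0    # kg/m^3, assumed typical seawater density
SALINITY = 0.035         # 35 parts per thousand by weight
NACL_FRACTION = 0.85     # assumed NaCl share of total dissolved salts
SHORT_TON_KG = 907.185   # kilograms per US short ton

volume_m3 = MILE_M ** 3                        # one cubic mile in m^3
salt_kg = volume_m3 * RHO_SEAWATER * SALINITY  # all dissolved salts
nacl_tons = salt_kg * NACL_FRACTION / SHORT_TON_KG

print(f"total dissolved salt: {salt_kg / SHORT_TON_KG / 1e6:.0f} million tons")
print(f"as sodium chloride:   {nacl_tons / 1e6:.0f} million tons")
# ~165 million tons of total salt, ~140 million tons as NaCl --
# the same order of magnitude as the article's 120-million-ton figure.
```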
By some estimates, if the salt in the ocean could be removed and spread evenly over the Earth’s land surface it would form a layer more than 500 feet thick, about the height of a 40-story office building. | http://oceanservice.noaa.gov/facts/whysalty.html |
4.1875 | Classical Latin, distinguished by its formality and elegance, was greatly influenced in vocabulary, grammar, and style by Greek. By the end of the Roman Republic (1st cent. B.C.) classical Latin had become a suitable medium for the greatest poetry and prose of the day. Grammatically, classical Latin featured five declensions and six cases in its inflection of the noun; there was no definite article. Noun subclassifications included three genders (masculine, feminine, and neuter) and two numbers (singular and plural). Verb inflection was highly developed, expressing tense, mood, voice, person, and number. Latin is written in the Roman alphabet, which was apparently derived from the Etruscan alphabet. The latter, in turn, was adapted from the Greek alphabet (see Greek language). | http://www.factmonster.com/encyclopedia/society/latin-language-classical-latin.html |
4.15625 | 1935: The feasibility of radar is demonstrated for the British Air Ministry. It would prove to be a just-in-the-nick-of-time apparatus that helped save Great Britain from defeat in World War II.
Radar (for RAdio Detection And Ranging) was developed over the years with input from many sources, but it was Robert Watson-Watt, a Scottish physicist looking for a reliable method to help airmen locate and avoid approaching thunderstorms, who designed the first set put into practical use. Watson-Watt realized, as he perfected his device, that radio waves could be used to detect more than storms.
A Royal Air Force Heyford bomber was used for the Air Ministry demonstration at Daventry. Three times the plane passed overhead and three times the main beam of a BBC short-wave radio transmitter picked up reflected signals.
Impressed, the air ministers embraced the new technology and by September 1939, when war broke out in Europe, the British had a network of radar installations covering the English Channel and North Sea coasts.
It was radar, even more than the pluck of the dashing RAF pilots, that tipped the scales in England's favor in the Battle of Britain.
Hitler's strategic aerial onslaught, meant to clear the skies over the Channel and southeastern England preparatory to an invasion of the British Isles, might have succeeded if not for radar. The RAF was outnumbered by the Luftwaffe, and radar saved already-stretched Fighter Command from having to maintain constant air surveillance.
With radar providing an early-warning system, well-rested RAF pilots could be scrambled and rising to meet the incoming enemy formations in a matter of minutes. As the German fighters ran low on fuel and were forced to turn back, the Spitfires and Hurricanes could pick off the German bombers as they moved deeper into England.
The battle peaked during September and October 1940. The Germans, discouraged by their tactical errors and high losses, gradually tapered off their attacks, then abandoned them altogether when Hitler turned his attention to Russia.
An interesting historical footnote: Although radar was introduced to warfare by the British, the Germans developed their own version and used it effectively during the Allied air raids over occupied Europe and the Reich. | http://www.wired.com/2008/02/dayintech-0226/ |
4.125 | This Bible History Daily feature was originally published in June 2014. It has been updated.—Ed.
Masada—for many, the name evokes the image of a cliff rising dramatically above an austere desert landscape. The name is famously associated with the Masada siege, the final stand between the Jewish rebels and the relentless Roman army at the end of the First Jewish Revolt in 73/74 C.E. Trapped in the desert fortress-palace Herod built in the previous century, the rebels chose—as Jewish historian Josephus tells us—to commit mass suicide rather than be captured and enslaved by the Romans.
This final scene in the siege of Masada has been celebrated and immortalized as an act of heroic resistance on the part of the Jewish rebels. But what do we know about the Roman siege itself? In “The Masada Siege—From the Roman Viewpoint” in the July/August 2014 issue of Biblical Archaeology Review, Gwyn Davies examines the assault from the Roman perspective.
After the fall of Jerusalem in 70 C.E., the Romans turned their attention to stamping out the last of the rebels holding out at the fortresses of Herodium and Machaerus as well as in the “Forest of Jardes” (which has not yet been identified). The last remaining site occupied by the Jewish rebels was at Herod’s desert fortress-palace on the cliff-top of Masada.
Led by Roman general Flavius Silva, the Legio X Fretensis—a veteran military unit—began the siege operation against the rebels in 72 or 73 C.E.
Archaeological investigations of the Roman siege works at Masada have been much more limited in scope than those conducted on the cliff-top fortress. According to author Gwyn Davies, we must therefore consider both the account given by Josephus and the surviving archaeological evidence in order to reconstruct what happened in the Masada siege.
The Roman army began their assault, as described by Josephus, by throwing up “a wall all around the fortress to make it difficult for any of the besieged to escape, and posted sentinels to guard it” (The Jewish War VII.276). Archaeological investigations reveal that a 2.5-mile circumvallation wall ringed the area around the desert fortress. The wall, composed of rough stone blocks with a rubble core, measured more than 5 feet wide and 10 feet high. Fifteen towers lined the eastern and northern stretches of the circumvallation wall, while eight camps laid down around the wall served as bases and garrison points for the troops.
The most conspicuous surviving evidence of the Roman siege of Masada is the great assault ramp on the western slope of the cliff. On a natural spur that abuts the mountain (which Josephus calls the "Leuke," or "white promontory"), the Romans constructed a ramp composed of stone and earth reinforced with timber bracings. Josephus tells us that an ironclad siege tower housing a battering ram was hoisted up the ramp and placed into position to strike against the rebels' casemate wall. Indeed, the location of the breached defense wall lies directly above the modern summit of the ramp. Furthermore, the distribution of stone ballista projectiles discovered within the desert fortress suggests that they were fired from catapults mounted on a siege tower. Setting fire to the wood-and-earth defense wall, the Romans at last made it to the top of Masada.
For a deeper probe into how the Romans waged both literal and psychological warfare on the besieged rebels, read the full article “The Masada Siege—From the Roman Viewpoint” by Gwyn Davies in the July/August 2014 issue of Biblical Archaeology Review.
BAS Library Members: Read the full article “The Masada Siege—From the Roman Viewpoint” by Gwyn Davies as it appears in the July/August 2014 issue of Biblical Archaeology Review.
Not a BAS Library member yet? Join the BAS Library today.
This Bible History Daily feature was originally published on June 13, 2014. | http://www.biblicalarchaeology.org/daily/biblical-sites-places/biblical-archaeology-sites/the-masada-siege/ |
4.21875 | Monitoring Volcano Seismicity Provides Insight to Volcanic Structure
Moving magma and volcanic fluids trigger earthquakes.
Many processes in and around volcanoes can generate earthquakes. Most of the time, these processes are faulting and fracturing that do not lead to an eruption. However, volcanic earthquakes do occur as magma and volcanic gases rise to the surface from depth, which involves significant stress changes in the crust as the material migrates upward.
Volcano seismologists study several types of seismic events to better understand how magma and gases move towards the surface:
Volcano-tectonic (VT) earthquakes represent brittle failure of rock, the same process that occurs along purely "tectonic" faults such as the San Andreas Fault. At volcanoes, VT events can occur due to "normal" tectonic forces, changing stresses caused by moving magma, and movement of fluids through pre-existing cracks. Distinguishing between these various processes can be tricky and often requires data from other disciplines (geodesy, hydrology, gas geochemistry, and geology) to work out what's going on.
Long-period (LP) or low-frequency (LF) earthquakes are caused by cracks resonating as magma and gases move toward the surface. They are often seen prior to volcanic eruptions, but their occurrence is also part of the normal background seismicity at some volcanoes and does not necessarily indicate that an eruption is imminent. LF events can also be produced by non-magmatic processes, most notably glacier movement.
Tremor is a continuous high-amplitude seismic signal that can be caused by multiple processes, including long-lived resonance due to extended flow of magma movement through cracks, continuous occurrence of VT or LP/LF events that are so closely spaced in time that they can't be visually separated, and explosions.
Most volcano-related earthquakes are too small to feel, generally quite shallow (usually within 10 km (7 mi) of the surface), and can occur in swarms consisting of dozens to hundreds of events. Most swarms usually don't lead to eruptions, but most eruptions are preceded by swarms. Therefore, during any heightened periods of seismic activity at a volcano, seismologists work around the clock to detect subtle variations in the type, location, and intensity of seismic activity to determine whether or not an eruption may occur. | http://volcanoes.usgs.gov/vhp/earthquakes.html |
4.125 | Religious Holidays Teacher Resources
Find Religious Holidays educational ideas and activities
Showing 1 - 20 of 54 resources
Public Christmas Displays and Lynch v. Donnelly
Does a Christmas display on government property violate the Constitution? Learners study the Establishment Clause of the First Amendment and learn about the landmark Supreme Court case Lynch v. Donnelly through watching a documentary and...
10th - 12th Social Studies & History CCSS: Adaptable
Teaching the Easter Story
If you are looking for a secular approach to teaching about Easter, this may just be the resource for you. Pupils read a paraphrased text depicting the last supper, arrest, and crucifixion of Jesus Christ as told in the Bible, while also...
4th - 8th Social Studies & History CCSS: Adaptable
We Wish You a Merry Something, America
In this we wish you a merry something, America worksheet, 8th graders read or listen to a paragraph explaining that the term merry Christmas is not politically correct in America. Students discuss 4 warm up topics, complete pre-reading,...
8th Social Studies & History
The Diversity of Filipinos in the United States
ELLs are introduced to the experiences of Filipino immigrants to the United States. As a class, they discuss the various waves of immigration to the United States and state the reasons why they would leave the Philippines. They compare...
9th - 12th English Language Arts
7th Grade World Religions Research Worksheet
A perfect resource to guide your young research analysts as they dive into a paper about any one of the major world religions. It includes very basic questions that pertain to the information necessary to compose a good expository paper...
7th Social Studies & History
Making The Holidays Special
Students examine ways in which holiday television specials reflect some of the religious, historic and cultural themes of the holidays on which they focus. They create their own holiday television specials in groups, each focusing on a...
6th - 12th Visual & Performing Arts
Developing a Sense of Pride in Oneself and Respecting the Similarities and Differences of Others
First graders use the think/pair/share strategy to show the similarities and differences of their holidays. They discuss reasons it's important to accept the different ways people celebrate. Students listen as the teacher reads "Uncle...
1st Social Studies & History
Life of a Child Living at Monticello and a Quaker Village
Learners study the life of a child living at Thomas Jefferson's home of Monticello contrasted to the life of child living on a Quaker settlement. In this early Virginia history lesson, students read background information about the life...
4th - 7th English Language Arts | http://www.lessonplanet.com/lesson-plans/religious-holidays |
4.0625 | What is a rubbing?
There is no civilization that has relied as much as the Chinese on carving inscriptions into stone as a way of preserving the memory of its history and culture. Records of important events were inscribed on bone and bronze as early as the second millennium B.C., and brick, tile, ceramics, wood, and jade were also engraved to preserve writings and pictorial representations; but the medium most used for long inscriptions was stone.
The most extensive of several large projects to preserve authoritative texts was the carving of the Buddhist canon on 7,137 stone tablets or steles—over 4 million characters—in an undertaking that continued from 605 to 1096. Earlier, from 175 to 183, the seven Confucian Classics in over 200 thousand characters were carved on 46 steles, front and back, to establish and preserve standard versions of the texts for students, scholars, and scholar-officials of the Eastern Han dynasty. The Confucian Classics were also inscribed by six successive dynasties, the last engraving, by the Manchu Ch’ing dynasty, at the end of the eighteenth century. At sacred sites, cliffs and rock faces were also used for large religious inscriptions.
By the beginning of the seventh century, or perhaps much earlier, the Chinese had found a method of making multiple copies of old inscribed records, using paper and ink. Rubbings (also known as inked squeezes) in effect “print” the inscription, making precise copies that can be carried away and distributed in considerable numbers.
To make a rubbing, a sheet of moistened paper is laid on the inscribed surface and tamped into every depression with a rabbit’s-hair brush. (By another method, the paper is laid on dry, then brushed with a rice or wheat-based paste before being tamped.) When the paper is almost dry, its surface is tapped with an inked pad. The paper is then peeled from the stone. Since the black ink does not touch the parts of the paper that are pressed into the inscription, the process produces white characters on a black background. (If the inscription is cut in relief, rather than intaglio, black and white are reversed.)
This technique appeared simultaneously with, if not earlier than, the development of printing in China. Many scholars contend that block printing derived from the technique of making impressions with carved seals: in printing, a mirror image is carved in relief on a wood block; the surface that stands in relief is then inked, and paper pressed onto it—the reverse of the method used for making rubbings.
A rubbing, by accurately reproducing every line of the inscription in a white impression on black ground, provides a sharper and more readable image than the original inscription or a photograph of the original. The advantage of this technique is that it may be applied to any hard surface, including rock faces or cliffsides, pictorial reliefs, or even bronze vessels and figurines. As long as the object inscribed is in good condition, a rubbing of it can be made, regardless of its age or location. And by providing an accurate replica of the surface of a given inscription or relief, a rubbing gives the scholar, and especially the student of calligraphy, insights that simple transcriptions or freehand copies, subject to scribal errors and the copyist’s skill, cannot.
Rubbings made a century ago preserve a far better record of the inscription than the stone itself, which might have suffered from natural erosion, not to mention damage caused by having been tamped in the process of taking thousands of rubbings. Early rubbings, therefore, are invaluable sources, preserving impressions of countless inscriptions now defaced or completely lost. Paradoxically, it is paper, usually thought of as a fragile medium, that preserves unique copies of inscriptions that were conceived of as permanent records in stone.
For at least a thousand years, scholars and connoisseurs have collected rubbings, pressing seals of ownership on them as they do on paintings, calligraphy, and other prized objects in their collections. The East Asian Library’s collection includes rubbings with collectors’ seals from as early as the seventeenth century, authenticating not only the age of the rubbings, but also the inscriptions from which they were taken.
Content of Inscriptions:
The scope of the content of inscriptions is encyclopedic, ranging from canonical texts sanctioned by the emperor to personal epitaphs and eulogies. Inscriptions, characteristically those on large upright stone slabs or steles, often provide historical information unavailable elsewhere, paying tribute to local personages by setting down their careers and deeds, or recording local events, military campaigns and victories, the establishment or reconstruction of temples, charitable subscriptions to religious institutions, hospitals, and orphanages, and meetings of guilds. They are often unique sources of information about local matters and persons, since the official histories and dynastic compilations heavily favor imperial affairs and the practice of statecraft.
As sources for research in textual criticism, rubbings provide incontrovertible evidence that can be accurately dated. They provide variant readings and, in some cases, whole passages that have been dropped in the transmission of a published text or manuscript. These variants are especially important since the whole process of textual editing, collation, publishing, and transmission was governed by the dictates of orthodox Confucianism or other systems of thought and religion. Early inscriptions are usually more reliable than documents preserved in printed or manuscript form. Texts inscribed in stone or metal are not easily altered to reflect change in official policy or thought; this is especially true of inscribed texts only recently uncovered by archaeologists.
For the study of the history of writing and calligraphy, from the earliest script on shell and bone down to the running and cursive styles of later masters, inscriptions are irreplaceable sources. They trace the evolution of writing, century after century. Since the early dynasties, too, inscriptions have been carved in stone to preserve examples of the styles of great calligraphers. Rubbings of engraved models of calligraphy, known as fa-t’ieh 法帖, are the most widely reproduced and consulted genre of rubbings in China, Japan, and Korea today.
| http://www.lib.berkeley.edu/EAL/stone/rubbings.html |
4.15625 | Ireland committed to promote children’s rights when it signed up to the United Nations Convention on the Rights of the Child (UNCRC) in 1992. The Children’s Rights Alliance uses the Convention as a framework to change Ireland’s laws, policies and services so that all children are protected, nurtured and empowered. This brings children’s rights to the top of the agenda of our Government, legislators and key decision-makers.
What Does the Convention on the Rights of the Child Say?
The UNCRC defines the child as a person under 18 years of age. It acknowledges the primary role of parents and the family in the care and protection of children, as well as the obligation of the State to help them carry out these duties. Read the full text of the United Nations Convention on the Rights of the Child.
The UN Convention consists of 41 articles, each of which details a different type of right. These rights are not ranked in order of importance; instead they interact with one another to form one integrated set of rights. A common approach is to group these articles together under the following themes:
- Survival rights: include the child’s right to life and the needs that are most basic to existence, such as nutrition, shelter, an adequate living standard, and access to medical services.
- Development rights: include the right to education, play, leisure, cultural activities, access to information, and freedom of thought, conscience and religion.
- Protection rights: ensure children are safeguarded against all forms of abuse, neglect and exploitation, including special care for refugee children; safeguards for children in the criminal justice system; protection for children in employment; protection and rehabilitation for children who have suffered exploitation or abuse of any kind.
- Participation rights: encompass children's freedom to express opinions, to have a say in matters affecting their own lives, to join associations and to assemble peacefully. As their capacities develop, children should have increasing opportunity to participate in the activities of society, in preparation for adulthood.
The UN Convention includes four articles that are given special emphasis. These are also known as ‘general principles’. These rights are the bedrock for securing the additional rights in the UN Convention.
- that all the rights guaranteed by the UNCRC must be available to all children without discrimination of any kind (Article 2);
- that the best interests of the child must be a primary consideration in all actions concerning children (Article 3);
- that every child has the right to life, survival and development (Article 6); and
- that the child’s view must be considered and taken into account in all matters affecting him or her (Article 12).
Implementing the Convention on the Rights of the Child
When Ireland signed the UN Convention on the Rights of the Child (UNCRC), the Government agreed to be assessed periodically by the UN on its progress in implementing the rights in the Convention. This means that every few years the State submits a progress report to the UN Committee on the Rights of the Child and agrees to an oral examination by the Committee members. The Children’s Rights Alliance also submits an independent report on behalf of non-governmental organisations (NGOs). This is known as the ‘Parallel Report’ and we have done this three times; in 1998, in 2006 and in 2015. Read more about the reporting process.
Read More About the Convention on the Rights of the Child
Full text UNCRC English
Full text UNCRC Irish
Summary of the UNCRC information Sheet
What is the UNCRC information Sheet
History of the UNCRC Information Sheet
Children's Rights Alliance UNCRC Parallel Report (1997)
Children's Rights Alliance UNCRC Parallel Report (2006)
Children's Rights Alliance Parallel Report 'Are We There Yet' (2015) | http://www.childrensrights.ie/childrens-rights-ireland/un-convention-rights-child |
4.15625 | A poetic text is a complex reality with visual qualities, musical qualities and linguistic aspects, all of which need to be considered. The visual form the poem takes on the page is called lay-out. It signals to the reader that the text is a poem because it follows a number of typographical conventions which are peculiar to poetry. Here are the most frequent.
1. Words are arranged into lines which usually don't cover the whole page as in prose.
2. Lines are grouped together. Each group is separated from the next by a space. Such groups are called stanzas.
3. Lines usually begin with a capital letter.
4. Some lines may be indented.
As regards sound, lines in a poem can come close to the condition of music through several devices, such as rhyme and stress. Lines rhyme when their last syllables make the same sound. It is sound, not spelling, that determines rhyme: in Dreams "die" rhymes with "fly" and "go" with "snow". Stress is another powerful device that in poetry can generate musical effects. Stressed and unstressed syllables can alternate in a line in several combinations which are called by different names. The pattern unstress-stress is called iambic and it is the most common pattern in English poetry.
Language in poetry also needs close analysis because it is carefully chosen and arranged in order to establish meaningful connections, introduce images and generate a design of words and structures. Repetition can stress key words or concepts and add to the musical qualities of the text. All these features make the language of poetry different from ordinary language.
To identify the characteristics of a poetic text at visual level, sound level, and language level is only the initial move in the analysis of a poem. Two more moves are necessary. The first is to recognise and explain what part each characteristic plays in conveying the poem's meaning - in other words, what functions they serve. The second is to see how each level interacts with the others and how all contribute to the expression of the poem's main idea. Though in the course of the analysis you take the text to pieces, the poem remains intact. A poetic text is not a sum of fragments but an organic unit where the elements interact with one another and with the reader to generate meaning.
A traditional poem has a number of formal features which enable us to describe it as different from prose. Of the many recurrent features a poem may have, rhyme and rhythm are about the sound patterns of a poetic text.
Rhyme is a sound pattern which involves regular repetition of consonant and vowel sounds. Rhymes may form a wide range of musical designs within a poem. The sound pattern they create is called rhyme scheme and can be identified by using letters of the alphabet. Another sound pattern is alliteration, which is the repetition of the initial consonant sound in two or more words in a line or consecutive lines of a poem. Perfect rhyme and alliteration are not the only forms of sound correspondence between parts of the words. Words can be arranged in a poem so as to produce effects of assonance and consonance. The first is the repetition of middle vowel sounds between different consonant sounds; the second is the close repetition of identical consonant sounds after differing vowel sounds. Rhythm is a word of Greek origin meaning 'flowing'. It is part of language: when you speak you follow a certain rhythm even unconsciously. Poetry is rhythmical in the sense that it flows according to a musical movement decided by the poet. In our analysis of rhythm, we started by looking at the length of a line, which depends on the number of its syllables. The next step was to do with syllable stress. From the activities you should have observed that not all syllables in a word are stressed, nor are all words in a line. The words which are stressed are pronounced more firmly and more slowly. In poetry lines consist of units of stressed and unstressed syllables which can alternate in several combinations or patterns called by different names. A unit of unstressed and stressed syllables makes up one foot. The type of foot depends on the number of syllables and on how stresses are arranged. An unstressed syllable followed by a stressed one, as in the words "first green", forms an iambic foot. Lines of poetry made up predominantly of iambs are referred to as iambic verse, which is the most common metre in English. Its most important form is the iambic pentameter which consists of five iambic feet. When the iambic pentameter does not rhyme it is called blank verse. Blank verse is extremely flexible and can come very close to everyday speech.
The visual form the poem takes on the page is called lay-out and is the third formal feature by which a poetic text marks itself off from prose. It usually follows a number of typographical conventions which are peculiar to poetry. Here are the most frequent.
1 Words are arranged into lines which usually don't cover the whole page as in prose
2 Lines usually begin with a capital letter
3 Some lines may be indented
4 Lines are grouped together. Each group is separated from the next by a space. Such groups are called stanzas.
The shape of a poem may largely depend on its stanza form. Traditional stanzas have the same number of lines and usually share the same rhyme scheme and stress pattern. For example, couplets rhyme aa, bb, cc, ...; a tercet usually rhymes aba, bcb; a quatrain, which in English verse is the commonest stanza form, has a variety of rhyme schemes: abcb, abab, aabb, etc.
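Because a rhyme scheme is just a sequence of labels assigned in order of first appearance, the labeling procedure itself can be sketched in a few lines of code. The snippet below is a crude, spelling-based illustration (the function name and the two-letter suffix rule are inventions for this sketch; genuine rhyme depends on sound, as the "die"/"fly" example above shows):

```python
def rhyme_scheme(lines, suffix_len=2):
    """Label line endings with letters (a, b, c, ...).

    Two lines are treated as rhyming when their last words end in the
    same letter cluster. This is only an approximation of the manual
    labeling procedure, not a phonetic analysis.
    """
    endings, scheme = [], []
    for line in lines:
        end = line.split()[-1].strip(".,;:!?'\"").lower()[-suffix_len:]
        if end not in endings:
            endings.append(end)                    # first time: new letter
        scheme.append(chr(ord("a") + endings.index(end)))
    return "".join(scheme)

quatrain = ["The moon at night", "We dance all day",
            "A distant light", "Then drift away"]
print(rhyme_scheme(quatrain))  # -> "abab"
```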
The arrangement of words into lines can create pauses, too, which provide a guide to reading, regulating both speed and sense. And how lines end can have important effects upon a poem. A line can be end-stopped, when the meaning and the syntax pause, or run-on, when meaning and syntax continue without a pause into the next line.
Poets can adopt regular stress patterns, rhyme schemes and stanza forms to write poems which please the eye by their typographical symmetry, or they can feel free to shorten or lengthen the lines, to introduce indentations and make the most of the white space on the page to create unconventional designs. Since the Second World War various poets have made more experimental use of visual lay-out, creating images out of letters and words. These images are known as 'concrete poems'. To conclude, the lay-out can be exploited alongside other features of poetic text to fulfil several functions, such as:
- giving prominence to words in isolation
- introducing divisions into parts of the poem
- reinforcing syntax and punctuation
- slowing down the pace of reading
- drawing a visual representation of the message itself
- amusing the reader
| http://www.inftub.com/letteratura/lettere/A-poetic-text-is-a-complex-rea35226.php |
4.0625 | Big seeds produced by many tropical trees were probably once ingested and then defecated whole by huge mammals called gomphotheres that dispersed the seeds over large distances. But gomphotheres were probably hunted to extinction more than 10,000 years ago. So why aren't large-seeded plants also extinct? A new Smithsonian report to be published in the early online edition of Proceedings of the National Academy of Sciences during the week of July 16, suggests that rodents may have taken over the seed dispersal role of gomphotheres.
By attaching tiny radio transmitters to more than 400 seeds, Patrick Jansen, scientist at the Smithsonian Tropical Research Institute and Wageningen University, and his colleagues found that 85 percent of the seeds were buried in caches by agoutis, common, house cat-sized rodents in tropical lowlands. Agoutis carry seeds around in their mouths and bury them for times when food is scarce.
Radio tracking revealed a surprising finding: when the rodents dig up the seeds, they usually do not eat them, but instead move them to a new site and bury them, often many times. One seed in the study was moved 36 times, traveling a total distance of 749 meters and ending up 280 meters from its starting point. It was ultimately retrieved and eaten by an agouti 209 days after initial dispersal.
Researchers used remote cameras to catch the animals digging up cached seeds. They discovered that frequent seed movement was primarily caused by animals stealing seeds from one another. Ultimately, 35 percent of the seeds ended up more than 100 meters from their origin. "Agoutis moved seeds at a scale that none of us had ever imagined," said Jansen.
"Previously, researchers had observed seeds being moved and buried up to five times, but in this system it seems that this re-caching behavior was on steroids," said Ben Hirsch, who ran the fieldwork as a post-doctoral fellow at the Smithsonian Tropical Research Institute in Panama. "By radio-tagging the seeds, we were able to track them as they were moved by agoutis, to find out if they were taken up into trees by squirrels, and to discover seeds inside spiny rat burrows. This resolution allowed us to gain a much better understanding of how each rodent species affects seed dispersal and survival."
By taking over the role of large Pleistocene mammals in dispersing these large seeds, thieving, scatter-hoarding agoutis may have saved these tree species from extinction.
|Contact: Sonia Tejada|
Smithsonian Tropical Research Institute | http://www.bio-medicine.org/biology-news-1/Have-thieving-rodents-saved-tropical-trees-3F-25859-1/ |
4.03125 | Bifurcations: Answer to Example 1
Example: Consider the autonomous equation dy/dt = y² + ay + 4, with parameter a.
- Draw the bifurcation diagram for this differential equation.
- Find the bifurcation values, and describe how the behavior of the solutions changes close to each bifurcation value.
- First, we need to find the equilibrium points (critical points). They are found by setting dy/dt = 0. We get
y² + ay + 4 = 0.
This is a quadratic equation, which solves into
y = (-a ± √(a² - 16))/2.
- if a² - 16 > 0, then we have two equilibrium points;
- if a² - 16 = 0, then we have one equilibrium point;
- and if a² - 16 < 0, then we have no equilibrium points.
This clearly implies that the bifurcation occurs when a² - 16 = 0, or equivalently a² = 16, which gives a = ±4.
The bifurcation diagram is given below. The equilibrium points are pictured in white; red colored areas are areas with "up" arrows, and blue colored areas are areas with "down" arrows.
- The bifurcation values are a = 4 and a = -4. Let us discuss what is happening around a = 4 (similar conclusions hold for the other value):
- Left of a = 4: no equilibrium;
- At a = 4: we have a node (up), i.e. attractive from below and repelling from above (look at the bifurcation diagram);
- Right of a = 4: we have two equilibria; the smaller one is a sink, the bigger one is a source, which explains the node behavior at the bifurcation value a = 4.
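For readers who want to check this numerically, here is a minimal sketch. It assumes the reconstructed right-hand side f(y) = y² + ay + 4 used above, and classifies each equilibrium by the sign of f'(y) = 2y + a:

```python
import math

def equilibria(a):
    """Real equilibria of dy/dt = y^2 + a*y + 4, smallest first."""
    disc = a * a - 16.0
    if disc < 0:
        return []                          # no real roots: no equilibria
    r = math.sqrt(disc)
    return sorted([(-a - r) / 2.0, (-a + r) / 2.0])

def stability(a, y):
    """The sign of f'(y) = 2y + a decides the type of the equilibrium."""
    s = 2.0 * y + a
    return "sink" if s < 0 else ("source" if s > 0 else "node")

for a in (3.0, 4.0, 5.0):
    print(a, [(round(y, 3), stability(a, y)) for y in equilibria(a)])
# a=3.0 -> []                              (no equilibria left of the bifurcation)
# a=4.0 -> [(-2.0, 'node')]                (the semi-stable node at a = 4)
# a=5.0 -> [(-4.0, 'sink'), (-1.0, 'source')]
```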
| http://www.sosmath.com/diffeq/first/bifurcation/example1/answer/answer.html |
4.21875 | Heart murmurs and other sounds
A heart murmur is a blowing, whooshing, or rasping sound heard during a heartbeat. The sound is caused by turbulent (rough) blood flow through the heart valves or near the heart.
Chest sounds - murmurs; Heart sounds - abnormal; Murmur - innocent; Innocent murmur; Systolic heart murmur; Diastolic heart murmur
The heart has four chambers:
- Two upper chambers (atria)
- Two lower chambers (ventricles)
The heart has valves that close with each heartbeat, causing blood to flow in only one direction. The valves are located between the chambers.
Murmurs can happen for many reasons, such as:
- When a valve does not close tightly and blood leaks backward (regurgitation)
- When blood flows through a narrowed or stiff heart valve (stenosis)
There are several ways in which your doctor may describe a murmur:
- Murmurs are classified ("graded") depending on how loud the murmur sounds with a stethoscope. The grading is on a scale of I to VI. Grade I can barely be heard. An example of a murmur description is a "grade II/VI murmur." (This means the murmur is grade 2 on a scale of 1 - 6).
- In addition, a murmur is described by the stage of the heartbeat when the murmur is heard. A heart murmur may be described as systolic or diastolic.
When a murmur is more noticeable, the doctor may be able to feel it with the palm of the hand over the heart.
Things the doctor will look for in the exam include:
- Does the murmur occur when the heart is resting or contracting?
- Does it last throughout the heartbeat?
- Does it change when you move?
- Can it be heard in other parts of the chest, on the back, or in the neck?
- Where is the murmur heard the loudest?
Many heart murmurs are harmless. These types of murmurs are called innocent murmurs. They will not cause any symptoms or problems. Innocent murmurs do not need treatment.
Other heart murmurs may indicate an abnormality in the heart. These abnormal murmurs can be caused by:
Significant murmurs in children are more likely to be caused by:
Multiple murmurs may result from a combination of heart problems.
Children often have murmurs as a normal part of development. These murmurs do not need treatment. They may include:
- Pulmonary flow murmurs
- Still's murmur
- Venous hum
What to Expect at Your Office Visit
A doctor or nurse can listen to your heart sounds by placing a stethoscope over your chest. You will be asked questions about your medical history and symptoms, such as:
- Have other family members had murmurs or other abnormal heart sounds?
- Do you have a family history of heart problems?
- Do you have chest pain, fainting, shortness of breath, or other breathing problems?
- Have you had swelling, weight gain, or bulging veins in the neck?
- Does your skin have a bluish color?
The doctor may ask you to squat, stand, or hold your breath while bearing down or gripping something with your hands to listen to your heart.
The following tests may be done:
| http://umm.edu/health/medical/ency/articles/heart-murmurs-and-other-sounds |
4.40625 | Mudrocks are a class of fine grained siliciclastic sedimentary rocks. The varying types of mudrocks include: siltstone, claystone, mudstone, slate, and shale. Most of the particles are less than 0.0625 mm (1/16th mm or 0.0025 inches) and are too small to study readily in the field. At first sight the rock types look quite similar; however, there are important differences in composition and nomenclature. There has been a great deal of disagreement involving the classification of mudrocks. There are a few important hurdles to classification, including:
- Mudrocks are the least understood, and one of the most understudied sedimentary rocks to date
- It is difficult to study mudrock constituents, due to their diminutive size and susceptibility to weathering on outcrops
- And most importantly, there is more than one classification scheme accepted by scientists
Mudrocks make up fifty percent of the sedimentary rocks in the geologic record, and are easily the most widespread deposits on Earth. Fine sediment is the most abundant product of erosion, and these sediments contribute to the overall omnipresence of mudrocks. With increased pressure over time the platey clay minerals may become aligned, with the appearance of fissility or parallel layering. This finely bedded material that splits readily into thin layers is called shale, as distinct from mudstone. The lack of fissility or layering in mudstone may be due either to original texture or to the disruption of layering by burrowing organisms in the sediment prior to lithification.
From the beginning of civilization, when pottery and mudbricks were made by hand, to now, mudrocks have been important. The first book on mudrocks, Géologie des Argiles by Millot, was not published until 1964; however, scientists, engineers, and oil producers have understood the significance of mudrocks since the discovery of the Burgess Shale and the relatedness of mudrocks and oil. Literature on the elusive yet omnipresent rock-type has been increasing in recent years, and technology continues to allow for better analysis.
Mudrocks, by definition, consist of at least fifty percent mud-sized particles. Specifically, mud is composed of silt-sized particles that are between 1/16 – 1/256 of a millimeter in diameter, and clay-sized particles which are less than 1/256 millimeter.
Mudrocks contain mostly clay minerals, and quartz and feldspars. They can also contain the following particles at less than 63 micrometres: calcite, dolomite, siderite, pyrite, marcasite, heavy minerals, and even organic carbon.
There are various synonyms for fine-grained siliciclastic rocks containing fifty percent or more of its constituents less than 1/256 of a millimeter. Mudstones, shales, lutites, and argillites are common qualifiers, or umbrella-terms; however, the term mudrock has increasingly become the terminology of choice by sedimentary geologists and authors.
The term "mudrock" allows for further subdivisions of siltstone, claystone, mudstone, and shale. For example, a siltstone would be made of more than 50-percent grains that equate to 1/16 - 1/256 of a millimeter. "Shale" denotes fissility, which implies an ability to part easily or break parallel to stratification. Siltstone, mudstone, and claystone implies lithified, or hardened, detritus without fissility.
Overall, "mudrocks" may be the most useful qualifying term, because it allows for rocks to be divided by its greatest portion of contributing grains and their respective grain size, whether silt, clay, or mud.
Type | Min grain | Max grain
---|---|---
Claystone | 0 µm | 4 µm
Mudstone | 0 µm | 64 µm
Siltstone | 4 µm | 64 µm
Shale | 0 µm | 64 µm
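The grain-size cutoffs in this table, together with the fissility criterion described above, can be read as a simple decision rule. The sketch below is a hypothetical helper, not a formal classification standard; the function name and the order of the tests are choices made for this illustration:

```python
def classify_mudrock(pct_clay, pct_silt, fissile=False):
    """Classify a fine-grained siliciclastic rock from its grain-size mix.

    pct_clay: percent of grains < 1/256 mm (~4 um)
    pct_silt: percent of grains between 1/256 and 1/16 mm (4-64 um)
    fissile:  True if the rock splits readily into thin parallel layers
    """
    if pct_clay + pct_silt < 50:
        return "not a mudrock (under 50% mud-sized particles)"
    if fissile:
        return "shale"         # fissile mudrock, whatever the clay/silt split
    if pct_clay >= 50:
        return "claystone"     # clay-dominated, non-fissile
    if pct_silt >= 50:
        return "siltstone"     # silt-dominated, non-fissile
    return "mudstone"          # mixed mud, non-fissile

print(classify_mudrock(60, 20))                # claystone
print(classify_mudrock(30, 55))                # siltstone
print(classify_mudrock(40, 40, fissile=True))  # shale
```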
A claystone is a lithified, non-fissile mudrock. In order for a rock to be considered a claystone, it must consist of at least fifty percent clay, which measures less than 1/256 of a millimeter in particle size. Clay minerals are integral to mudrocks, and represent the first or second most abundant constituent by volume. There are 35 recognized clay mineral species on Earth. They make muds cohesive and plastic, or able to flow. Clay particles are by far the smallest recognized in mudrocks. Most clay-sized materials in nature are clay minerals, but quartz, feldspar, iron oxides, and carbonates can also weather to the size of a typical clay mineral.
For a size comparison, a clay-sized particle is 1/1000 the size of a sand grain. This means a clay particle will travel 1000 times further at constant water velocity, thus requiring quieter conditions for settlement.
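One way to see why mud needs such quiet water is to compare idealized settling speeds. Under Stokes' law the settling velocity grows with the square of grain diameter, so the contrast between clay and sand is even starker than a simple linear scaling suggests. The sketch below assumes quartz grains (density 2650 kg/m³) settling in water; the numbers are rough, and Stokes' law itself breaks down for grains much coarser than fine sand:

```python
# Stokes settling velocity: v = (rho_p - rho_f) * g * d^2 / (18 * mu)
G = 9.81          # gravity, m/s^2
RHO_P = 2650.0    # assumed quartz grain density, kg/m^3
RHO_F = 1000.0    # water density, kg/m^3
MU = 1.0e-3       # dynamic viscosity of water, Pa*s

def stokes_velocity(d_m):
    """Terminal settling velocity (m/s) of a small sphere of diameter d_m."""
    return (RHO_P - RHO_F) * G * d_m ** 2 / (18.0 * MU)

for name, d_um in [("clay (2 um)", 2.0), ("silt (20 um)", 20.0),
                   ("fine sand (125 um)", 125.0)]:
    print(f"{name:>18}: {stokes_velocity(d_um * 1e-6):.2e} m/s")
# Velocity scales with d^2: the 2 um clay grain settles ~100x more slowly
# than the silt grain and ~4000x more slowly than the fine sand, which is
# why clay only drops out of suspension in very quiet water.
```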
The formation of clay is well understood, and can come from soil, volcanic ash, and glaciation. Ancient mudrocks are another source, because they weather and disintegrate easily. Feldspar, amphiboles, pyroxenes, and volcanic glass are the principle donors of clay minerals.
The terminology of "mudstone" is not to be confused with the Dunham classification scheme for limestones. In Dunham's classification, a mudstone is any limestone containing less than ten percent carbonate grains. Note, a siliciclastic mudstone does not deal with carbonate grains. Friedman, Sanders, and Kopaska-Merkel (1992) suggest the use of "lime mudstone" to avoid confusion with siliciclastic rocks.
A siltstone is a lithified, non-fissile mudrock. In order for a rock to be named a siltstone, it must contain over fifty percent silt-sized material. Silt is any particle smaller than sand, 1/16 of a millimeter, and larger than clay, 1/256 of a millimeter. Silt is believed to be the product of physical weathering, which can involve freezing and thawing, thermal expansion, and release of pressure. Physical weathering does not involve any chemical changes in the rock, and it may be best summarised as the physical breaking apart of a rock.
One of the highest proportions of silt found on Earth is in the Himalayas, where phyllites are exposed to rainfall of up to five to ten meters (16 to 33 feet) a year. Quartz and feldspar are the biggest contributors to the silt realm, and silt tends to be non-cohesive, non-plastic, but can liquefy easily.
There is a simple test that can be done in the field to determine whether a rock is a siltstone or not, and that is to put the rock to one's teeth. If the rock feels "gritty" against one's teeth, then it is a siltstone.
Shale is a fine grained, hard, laminated mudrock, consisting of clay minerals, and quartz and feldspar silt. Shale is lithified and fissile. It must have at least 50-percent of its particles measure less than 0.062 mm. This term is confined to argillaceous, or clay-bearing, rock.
There are many varieties of shale, including calcareous and organic-rich; however, black shale, or organic-rich shale, deserves further evaluation. In order for a shale to be a black shale, it must contain more than one percent organic carbon. A good source rock for hydrocarbons can contain up to twenty percent organic carbon. Generally, black shale receives its influx of carbon from algae, which decays and forms an ooze known as sapropel. When this ooze is buried to a depth of three to six kilometers (1.8 - 3.7 miles) and heated to 90–120 °C (194–248 °F), it will form kerogen. Kerogen can be heated, and yield 10–150 US gallons (0.038–0.568 m3) of product per ton of rock.
Slate is a hard mudstone that has undergone metamorphosis, and has well-developed cleavage. It has gone through metamorphism at temperatures between 200–250 °C (392–482 °F), or extreme deformation. Since slate is formed in the lower realm of metamorphism, based on pressure and temperature, slate retains its stratification and can be defined as a hard, fine-grained rock.
Slate is often used for roofing, flooring, or old-fashioned stone walls. It has an attractive appearance, and its ideal cleavage and smooth texture are desirable.
Creation of mud and mudrocks
Most mudrocks form in oceans or lakes, because these environments provide the quiet waters necessary for deposition. Although mudrocks can be found in every depositional environment on Earth, the majority are found in lakes and oceans.
Mud transport and supply
Heavy rainfall provides the energy and runoff necessary for mud, clay, and silt transport. Southeast Asia, including Bangladesh and India, receives high amounts of rain from monsoons, which wash sediment from the Himalayas and surrounding areas to the Indian Ocean.
Warm, wet climates are best for weathering rocks, and there is more mud on ocean shelves off tropical coasts than on temperate or polar shelves. The Amazon system, for example, has the third largest sediment load on Earth, with rainfall providing clay, silt, and mud from the Andes in Peru, Ecuador, and Bolivia.
Rivers, waves, and longshore currents segregate mud, silt, and clay from sand and gravel due to fall velocity. Longer rivers, with low gradients and large watersheds, have the best carrying capacity for mud. The Mississippi River, a good example of long, low gradient river with a large amount of water, will carry mud from its northernmost sections, and deposit the material in its mud-dominated delta.
Mudrock depositional environments
Below is a listing of various environments that act as sources, modes of transportation to the oceans, and environments of deposition for mudrocks.
The Ganges in India, the Yellow in China, and the Lower Mississippi in the United States are good examples of alluvial valleys. These systems have a continuous source of water, and can contribute mud through overbank sedimentation, when mud and silt are deposited overbank during flooding, and oxbow sedimentation, where an abandoned stream is filled by mud.
In order for an alluvial valley to exist there must be a highly elevated zone, usually uplifted by active tectonic movement, and a lower zone, which acts as a conduit for water and sediment to the ocean.
Vast quantities of mud and till are generated by glaciations and deposited on land as till and in lakes. Glaciers can erode already susceptible mudrock formations, and this process enhances glacial production of clay and silt.
The Northern Hemisphere contains 90-percent of the world's lakes larger than 500 km² (about 193 sq mi), and glaciers created many of those lakes. Lake deposits formed by glaciation, including deep glacial scouring, are abundant.
Although glaciers formed 90-percent of lakes in the Northern Hemisphere, they are not responsible for the formation of ancient lakes. Ancient lakes are the largest and deepest in the world, and hold up to twenty percent of today's petroleum reservoirs. They are also the second most abundant source of mudrocks, behind marine mudrocks.
Ancient lakes owe their abundance of mudrocks to their long lives and thick deposits. These deposits were sensitive to changes in oxygen and rainfall, and they offer a robust record of past climate.
A delta is a subaerial or subaqueous deposit formed where rivers or streams deposit sediment into a water body. Deltas such as those of the Mississippi and Congo can deposit massive amounts of sediment and can move sediment into deep ocean waters. Delta environments form at the mouth of a river, where the water slows as it enters the ocean and silt and clay settle out.
Low-energy deltas, which deposit a great deal of mud, are located in lakes, gulfs, seas, and small oceans where coastal currents are also weak. Sand- and gravel-rich deltas are high-energy deltas, where waves dominate and mud and silt are carried much farther from the mouth of the river.
Coastal currents, mud supply, and waves are key factors in coastline mud deposition. The Amazon River supplies about 500 million tons of sediment, mostly clay, to the coastal region of northeastern South America; roughly 250 million tons of this sediment moves along the coast and is deposited. Much of the mud that accumulates here is more than 20 meters (65 feet) thick, and it extends 30 kilometers (19 mi) into the ocean.
Much of the sediment carried by the Amazon comes from the Andes mountains, and the total distance traveled by the sediment can reach 6,000 km (3,700 mi).
About 70 percent of the Earth's surface is covered by ocean, and marine environments hold the world's highest proportion of mudrocks. Marine deposits have a great deal of lateral continuity, in contrast to the confined deposits of the continents.
In comparison, continents are temporary stewards of mud and silt; the inevitable home of mudrock sediments is the ocean. See the mudrock cycle below to understand the burial and resurgence of these particles.
There are various environments in the oceans, including deep-sea trenches, abyssal plains, volcanic seamounts, convergent, divergent, and transform plate margins. Not only is land a major source of the ocean sediments, but organisms living within the ocean contribute, as well.
The world's rivers transport the largest volume of suspended and dissolved loads of clay and silt to the sea, where they are deposited on ocean shelves. At the poles, glaciers and floating ice drop deposits directly to the sea floor. Winds can provide fine grained material from arid regions, and explosive volcanic eruptions contribute as well. All of these sources vary in the rate of their contribution.
Sediment moves to the deeper parts of the oceans by gravity, and the processes in the ocean are comparable to those on land.
Location has a large impact on the types of mudrocks found in ocean environments. For example, the Apalachicola River, which drains the subtropics of the southeastern United States, carries mud that is sixty to eighty percent kaolinite, whereas Mississippi mud is only ten to twenty percent kaolinite.
The mudrock cycle
We can imagine the beginning of a mudrock's life as sediment at the top of a mountain, uplifted by plate tectonics or erupted from a volcano. This sediment is exposed to rain, wind, and gravity, which batter and break apart the rock by weathering. The products of weathering, including particles ranging from clay and silt to pebbles and boulders, are transported to the basin below, where they can lithify into one of the many types of sedimentary mudstone.
Eventually, the mudrock may be buried kilometers below the surface, where pressure and temperature metamorphose the mudstone into gneiss. The gneiss will make its way to the surface once again, as country rock or as magma in a volcano, and the whole process will begin again.
Mudrocks form in various colors, including red, purple, brown, yellow, green, grey, and even black. Shades of grey are most common in mudrocks, and darker colors, through to black, come from organic carbon. Green mudrocks form in reducing conditions, where decomposing organic matter reduces ferric iron; they can also be found in marine environments, where pelagic, or free-floating, organisms settle out of the water and decompose in the mud. Red mudrocks form when iron within the mudrock is oxidized, and from the intensity of the red one can judge how fully the rock has oxidized.
Fossils are well preserved in mudrock formations because the fine-grained rock protects them from erosion, dissolution, and other destructive processes. Fossils are particularly important for recording past environments: with the aid of the type and abundance of fossils in a mudrock, paleontologists can determine the salinity, depth, temperature, and turbidity of the water, as well as sedimentation rates, for a specific area.
One of the most famous mudrock formations is the Burgess Shale in western Canada, which formed during the Cambrian. At this site, soft-bodied creatures were preserved, some of them whole, by the action of mud in a sea. Generally, solid skeletons are the only remnants of ancient life preserved; the Burgess Shale, however, includes hard body parts such as bones, skeletons, and teeth, as well as soft body parts such as muscles, gills, and digestive systems. The Burgess Shale is one of the most significant fossil locations on Earth, preserving innumerable specimens of 500-million-year-old species, and its preservation is due to the protection of mudrock.
Another noteworthy formation is the Morrison Formation. It covers 1.5 million square kilometers (about 580,000 square miles), stretching from Montana to New Mexico in the United States. It is considered one of the world's most significant dinosaur burial grounds, and its many fossils can be found in museums around the world. The site includes fossils of several dinosaur species, including Allosaurus, Diplodocus, Stegosaurus, and Brontosaurus, as well as lungfish, freshwater mollusks, ferns, and conifers. The deposit formed under a humid, tropical climate in which lakes, swamps, and rivers laid down mudrock, and that mudrock ultimately preserved countless specimens from the late Jurassic, roughly 150 million years ago.
Petroleum and natural gas
Mudrocks, especially black shale, are the source of, and the seal for, precious petroleum and natural gas reserves throughout the world. Because mudrocks and organic material both require quiet water conditions for deposition, mudrocks are the most likely source rocks for petroleum. Mudrocks also have low porosity and are nearly impermeable, so a mudrock that is not itself a black shale often serves as a seal over petroleum and natural gas reservoirs. In the case of petroleum found in a reservoir, the rock surrounding the petroleum is not the source rock; black shale is a source rock.
Why mudrocks are important
As noted before, mudrocks make up fifty percent of the Earth's sedimentary geological record. They are widespread on Earth, and important for various industries.
Metamorphosed shale can hold emerald and gold, and mudrocks can host ore metals such as lead and zinc. Mudrocks are important in the preservation of petroleum and natural gas, due to their low porosity, and are commonly used by engineers to inhibit harmful fluid leakage from landfills.
Sandstones and carbonates record high-energy events in our history, and they are much easier to study. Interbedded between the high-energy events are mudrock formations that have recorded quieter, normal conditions in our Earth's history. It is the quieter, normal events of our geologic history we don't yet understand. Sandstones provide the big tectonic picture and some indications of water depth; mudrocks record oxygen content, a generally richer fossil abundance and diversity, and a much more informative geochemistry.
| https://en.wikipedia.org/wiki/Mudrock |
4.09375 | By Peter McKenzie-Brown
The two most basic language skills, listening and speaking, sound exactly alike when we describe them as oral and aural skills. “Aural” language, of course, refers to language as we hear it. “Oral” language is what we say.
These two words are “homophones” – words spelled differently that sound alike. There is no good reason why they should be homophones, but they are. Perhaps that accident of spelling can serve as a reminder that, while these two skills cannot be separated, they need to be developed in different ways.
Teaching Basic Skills: According to a hoary adage, “We are given two ears and one mouth so we can listen twice as much as we talk.” This is a maxim to remember when we plan our lessons – especially when we are dealing with a classroom of new learners.
Logically, listening should be the first skill you teach. In practice, however, most teachers get their students talking on the first day of class, and many make speech the major focus of their lessons. They tend to downplay the skill of listening, as do most foreign language textbooks. Yet listening is probably the more important skill involved in foreign language learning, as it certainly is in the acquisition of one’s native tongue.
Stephen Krashen and other thinkers have stressed that we acquire language best by using it in communicative ways. He was also one of the first to stress that language acquisition and language learning are not the same. Language learning (in the sense of making conscious discoveries about grammar, for instance) involves different mental processes, and those processes play distinctly secondary roles to those we use when we acquire language naturally. Language develops, he says, through exposure to and use of “comprehensible input” – target language the learner can understand and assimilate. All of this is textbook Krashen.
One reasonable conclusion from these observations is that language learners should understand what they are listening to before they begin to speak. Especially at the initial phase of language acquisition, teachers should avoid oral practice to some degree. Instead, they should have their students concentrate on comprehending what they hear. This idea parallels the experience of young children, who spend almost two years in linguistic silence before they begin to speak.
To use listening-focused learning, a communicative language teacher needs to incorporate active listening into their classes. This is done with activities in which the learners demonstrate that they understand, and receive gentle correction when they err. More advanced students must be explicitly taught to recognize reduced language forms heard in colloquial speech – as in “Whaddaya say?” Also, of course, part of aural comprehension is learning to decipher nonverbal clues.
Pure listening is rarely a good strategy for sustained language acquisition. Even if students are still in their silent period – a common phase for beginners, in which they speak very little if at all – teachers should encourage active participation from them. This is the only way to confirm that they have understood. Participation can mean as little as a nod or a headshake, for example, or the words “yes” and “no” in English or their native language. Listening without speaking is important for foreign language learners, especially when their language learning has just begun, but at some level that listening should be participatory.
Listening activities do not always involve some other skill, but they generally do; the best classroom activities cross skill boundaries. Since the most typical pairing for a listening activity is to combine it with speech practice, a focus on listening can actually promote the effective development of speaking skills. To see how, let's turn to the activation of speech.
Focus on Conversation: Speaking activities best occur in classrooms in which learners feel comfortable and confident, free to take risks, and have plenty of opportunities to speak. While there are countless kinds of activities teachers use to develop speaking skills, they most commonly promote conversational speech. This, of course, requires the use of both listening and speaking skills.
Conversational language has four characteristics. It is interactive, in the sense that we talk back and forth in short bursts. Often we do not even use complete sentences – “nice day, eh?” Conversation also has narrow time limits: we have to listen and respond without the luxury of thinking much about what we want to say. Conversation is also repetitive, in the sense that we tend to use a relatively small vocabulary and a relatively small repertory of language structures. And finally, of course, it is error-prone. Because of time limits, we may use the wrong word, mispronounce something, or mangle structure. While we may hear the mistake and back up and correct ourselves, often we don’t.
Bearing in mind the earlier comments about listening, these characteristics of conversation illustrate an important difference between listening activities and speaking activities. Because listening is a learner’s primary source of comprehensible input, aural activities depend heavily on accuracy. To understand, learners must listen carefully, and their comprehension must be good. In many listening activities, we play a short recording of speech repeatedly until we think our learners understand it.
By contrast, learners shift heavily in the direction of fluency during conversation practice, which combines both listening and speaking skills. At this portion of the language class, the teacher kisses student accuracy goodbye. During speaking activities, the focus is on interactive, time-limited, repetitive and error-prone conversation. As is often the case in the language classroom, as we move from skill to skill, or from language study to language activation, we willingly compromise accuracy in the interest of fluency.
The How and Why of Language: Language originated with the two linguistic skills we have just reviewed – listening and speaking. But why? What is the purpose of language? And how did it evolve to play this role in our lives?
Whether we hear it or voice it, the purpose of language is to do the things that speech can do. In no way is it abstract. Like an axe, language is a tool with which we do things.
According to linguistic philosopher J.R. Searle, we use language to perform five kinds of “speech act”. These are commissive, declarative, directive, expressive and representative. Commissive speech commits the speaker to do something – for example, “I promise to bring it tomorrow,” or “Watch out or I will report you.” Declarations change the state of things – “I now pronounce you husband and wife,” or “You’re fired!” Directive speech gets the listener to do something – “Please come in,” “Watch out!” or “Why don’t you take your medicine?” Expressive language explains feelings and attitudes: “Those roses are beautiful,” or “I hate broccoli.” Finally, representative speech describes states or events – “Rice is an important Thai export,” or “The United States is at war again.” All of our speech seems to do one or more of these five things.
Language is such an important part of our lives that we use it to meet virtually all of our daily needs. Consider psychologist Abraham Maslow’s famous hierarchy of needs, which is often illustrated as a pyramid. In Maslow’s model, we can only move to a higher level of need after we have scrambled up the lower levels.
In his view, people have five kinds of need. Our most basic needs are physiological – food and water, for example. The next level up is the need for safety and security, which we achieve, for example, by dealing with emergencies. Tier 3 involves needs for love, affection and belongingness. The need for esteem – self-respect and respect from others – comes next, but the highest level in this hierarchy is the need for self-actualization. According to Maslow, in this last level “A musician must make music, an artist must paint, and a poet must write.” The point of this discussion is that we meet virtually all those needs through speech acts.
The gradual evolution of language has profoundly affected the nature of our species. As Steven Pinker observes,
Human practical intelligence may have evolved with language (which allows know-how to be shared at low cost) and with social cognition (which allows people to cooperate without being cheated), yielding a species that literally lives by the power of ideas.

It is impossible to overstate the value or complexity of language. It is perhaps the most fundamental feature of our lives. | http://languageinstinct.blogspot.com/2006/10/oral-and-aural-skills.html |
4 | Preemption Act, statute passed (1841) by the U.S. Congress in response to the demands of the Western states that squatters be allowed to preempt lands. Pioneers often settled on public lands before they could be surveyed and auctioned by the U.S. government. At first the squatter claims were not recognized, but in 1830 the first of a series of temporary preemption laws was passed by Congress. Opposition to preemption came from Eastern states, which saw any encouragement of western migration as a threat to their labor supply.
A permanent preemption act was passed only after the Eastern states had been placated by the principle of distribution (i.e., the proceeds of the government land sales would be distributed among the states according to population). Distribution was discarded in 1842, but the preemption principle survived. The act of 1841 permitted settlers to stake a claim of 160 acres (65 hectares) and after about 14 months of residence to purchase it from the government for as little as $1.25 an acre before it was offered for public sale. After the passage (1862) of the Homestead Act, the value of preemption for bona fide settlers declined, and the practice more and more became a tool for speculators. Congress repealed the Preemption Act in 1891.
| http://www.factmonster.com/encyclopedia/history/preemption-act.html |
4.21875 | Who Fought for the Union?
In this activity students examine sheet music and letters from draft rioters to examine Union attitudes about the military draft during the Civil War.
Students will analyze letters and printed sheet music to determine attitudes in the North about the draft during the Civil War.
Students will identify who fought for the North and how the draft affected the composition of the Union army during the Civil War.
Step 1: Ask students to consider the question: Who fought for the Union during the Civil War?
Step 2: Break students into small groups of 3-5. In their groups, have students read the historical background information on the New York City draft riots of 1863.
Step 3: Next ask students to read "A New York Rioter Explains His Opposition to the Draft" and answer the following questions:
What is the letter writer's point of view? What is his argument on behalf of those who rioted to protest the draft?
What argument does the New York Times make about the draft in response to the letter writer?
Can you determine from the documents what each writer thinks about the causes of the Civil War?
Step 4: Ask students to look closely at the details in the sheet music--both the cover art and the song lyrics--and answer the following questions:
What images appear on the cover of the sheet music?
How do the two men depicted differ from each other? In their hair? facial expressions?
What is written over each of the images? Is this a clue to which man is the "substitute"?
What do the song lyrics describe? Do you think these lyrics are meant to be satirical?
What might the audience for a song like this have been? Do you think that the song format influenced the way people thought about its message about the draft?
Step 5: Have students discuss within their groups what they learned from the letter and from the sheet music about attitudes toward the Union military draft during the Civil War. As a group, students should summarize who fought for the Union. As a whole class, lead a discussion comparing and contrasting the information in the sheet music and the information in the letter. Ask students what kind of information sheet music conveys that is different than a letter to the editor in a newspaper.
Despite the economic hardships that secession brought on in many northern cities, the outbreak of fighting galvanized northern workers. White workers, both native- and foreign-born, rushed to recruiting stations. Midwestern farmers and laborers, the backbone of the free soil movement, also enlisted in large numbers, making up nearly half the Union Army. Most northerners believed that Union military victory over the Confederacy would be quick and decisive. The North possessed a larger population (more than twice that in the South), a growing industrial base, and a better transportation network. The quick military victory was not to be, however, and Union soldiers (along with their Confederate counterparts) suffered tremendous hardships. For every soldier who died as a result of battle, three died of disease. Food was scarce, as were fresh uniforms and even shoes. Medical care was primitive.
In March, 1863, faced with inadequate numbers of volunteers and rising numbers of deserters, the U.S. Congress passed a draft law. The Conscription Act made all single men aged twenty to forty-five and married men up to thirty-five subject to a draft lottery. In addition, the act allowed drafted men to avoid conscription entirely by supplying someone to take their place or to pay the government a three hundred-dollar exemption fee. Not surprisingly, only the wealthy could afford to buy their way out of the draft. Workers deeply resented both the draft law's profound inequality and the recent expansion of the North's war aims to include the emancipation of the slaves who, they assumed, would join already free blacks as competitors for scarce jobs after the war ended. When the draft was implemented in the summer of 1863, rioting broke out in several northern cities, and the most widespread and devastating violence occurred in New York City.
Creator | American Social History Project/Center for Media and Learning
Rights | Copyright American Social History Project/Center for Media and Learning This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivs 3.0 Unported License.
Item Type | Teaching Activity
| http://herb.ashp.cuny.edu/items/show/1433 |
4.15625 | Magnetic confinement fusion
Magnetic confinement fusion is an approach to generating fusion power that uses magnetic fields to confine the hot fusion fuel in the form of a plasma. Magnetic confinement is one of two major branches of fusion energy research, the other being inertial confinement fusion. The magnetic approach is more highly developed and is usually considered more promising for energy production. Construction of ITER, a 500-MW (heat) fusion plant using the tokamak magnetic confinement geometry, began in France in 2007.
Fusion reactions combine light atomic nuclei such as hydrogen to form heavier ones such as helium. In order to overcome the electrostatic repulsion between them, the nuclei must have a temperature of several tens of millions of degrees, under which conditions they no longer form neutral atoms but exist in the plasma state. In addition, sufficient density and energy confinement are required, as specified by the Lawson criterion.
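For deuterium–tritium fuel, the Lawson condition is often quoted as a minimum “triple product” of density, temperature, and energy confinement time. A commonly cited illustrative form (the exact threshold depends on temperature and profile assumptions) is:

$$ n \, T \, \tau_E \gtrsim 3 \times 10^{21} \ \mathrm{keV\,s\,m^{-3}}, $$

where $n$ is the plasma density, $T$ its temperature, and $\tau_E$ the energy confinement time.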
Magnetic confinement fusion attempts to create the conditions needed for fusion energy production by using the electrical conductivity of the plasma to contain it with magnetic fields. The basic concept can be thought of in a fluid picture as a balance between magnetic pressure and plasma pressure, or in terms of individual particles spiraling along magnetic field lines.
The pressure achievable is usually on the order of one bar with a confinement time up to a few seconds. In contrast, inertial confinement has a much higher pressure but a much lower confinement time. Most magnetic confinement schemes also have the advantage of being more or less steady state, as opposed to the inherently pulsed operation of inertial confinement.
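To make the pressure-balance picture concrete, the sketch below (Python) estimates the field strength needed to hold a plasma at a given pressure. The 1 bar figure comes from the text; the 5% ratio of plasma to magnetic pressure (beta) is an assumed, tokamak-like illustrative value:

```python
import math

MU0 = 4 * math.pi * 1e-7   # vacuum permeability, T*m/A

def field_for_pressure(p_plasma, beta):
    """Field B such that the plasma pressure equals a fraction
    `beta` of the magnetic pressure B**2 / (2 * MU0)."""
    return math.sqrt(2 * MU0 * p_plasma / beta)

p = 1e5                      # 1 bar of plasma pressure, in pascals
for beta in (1.0, 0.05):     # ideal balance, then an assumed 5% beta
    print(f"beta = {beta:4.2f}: B = {field_for_pressure(p, beta):.1f} T")
# beta = 1.00: B = 0.5 T
# beta = 0.05: B = 2.2 T
```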
The simplest magnetic configuration is a solenoid, a long cylinder wound with magnetic coils producing a field with the lines of force running parallel to the axis of the cylinder. Such a field would hinder ions and electrons from being lost radially, but not from being lost from the ends of the solenoid.
There are two approaches to solving this problem. One is to try to stop up the ends with a magnetic mirror, the other is to eliminate the ends altogether by bending the field lines around to close on themselves. A simple toroidal field, however, provides poor confinement because the radial gradient of the field strength results in a drift in the direction of the axis.
A major area of research in the early years of fusion energy research was the magnetic mirror. Most early mirror devices attempted to confine plasma near the focus of a non-planar magnetic field, or to be more precise, two such mirrors located close to each other and oriented at right angles. In order to escape the confinement area, nuclei had to enter a small annular area near each magnet. It was known that nuclei would escape through this area, but by adding and heating fuel continually it was felt this could be overcome. As development of mirror systems progressed, additional sets of magnets were added to either side, meaning that the nuclei had to escape through two such areas before leaving the reaction area entirely. A highly developed form, the Mirror Fusion Test Facility (MFTF), used two mirrors at either end of a solenoid to increase the internal volume of the reaction area.
An early attempt to build a magnetic confinement system was the stellarator, introduced by Lyman Spitzer in 1951. Essentially the stellarator consists of a torus that has been cut in half and then attached back together with straight "crossover" sections to form a figure-8. This has the effect of propagating the nuclei from the inside to outside as it orbits the device, thereby canceling out the drift across the axis, at least if the nuclei orbit fast enough. Newer versions of the stellarator design have replaced the "mechanical" drift cancellation with additional magnets that "wind" the field lines into a helix to cause the same effect.
In 1968 Russian research on the toroidal tokamak was first presented in public, with results that far outstripped existing efforts from any competing design, magnetic or not. Since then the majority of effort in magnetic confinement has been based on the tokamak principle. In the tokamak a current is periodically driven through the plasma itself, creating a field "around" the torus that combines with the toroidal field to produce a winding field in some ways similar to that in a modern stellarator, at least in that nuclei move from the inside to the outside of the device as they flow around it.
In 1991, START was built at Culham, UK, as the first purpose-built spherical tokamak. This was essentially a spheromak with an inserted central rod. START produced impressive results, with β values of approximately 40% – three times those produced by standard tokamaks at the time. The concept has been scaled up to higher plasma currents and larger sizes, with the experiments NSTX (US), MAST (UK) and Globus-M (Russia) currently running. Spherical tokamaks have improved stability properties compared to conventional tokamaks and as such the area is receiving considerable experimental attention. However, spherical tokamaks to date have operated at low toroidal field and as such are impractical for fusion neutron devices.
Compact toroids, e.g. the spheromak and the field-reversed configuration, attempt to combine the good confinement of closed-magnetic-surface configurations with the simplicity of machines without a central core. An early experiment of this type in the 1970s was Trisops, which fired two theta-pinch rings toward each other.
Magnetic fusion energy
All of these devices have faced considerable problems being scaled up and in their approach toward the Lawson criterion. One researcher has described the magnetic confinement problem in simple terms, likening it to squeezing a balloon – the air will always attempt to "pop out" somewhere else. Turbulence in the plasma has proven to be a major problem, causing the plasma to escape the confinement area and potentially touch the walls of the container. If this happens, a process known as "sputtering" occurs: high-mass particles from the container (often steel and other metals) mix into the fusion fuel, lowering its temperature.
In 1997, scientists at the Joint European Torus (JET) facilities in the UK produced 16 megawatts of fusion power. Scientists can now exercise a measure of control over plasma turbulence and resultant energy leakage, long considered an unavoidable and intractable feature of plasmas. There is increased optimism that the plasma pressure above which the plasma disassembles can now be made large enough to sustain a fusion reaction rate acceptable for a power plant. Electromagnetic waves can be injected and steered to manipulate the paths of plasma particles and then to produce the large electrical currents necessary to produce the magnetic fields to confine the plasma. These and other control capabilities have come from advances in basic understanding of plasma science in such areas as plasma turbulence, plasma macroscopic stability, and plasma wave propagation. Much of this progress has been achieved with a particular emphasis on the tokamak.
- Gas torus
- Magnetized Liner Inertial Fusion
- List of plasma (physics) articles
- JET chronology
- EFDA-JET web site
- Culham Centre for Fusion Energy, CCFE
- IAEA's information about JET
- Physics of magnetically confined plasmas
- General Atomics
- Fusion Wiki (specialist information) | https://en.wikipedia.org/wiki/Magnetic_confinement_fusion |
4.21875 | Compounds in which the atoms have too few valence electrons to complete their octets (fewer than 8), or which have 8 valence electrons together with empty 'd' orbitals available for forming further covalent bonds, are called electron deficient compounds.
Compounds whose valence shells are not completely occupied – often because highly electronegative ligands strongly polarize the bonds – can act as Lewis acids and can also be called electron deficient compounds.
Yes: BCl3 (boron trichloride) is an electron deficient compound, but SiCl4 (silicon tetrachloride) is not.
BCl3 – In boron trichloride, boron has 3 valence electrons. After forming covalent bonds with three chlorine atoms, boron has 6 valence electrons around it – still 2 electrons short of the octet configuration.
SiCl4 – In silicon tetrachloride, silicon has 4 valence electrons. After forming covalent bonds with four chlorine atoms, silicon has 8 valence electrons around it – a complete octet.
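A minimal sketch of the electron bookkeeping used above (Python). It assumes each single covalent bond adds one shared electron from the ligand to the central atom's count, which is all these two examples require:

```python
VALENCE = {"B": 3, "Si": 4}   # valence electrons of the central atoms

def electrons_around(atom, n_single_bonds):
    """Valence electrons around the central atom after forming
    the given number of single covalent bonds."""
    return VALENCE[atom] + n_single_bonds

for atom, bonds in (("B", 3), ("Si", 4)):   # BCl3 and SiCl4
    total = electrons_around(atom, bonds)
    status = "electron deficient" if total < 8 else "complete octet"
    print(f"{atom} with {bonds} bonds: {total} electrons -> {status}")
# B with 3 bonds: 6 electrons -> electron deficient
# Si with 4 bonds: 8 electrons -> complete octet
```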
Because silicon's octet is complete, SiCl4 is not an electron deficient compound. (Silicon does, in fact, have vacant 3d orbitals, which is why it can expand its covalency beyond 4 in species such as the SiF62− ion.) | http://www.meritnation.com/ask-answer/question/what-are-electron-deficient-compounds-are-bcl3-and-sicl4/chemical-bonding-and-molecular-structure/3968192 |
4.03125 | 1. What is DNA?
DNA is the storehouse of genetic information for every known organism, with the exception of a few viruses. It’s a long, thin molecule – picture two strands that curve around each other, forming a double helix. Each strand spells out the genetic code as a chain of four chemical letters called bases: adenine (A), thymine (T), cytosine (C) and guanine (G).
Bases facing each other across two strands are always paired as follows: A with T and C with G. When cells duplicate their DNA, the two strands of the helix are “unzipped” and enzymes use them as a template to create a new version of the opposite strand.
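Because the pairing rule is fixed, one strand fully determines its partner. A minimal sketch (Python; strand direction and antiparallelism are ignored for simplicity):

```python
PAIR = {"A": "T", "T": "A", "C": "G", "G": "C"}

def complement(strand):
    """Return the base-paired partner of a DNA strand."""
    return "".join(PAIR[base] for base in strand)

print(complement("ATCG"))   # -> TAGC
```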
2. What is a gene?
This is not a simple question.
For decades, biologists like the late Francis Crick – co-discoverer of DNA’s structure – confidently proclaimed that genes were regions of DNA that served as blueprints for proteins through a simple process: DNA is copied into the related chemical RNA, which then is whisked away to the cell’s protein manufacturing facility. Crick famously dubbed this definition of the gene the “central dogma of molecular biology”.
A gene's protein sequence is spelled out as a series of three-letter "words" – codons – composed of the four DNA bases. The codon "GGG", for instance, encodes the amino acid glycine.
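The mapping from codons to amino acids is essentially a lookup table. Here is a tiny excerpt of the standard genetic code in code form (Python; only four of the 64 codons are included, and the sequence is a made-up example):

```python
CODON_TABLE = {        # small excerpt of the standard genetic code
    "GGG": "glycine",
    "GGA": "glycine",
    "GCA": "alanine",
    "TGG": "tryptophan",
}

gene = "GGGGCATGG"     # hypothetical coding sequence
codons = [gene[i:i + 3] for i in range(0, len(gene), 3)]
print([CODON_TABLE[c] for c in codons])
# -> ['glycine', 'alanine', 'tryptophan']
```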
Regions of DNA that do not produce proteins were therefore generally dismissed as functionless “junk DNA”. But it turns out that Crick’s dogma may have been a bit too, well, dogmatic.
Insights from the sequencing of the human genome have led some experts to argue that the purpose of many human genes is not to encode protein, but to spin out RNAs that serve many functions beyond that of middleman between DNA and protein. And the evolutionary conservation of some types of junk DNA suggests they serve important, if unknown, functions.
3. How do genes create organisms?
Genes are not always active. Sometimes they are busy churning out their encoded protein or RNA, sometimes they are shut off completely. And their activity can be tuned at different levels. During the development of a complex organism from a single cell, thousands of genes flash on and off in complicated patterns.
One of the most important jobs genes have is to create proteins called transcription factors, which coordinate the activities of other genes. As an eye or a finger is created, for example, transcription factors ensure that a characteristic series of genes get activated in surrounding tissue to build that structure.
Proteins and structures they create can also serve many other functions: generating energy, creating new molecules or serving directly as the brick and mortar of structures like muscle. Genes also shape organisms by driving the replication, movement, activity and death of cells.
4. What can go wrong with the process?
The genetic code is so precise that even a change in a single DNA base can have profound effects. The mutation which causes the disease sickle cell anaemia, for example, was tracked down to the substitution of a T for an A in the gene for the protein haemoglobin, which carries oxygen in red blood cells. As a result, a single protein building block called an amino acid is changed, resulting in a crippled protein.
Sometimes the problem is not the gene sequence, but the location or number of genes. Whole regions of chromosomes can be missing or duplicated, resulting in missing genes or inappropriate activity. Cancer cells, for instance, often have the wrong number of entire chromosomes.
5. How are genes inherited?
Our genes and the 23 pairs of chromosomes they reside on are inherited, with one of each pair coming from each parent. This means that sperm and eggs must contain half the number of chromosomes of any other cell in the body. Otherwise when sperm and egg fused to form an embryo it would contain twice the number of chromosomes needed.
Sperm and eggs receive their half-portion of genes through a chromosomal choreography called meiosis. First the 23 parental pairs of chromosomes match up at the centre of the sex cell. When the cell divides, each daughter receives only one half of each pair. Since this process is random, it generates a staggering 70 trillion possible combinations of chromosomes in the offspring.
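The 70 trillion figure follows directly from the random assortment of 23 chromosome pairs in each of two parents; the arithmetic is worked out below (Python):

```python
per_parent = 2 ** 23            # one chromosome chosen from each of 23 pairs
combinations = per_parent ** 2  # independent choices in both parents

print(f"{per_parent:,} combinations per parent")
print(f"{combinations:,} combinations per offspring")
# 8,388,608 combinations per parent
# 70,368,744,177,664 combinations per offspring (~70 trillion)
```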
In fact, the true degree of possible variation is higher because the maternal and paternal chromosomes exchange DNA when they pair, creating new gene combinations within the chromosomes. So rest assured of your genetic uniqueness – unless you are an identical twin.
6. What other factors control how our genes work?
It seems reasonable that if two genes with the same sequence are in the same cell, they should act the same way. But that is not always true. So-called epigenetic factors can alter how a gene works regardless of its DNA sequence.
One well studied example is parental imprinting. Certain genes are marked with chemical tags via a process called methylation while they are still in a sperm or egg, meaning that only the maternal or paternal copy is active in the offspring. As a result, certain traits are inherited exclusively from one side of the family.
There is also some evidence that environment can influence epigenetic factors. For example, Dutch women who were pregnant during the famines of the World War II gave birth to small babies. But, surprisingly, the next generation also spawned small babies even though they ate well, as if they “inherited” their mother’s experience.
7. Do genes control everything about an organism, or is environment important?
The debate over the relative importance of nature and nurture would fill several encyclopedias, but modern genetics predicts that both should play a role. That should come as no surprise to anyone who views genes as a piece of cellular machinery. Dangerous chemicals, such as cigarette smoke, can jam that machinery or interfere in its workings.
Equally, a therapeutic environment can compensate for a faulty gene. For example, babies who are born with the disease phenylketonuria (PKU) lack an enzyme that metabolises the amino acid phenylalanine. It therefore builds up to toxic levels causing mental retardation. But babies are now screened for the defect at birth and those with two copies of the defective gene are given special diets low in phenylalanine. As a result, they develop normally.
8. How genetically similar are we to primates and other organisms?
Chimpanzee genes differ, on average, by roughly just 1% from human genes. Other apes’ genes are 95% to 98% identical to ours, too. Rodent genes are 88% identical and chickens come in at 75% identical.
Once you leave the animal kingdom, wholesale comparisons between human genes and those of other species becomes trickier. About one-third of the genome of the fruit fly Drosophila melanogaster, for example, contains genes that are only shared by other arthropods, and one-quarter of human genes are shared only by vertebrates.
The function of some genes in flies, plants or worms appear close enough to their human counterparts that these animals can serve as models to study human biology and disease.
9. Most traits are controlled by a complex array of genes. But which human features depend entirely on single genes?
You already know some of your single gene traits like the back of your hand. Specific versions of different single genes cause hair growth on the middle segments of the fingers, make the top of the little finger bend dramatically toward the ring finger, and determine whether the left thumb crosses over the right – or vice versa – when the fingers are interlocked.
The result of other single gene traits are as plain as the eyes, ears, and hair on your face. People with blue eyes, non-dangly ear lobes or a straight hairline have inherited specific gene varieties. The ability to roll your tongue into a tube and also to taste certain bitter chemicals is also conferred by certain types of single genes.
Defective versions of single genes can also cause disease such as cystic fibrosis, sickle cell anaemia and Huntington’s disease.
More on these topics: | https://www.newscientist.com/article/dn9965-faq-genetics/ |
4.09375 | Phosphate in Blood
A phosphate test measures the amount of phosphate in a blood sample. Phosphate is a charged particle (ion) that contains the mineral phosphorus. The body needs phosphorus to build and repair bones and teeth, help nerves function, and make muscles contract. Most (about 85%) of the phosphorus contained in phosphate is found in bones. The rest of it is stored in tissues throughout the body.
The kidneys help control the amount of phosphate in the blood. Extra phosphate is filtered by the kidneys and passes out of the body in the urine. A high level of phosphate in the blood is usually caused by a kidney problem.
The amount of phosphate in the blood affects the level of calcium in the blood. Calcium and phosphate in the body react in opposite ways: as blood calcium levels rise, phosphate levels fall. A hormone called parathyroid hormone (PTH) regulates the levels of calcium and phosphorus in your blood. When the phosphorus level is measured, a vitamin D level, and sometimes a PTH level, is measured at the same time. Vitamin D is needed for your body to take in phosphate.
The relation between calcium and phosphate may be disrupted by some diseases or infections. For this reason, phosphate and calcium levels are usually measured at the same time.
Why It Is Done
A test to measure phosphate in blood may be done to:
How To Prepare
Many medicines can change the results of this test. Be sure to tell your doctor about all the nonprescription and prescription medicines you take, including vitamin D supplements.
Talk to your doctor about any concerns you have regarding the need for the test, its risks, how it will be done, or what the results will mean. To help you understand the importance of this test, fill out the medical test information form.
How It Is Done
The health professional taking a sample of your blood will:
In a newborn baby, the blood sample is usually taken from the heel (heel stick).
For a heel stick blood sample, several drops of blood are collected from the heel of your baby. The skin of the heel is first cleaned with alcohol and then punctured with a small sterile lancet. Several drops of blood are collected in a small tube. When enough blood has been collected, a gauze pad or cotton ball is placed over the puncture site. Pressure is maintained on the puncture site briefly, and then a small bandage is usually put on.
How It Feels
You may feel nothing at all from the needle puncture, or you may feel a brief sting or pinch as the needle goes through the skin. Some people feel a stinging pain while the needle is in the vein. But many people do not feel any pain (or have only minor discomfort) once the needle is positioned in the vein. The amount of pain you feel depends on the skill of the health professional drawing your blood, the condition of your veins, and your sensitivity to pain.
A brief pain, like a sting or a pinch, is usually felt when the lancet punctures the skin. Your baby may feel a little discomfort with the skin puncture.
There is very little risk of complications from having blood drawn from a vein. You may develop a small bruise at the puncture site. You can reduce the risk of bruising by keeping pressure on the site for several minutes after the needle is withdrawn.
In rare cases, the vein may become inflamed after the blood sample is taken. This condition is called phlebitis and is usually treated by applying a warm compress several times daily.
Continued bleeding can be a problem for people with bleeding disorders. Aspirin, warfarin (Coumadin), and other blood-thinning medicines can also make bleeding more likely. If you have bleeding or clotting problems, or if you take blood-thinning medicine, tell the health professional before your blood is drawn.
There is very little risk of a serious problem developing from a heel stick. A small bruise may develop at the puncture site.
Continued bleeding can be a problem for babies with bleeding disorders. There is a possibility that a bleeding problem may be discovered while collecting the blood for this test.
Phosphate levels are usually higher in children than in adults because of the active bone growth occurring in children.
Results are usually available in 1 to 2 hours.
The normal values listed here—called a reference range—are just a guide. These ranges vary from lab to lab, and your lab may have a different range for what's normal. Your lab report should contain the range your lab uses. Also, your doctor will evaluate your results based on your health and other factors. This means that a value that falls outside the normal values listed here may still be normal for you or your lab.
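As an illustration of how a result is compared against a reference range, here is a minimal sketch (Python). The numeric range is a commonly cited adult phosphate range and is purely illustrative – the range printed on your own lab report is the one that applies:

```python
ADULT_RANGE_MG_DL = (2.5, 4.5)   # illustrative adult range, mg/dL

def flag(value_mg_dl, low_high=ADULT_RANGE_MG_DL):
    """Classify a phosphate result against a reference range."""
    low, high = low_high
    if value_mg_dl < low:
        return "low"
    if value_mg_dl > high:
        return "high"
    return "within range"

print(flag(3.8))   # -> within range
```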
High phosphate levels may be caused by:
Low phosphate levels may be caused by:
What Affects the Test
Results from a blood phosphate test may be affected by:
What To Think About
| http://www.emedicinehealth.com/script/main/art.asp?articlekey=128988&ref=130132 |
4.3125 | Using music to enhance student learning : a practical guide for elementary classroom teachers
- Jana R. Fallin, Mollie G. Tower.
- New York : Routledge, 2011.
- Physical description
- xvi, 290 p. : ill. ; 28 cm + 1 sound disc (4 3/4 in.) + 1 song book (47 p. : music ; 23 cm)
Education Library (Cubberley)
- Library has: 1 v. + 1 sound disc + 1 song book
MT1 .F15 2011
- I. Getting Started 1. Organizing for Successful Teaching 2. A Framework for Teaching and Learning 3. INSIDE the Music: The Basic Elements of Music II. Somos Musicos: Doing What Musicians Do 4. Listening 5. Performing 6. Creating III. Integrating Music Into the Curriculum 7. Using Music to Enhance Learning in Language Arts 8. Using Music to Enhance Learning in Science 9. Using Music to Enhance Learning in Math 10. Using Music to Enhance Learning in Social Studies 11. Relating Music to the Other Arts 12. Favorite Teaching Tips.
- (source: Nielsen Book Data)
- Publisher's Summary
- Integrating musical activities in the elementary school classroom can assist in effectively teaching and engaging students in Language Arts, Science, Math, and Social Studies, while also boosting mental, emotional and social development. However, many elementary education majors fear they lack the needed musical skills to use music successfully. Future elementary school teachers need usable, practical musical strategies to easily infuse into their curriculum. Written for both current and future teachers with little or no previous experience in music, Using Music to Enhance Student Learning offers strategies that are not heavily dependent on musical skills. While most textbooks are devoted to teaching music theory skills, this textbook is dedicated to the pedagogical aspect of music. The ultimate goal is for elementary school children to leave the classroom with an introductory appreciation of music in a joyful, creative environment, and perhaps wanting to broaden their understanding. SPECIAL FEATURES: Listening Maps help listeners focus on music selections through a clear visual representation of sound. Group Activities reinforce the social aspects of music-making, as well as the benefits of collaborative teaching and learning. Thorough integration of music establishes that music is essential in a child's development, and that the integration of music will significantly enhance all other subjects/activities in the classroom. Learning Aids, including Tantalizing Tidbits of Research, Teaching Tips, Thinking It Through activities, Suggestions for Further Study, and a list of Recommended Student Books, provide useful resources at the end of each chapter. THE USING MUSIC PACKAGE: There are several special resources for both students and teachers: an audio CD with listening selections from the Baroque, Classical, Romantic and Contemporary periods; Get America Singing! Again! Volume 1, which contains 43 songs that represent America's varied music heritage of folksongs, traditional songs and patriotic songs; the appendices, with a songbook emphasizing Hispanic folksongs, a recorder music songbook, and samples of key assignments and lists of music-related books for children and for teachers; and a companion website with links to education resources and instructors' resources, including sample syllabi, quizzes, student handouts, and assignment rubrics.
(source: Nielsen Book Data)
- Publication date
- With: Get America singing --again / a project of the Music Educators National Conference . Hal Leonard, c1996.
- 9780415878234 (set)
- 0415878233 (set)
- 9780415894739 (spiral : pbk.)
- 0415894735 (spiral : pbk.)
- 9780203838617 (ebk.)
- 0203838610 (ebk.)
- 9780415887960 (CD)
- 0415887968 (CD) | https://searchworks.stanford.edu/view/9078938 |
4.03125 | For schools
14 December 2015
Many school communities are already aware of the important links between food, health and learning and are taking steps to improve their food and nutrition environments. Making healthy foods and drinks readily available within the school environment will encourage students to make healthy choices and will significantly contribute to improved nutrition in children and young people. Consuming healthy foods and drinks every day not only improves students’ overall health but can also improve their learning and behaviour.
Role of school canteen or school food service
The school canteen plays an important role. It enables children and young people to act on the healthy eating messages learned in the classroom by selecting from food and drink choices that are healthy, look and taste good, and are affordable – a great way to encourage healthy eating habits. The canteen is one of the best places to role model healthy eating habits.
Why do we need a system to classify foods and drinks?
The 2002 National Children’s Nutrition Survey of school children 5–14 years old (Ministry of Health 2003) highlighted the importance of the school environment. The survey found that 32 percent of daily energy intake was consumed by the children during school hours. Approximately half of the schoolchildren surveyed bought some of the food they consumed from the school canteen or tuck shop, with 5 percent of children buying most of their food there.
Only 60 percent of the schoolchildren surveyed ate the recommended three or more servings of vegetables, and 40 percent ate the recommended two or more servings of fruit each day. Good nutrition and healthy eating practices in childhood are important in shaping lifelong behaviours as well as affecting overall health and wellbeing.
Food and nutrition guidelines
The Food and Beverage Classification System is based on the Ministry of Health’s background papers Food and Nutrition Guidelines for Healthy Children Aged 2–12 Years (1997) and Food and Nutrition Guidelines for Healthy Adolescents (1998). A range of food and nutrition pamphlets is available to order or download from www.healthed.govt.nz including the Ministry of Health brochures Eating for Healthy Children Aged 2 to 12 (revised 2005; reference code 1302) and Eating for Healthy Teenagers (revised 2006;reference code 1230).
Eating for healthy children and young people
- Eat a variety of foods from the four food groups each day.
- Eat enough for growth and physical activity.
- Choose foods low in fat, sugar and salt.
- Choose snacks well.
- Drink plenty every day.
- Avoid alcohol. | http://www.fuelled4life.org.nz/for-schools |
4.6875 | In Plate Tectonic Theory, the lithosphere is broken into tectonic plates, which undergo some large scale motions. The boundary regions between plates are aptly called plate boundaries. Based upon their motions with respect to one another, these plate boundaries are of three kinds: divergent, convergent, and transform.
Divergent boundaries are those that move away from one another. When they separate, they form what is known as a rift. As the gap between the two plates widens, the underlying layer may be soft enough for magma to push its way upward. This upward push can result in the formation of volcanic islands. Magma that succeeds in breaking through erupts as lava, which eventually cools and forms part of the ocean floor.
Some formations due to divergent plate boundaries are the Mid-Atlantic Ridge and the Gakkel Ridge. On land, you have Lake Baikal in Siberia and Lake Tanganyika in East Africa.
Convergent boundaries are those that move towards one another. When they collide, subduction usually takes place. That is, the denser plate gets subducted or goes underneath the less dense one. Sometimes, the plate boundaries also experience buckling. Convergent boundaries are responsible for producing the deepest and tallest structures on Earth.
Among those that have formed due to convergent plate boundaries are K2 and Mount Everest, the tallest peaks in the world. They formed when the Indian plate got subducted underneath the Eurasian plate. Another extreme formation due to the convergent boundary is the Mariana Trench, the deepest region on Earth.
Transform boundaries are those that slide alongside one another. Lest you imagine a slippery, sliding motion, take note that the surfaces involved are exposed to huge amounts of stress and strain and are momentarily held in place. As a result, when the two plates finally succeed in moving with respect to one another, huge amounts of energy are released. This causes earthquakes.
The San Andreas fault in North America is perhaps the most popular transform boundary. Transform boundary is also known as transform fault or conservative plate boundary.
Movements of the plates are usually just a few centimeters per year. However, due to the huge masses and forces involved, they typically result in earthquakes and volcanic eruptions. And if the interactions between plate boundaries amount to only a few centimeters per year, you can imagine the great expanse of time it took for the land formations we see today to come into being.
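A quick back-of-the-envelope calculation (Python) shows how a few centimeters per year adds up over geologic time; the 5 cm/year rate is an assumed, typical value:

```python
rate_cm_per_year = 5        # assumed typical plate speed
years = 100_000_000         # 100 million years

km = rate_cm_per_year * years / 100 / 1000    # cm -> m -> km
print(f"{km:,.0f} km of total displacement")  # -> 5,000 km
```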
| http://www.universetoday.com/39760/plate-boundaries/ |
4 | – In order to get many chemicals to function the way we want them to, we need them to change. Example: no one would eat a raw hamburger; by applying heat to the ground beef, we turn it into its finished product, a cooked hamburger.
– Physical changes involve a change in state. Do you remember what the three states of matter are? Solid, liquid, and gas.
– In a physical change, the chemical substance DOES NOT become a new substance.
– Example: water can change state from being a solid (ice) to a liquid (water) to a gas (water vapour).
– Physical changes can always be reversed.
– There are six changes of state (which are physical changes)
Melting – This turns a solid into a liquid. Water melts at 0°C.
Freezing – This is the opposite process of melting. This turns a liquid into a solid. Water freezes at 0°C.
Condensation – This turns a gas into a liquid. Water condenses at 100°C.
Evaporation – This is the opposite process of condensation. This turns a liquid into a gas. Water evaporates at 100°C.
Sublimation – This is a much rarer physical change that results from a substance going through a massive temperature change. This process turns a solid into a gas. One example of this is when dry ice (frozen carbon dioxide, a solid) is exposed to room temperature. The temperature change is so large that the solid skips the liquid phase and moves right into being a gas.
Deposition – This is another rare physical change. It is the opposite of sublimation: a gas changes directly into a solid.
– A chemical change results in at least one new substance being created. The old substance becomes a completely different substance.
– Chemical changes are very difficult to reverse.
– The chemical change you are most familiar with is called “combustion,” in which a substance reacts rapidly with oxygen and gives off heat. Examples of heat-driven chemical changes include lighting a match, setting off fireworks, and baking a cake. As you can imagine, it is impossible to reverse these reactions. Once you cook a hamburger, you cannot uncook it and start off with fresh ground beef again.
– Another chemical change you are familiar with is rusting. Rusting occurs when metals such as iron react with the oxygen gas in the air. The iron and the oxygen combine to form a new substance called iron oxide, or rust.
– Chemical changes are very hard to observe as they happen. Physical changes are easy to observe: you can watch an ice cube melt and see how the solid is turning into a liquid. When rust forms (a chemical change), you cannot see the individual molecules changing into a new substance; you can only see the final product, which is rust.
| http://schoolworkhelper.net/physical-and-chemical-changes/ |
4.3125 | This reading comes from the resource Facing History and Ourselves: Holocaust and Human Behavior.
Some sociologists study the effects of the idea of "race" on human behavior. They also explore the impact of ethnicity. An ethnic group is a distinctive group of people within a country. Members share a cultural heritage. Ethnicity can be the basis for feelings of pride and solidarity. But, like race, it can also be the basis for prejudice and discrimination.
The word prejudice comes from the word pre-judge. We pre-judge when we have an opinion about a person because of a group to which that individual belongs. A prejudice has the following characteristics.
- It is based on real or imagined differences between groups.
- It attaches values to those differences in ways that benefit the dominant group at the expense of minorities.
- It is generalized to all members of a target group.
Discrimination occurs when prejudices are translated into action. For example, a person who says that all Mexicans are lazy is guilty of prejudice, but one who refuses to hire a Mexican is guilty of discrimination. Not all prejudices result in discrimination. Some are positive. But, whether positive or negative, prejudices have a similar effect - they reduce individuals to categories or stereotypes. A stereotype is a judgment about an individual based on the real or imagined characteristics of a group. Joseph H. Suina, a professor of education and a member of the Cochiti Pueblo, recalls the effects stereotyping had on his behavior in the Marines.
From the moment my comrades in the military discovered I was an Indian, I was treated differently. My name disappeared. I was no longer Suina, Joseph, or Joe. Suddenly, I was Chief, Indian, or Tonto. Occasionally, I was referred to as Geronimo, Crazy Horse or some other well-known warrior from the past. It was almost always with an affection that develops in a family, but clearly, I was seen in the light of stereotypes that my fellow Marines from around the country had about Native Americans.
Natives were few in the Marine Corps. Occasionally, I’d run across one from another battalion. Sure enough, just like me, each of them was “Chief” or “Indian.” Machismo is very important in the Corps and names such as Chief and Crazy Horse were affirmations of very desirable qualities for those entering combat situations. Good warriors, good fighting men, we were to be skilled in reading the land, notable for our physical prowess, renowned for our bravery. In addition, we were to drink to the point of total inebriation or to be in the midst of a barroom brawl before the night was over. Never permitted to assume leadership, but always in the role of supportive and faithful companion, just like the Lone Ranger’s Tonto.
Personally, I was anything but combatant, and my experiences with alcohol had been limited to two or three beers prior to my enlistment. Never in my wildest dreams had I imagined that I would be accorded the characteristics of a noble and reckless warrior. Since these traits were held in such high esteem, I enjoyed the status and acceptance they afforded me among the men. My own platoon commander singled me out to compete in a rope-climbing event at a regimental field meet. After I easily won that contest (my Pueblo life had included a great deal of wood chopping), my stature as chief increased.
I actually began to believe that I had those qualities and started behaving in accord with the stereotypes. Later during my two tours of duty in Vietnam, I played out my expected role quite well. I went on twice as many search and destroy missions as others; I took “the point” more often than anyone else. After all, couldn’t I hear, see, smell, and react to signs of the enemy better than any of my comrades? On shore leave, I learned to drink with the best of them and always managed to find trouble.
Almost a full year beyond my four years of enlistment, I was recovered from my second set of wounds and finally discharged. I had earned two purple hearts, a bronze star, the Gallantry Cross (Vietnam’s highest military award at the time), and numerous other combat expedition medals. I also had, on my record, time in jails in Japan, the Philippines, and Mexico.
Over twenty years later, Jeanne Park, a student at Stuyvesant High School in New York City, had a similar experience with stereotypes.
Who am I?
For Asian-American students, the answer is a diligent, hardworking and intelligent young person. But living up to this reputation has secretly haunted me.
The labeling starts in elementary school. It’s not uncommon for a teacher to remark, “You’re Asian, you’re supposed to do well in math.” The underlying message is, “You’re Asian and you’re supposed to be smarter.”
Not to say being labeled intelligent isn’t flattering, because it is, or not to deny that basking in the limelight of being top of my class isn’t ego-boosting, because frankly it is. But at a certain point, the pressure became crushing. I felt as if doing poorly on my next spelling quiz would stain the exalted reputation of all Asian students forever.
So I continued to be an academic overachiever, as were my friends. By junior high school I started to believe I was indeed smarter. I became condescending toward non-Asians. I was a bigot; all my friends were Asians. The thought of intermingling occurred rarely if ever.
My elitist opinion of Asian students changed, however, in high school. As a student at what is considered one of the nation’s most competitive science and math schools, I found that being on top is no longer an easy feat.
I quickly learned that Asian students were not smarter. How could I ever have believed such a thing? All around me are intelligent, ambitious people who are not only Asian but white, black and Hispanic.
Superiority complexes aside, the problem of social segregation still exists in the schools. With few exceptions, each race socializes only with its “own kind.”
Students see one another in the classroom, but outside the classroom there remains distinct segregation.
Racist lingo abounds. An Asian student who socializes only with other Asians is believed to be an Asian Supremacist or, at the very least, arrogant and closed off. Yet an Asian student who socializes only with whites is called a “twinkie,” one who is yellow on the outside but white on the inside.
A white teenager who socializes only with whites is thought of as prejudiced, yet one who socializes with Asians is considered an “egg,” white on the outside and yellow on the inside.
These culinary classifications go on endlessly, needless to say, leaving many confused, and leaving many more fearful than ever of social experimentation. Because the stereotypes are accepted almost unanimously, they are rarely challenged. Many develop harmful stereotypes of entire races. We label people before we even know them.
Labels learned at a young age later metamorphose into more visible acts of racism. For example, my parents once accused and ultimately fired a Puerto Rican cashier, believing she had stolen $200 from the register at their grocery store. They later learned it was a mistake. An Asian shopkeeper nearby once beat a young Hispanic youth who worked there with a baseball bat because he believed the boy to be lazy and dishonest.
We all hold misleading stereotypes of people that limit us as individuals in that we cheat ourselves out of the benefits different cultures can contribute. We can grow and learn from each culture whether it be Chinese, Korean or African-American.
Just recently some Asian boys in my neighborhood were attacked by a group of young white boys who have christened themselves the Master Race. Rather than being angered by this act, I feel pity for this generation that lives in a state of bigotry.
It may be too late for our parents’ generation to accept that each person can only be judged for the characteristics that set him or her apart as an individual. We, however, can do better. | https://www.facinghistory.org/for-educators/educator-resources/readings/stereotyping |
4.25 | 1. Nature can do it
If you look way, way back in our universe’s history, you get to the Big Bang. Our universe has been expanding since then (roughly 13.7 billion years) and, as determined by various cosmology models, in the very early periods of the universe, there was explosive inflation. During those periods of time, points in space would have been moving away from each other very, very rapidly. The universe is still expanding today, so we know that space flexes naturally. The question is, can we do it?
2. Alcubierre’s hypothesis
In 1994, physicist Miguel Alcubierre theorized that faster-than-light speed was possible, in a way that did not contradict Einstein. It involves the contraction and expansion of space. Here’s how it works: by manipulating spacetime, it might be possible for, say, a spaceship to generate a “warp bubble” that would expand space on one side of the craft and contract it on the opposite side, creating a sort of wave that would push the ship forward. There’s still the question of how to manipulate spacetime that way, but that’s a technicality.
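For context, the line element Alcubierre wrote down is standard in the literature, though it is not reproduced in the original article; in units where c = 1 it reads:

$$ds^2 = -dt^2 + \left(dx - v_s\, f(r_s)\, dt\right)^2 + dy^2 + dz^2$$

Here $v_s$ is the speed of the warp bubble and $f(r_s)$ is a smooth shaping function equal to 1 inside the bubble and 0 far away, so space contracts ahead of the ship and expands behind it.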
3. The energy problem

The closer to light speed a massive object is traveling, the more energy it takes to make it move. Imagine how much energy it would take to make a spaceship go even faster. Yeah. Technically, it wouldn’t quite be infinity, because the spaceship wouldn’t surpass the speed of light in a local region of space, but it would still be a lot. However, there may be ways to make it more efficient, options that many, including Dr. Harold White of NASA, are exploring.
4. Warping trajectories
Experiments that are currently underway explore the possibility of warping the trajectories taken by photons to allow them to travel greater distances without sacrificing speed by means of folding time and space around them. True, photons are a long cry from starships, but it’s one more step in the right direction.
This doesn’t quite fall under the idea of warp drive, but wormholes are another hypothetical way that a spacecraft might achieve faster-than-light speeds. The idea here is that the ship, or some exterior mechanism, would somehow create a tunnel through spacetime, entering the wormhole at a speed slower than light and reappearing at a location many light-years away. In his paper “Faster-Than-Light Space Warps, Status and The Next Steps,” astrophysicist Eric Davis described a wormhole entrance as “a sphere that contained the mirror image of a whole other universe or remote region within our universe, incredibly shrunken and distorted.” Whether or not it is technically viable remains to be seen.
6. Negative energy
Of course, a natural question is, how could we possibly manage to bend or warp (or whatever verb you prefer) spacetime? Some physicists believe (and some preliminary experiments seem to confirm) that the answer lies in negative energy. Negative energy has been successfully produced in a lab via what’s called the Casimir effect: distorting the electromagnetic fluctuations in vacuum. Theoretically, it might be possible to harness these distortions (and the negative energy they produce) at a single point and there have a wormhole.
It may be crazy and unachievable for now, but we have to keep in mind how swiftly the rate of technological innovation is increasing. Forty years ago, Star Trek had cell phones, Kirk flipping open a communicator and talking into it, and that was the future to look forward to. Now, we have phones that not only allow us to communicate with each other from wherever we are but also allow us to surf the internet, read books, take pictures and videos, play games, etc. The postulate that none may go faster than the speed of light, like some interstellar speed limit, may be a solid law for now - but hey, rules were made to be broken.
| http://www.buzzfeed.com/azazello/warp-two-mr-sulu-how-warp-drive-might-be-possib-cqjf |
4.125 | Scoliosis is a spinal deformity that causes an abnormal curvature of the spine. A normal spine when viewed from behind appears straight. A spine affected by scoliosis shows a lateral curvature, giving the appearance that the person is leaning to one side. Often the spine will be shaped like an “S” or a “C.” The curvature must be 10 degrees or greater to be considered scoliosis. Three to five of every 1,000 children will develop some form of spinal curve severe enough to require treatment.
Doctors use different terms to describe the different curvatures of the spine affected by scoliosis:
- Dextroscoliosis – The spine curves to the right. This deformity usually occurs in the middle to upper section of the back (thoracic spine). This is the most common type of curve and can occur on its own to form a “C” shape, or with another curve to form an “S” shape.
- Levoscoliosis – The spinal curve is to the left, typically occurring in the lower section of the spine (lumbar). In rare instances when this curve occurs in the thoracic spine, it may be the result of a spinal cord tumor or other spinal abnormalities.
The following are some possible symptoms of scoliosis:
- Difference in shoulder height
- Difference in hip height or position
- Difference in shoulder blade height or position
- Head not centered properly with the body
- Difference in the way the arms hang alongside the body even when the person is standing straight
- Sides of the back appear uneven in height when the person bends over.
In more severe cases, you might experience these symptoms of scoliosis:
- Shortness of breath or difficulty breathing
- Chest pain
- Leg pain
- Back pain
- Changes in bowel or bladder habits, with difficulty controlling bowel or bladder function
The majority (85 percent) of all scoliosis cases are classified as idiopathic scoliosis, meaning they have no known cause. The remaining 15 percent of scoliosis cases are linked to causes such as:
- Neuromuscular conditions – Cerebral palsy or muscular dystrophy can possibly lead to scoliosis.
- Birth defects – During fetal development, either the bones of the spine fail to separate from one another or they do not form completely.
- Arthritis – Scoliosis caused by arthritis is a degenerative form of the condition and usually occurs only in older adults.
- Spinal conditions – Conditions that affect the spine and can lead to scoliosis can include osteoporosis, vertebral compression fractures, disc degeneration and spondylosis.
- Injury – Trauma to the spine
- Nonstructural scoliosis – The spine is structurally normal and the curve is actually temporary. The condition can be corrected once the underlying cause is identified and treated.
- Structural scoliosis – The spine has a fixed and more permanent curve. This condition is usually the result of a birth defect, infection, disease or injury.
- Idiopathic scoliosis – This type of scoliosis refers to the 80 to 85 percent of scoliosis cases for which the cause is unknown. Idiopathic scoliosis is divided into three categories, depending on age range:
- Infantile idiopathic scoliosis – Affects children under three years of age
- Juvenile idiopathic scoliosis – Affects children ages three to 10
- Adolescent idiopathic scoliosis – Affects children (more girls than boys) 10 years of age and older. It's the most common type of scoliosis with an unknown cause. This condition is very likely to run in families.
Treatments for scoliosis depend upon the severity of the condition and range from nonsurgical treatments such as pain medications, observation and bracing to more invasive treatments like spinal fusion surgery.
The multidisciplinary team of spine experts at Northwell Health Orthopaedic Institute treats scoliosis as well as a broad range of spine conditions that can occur at any stage of life. | https://www.northwell.edu/find-care/conditions-we-treat/scoliosis |
4.21875 | - Not to be confused with planetary core in the core accretion theory, referring to a central accretionary body surrounded by a halo of dust and gas that serves to trap debris and increase the rate of accretion.
The planetary core consists of the innermost layer(s) of a planet, which may be composed of solid and liquid layers. The cores of specific planets may be entirely solid or entirely liquid. In the Solar System, core size can range from about 20% of a planet's radius (the Moon) to 85% (Mercury).
Gas giants also have cores, though their composition is still a matter of debate, ranging from traditional rock/iron to ice or fluid metallic hydrogen. Gas giant cores are proportionally much smaller than those of terrestrial planets, though they can nevertheless be considerably larger than the Earth's; Jupiter's core is 10–30 times more massive than Earth, and the exoplanet HD149026 b has a core 67 times the mass of the Earth.
In 1798, Henry Cavendish calculated the average density of the Earth to be 5.48 times the density of water (later refined to 5.53); this led to the accepted belief that the Earth is much denser in its interior. Following the discovery of iron meteorites, Wiechert in 1897 postulated that the Earth had a bulk composition similar to iron meteorites, with the iron having settled to the interior, and represented this by modeling the Earth with a core of the otherwise missing iron and nickel. The first detection of Earth's core occurred in 1906, when Richard Dixon Oldham discovered the P-wave shadow zone created by the liquid outer core. By 1936, seismologists had determined the size of the overall core as well as the boundary between the fluid outer core and the solid inner core.
Planetary systems form from a flattened disk of dust and gas that accretes rapidly (within thousands of years) into planetesimals around 10 km in diameter. From here gravity takes over to produce Moon- to Mars-sized planetary embryos (over 10⁵–10⁶ years), and these develop into planetary bodies over an additional 10–100 million years.
Jupiter and Saturn most likely formed around previously existing rocky and/or icy bodies, making these primordial planets into gas-giant cores. This is the planetary core accretion model of planet formation.
Planetary differentiation is broadly defined as the development from one homogeneous body into several heterogeneous components. The hafnium-182/tungsten-182 isotopic system has a half-life of 9 million years and is approximated as extinct after 45 million years. Hafnium is a lithophile element and tungsten is a siderophile element. Thus, if metal segregation (between the Earth's core and mantle) occurred in under 45 million years, silicate reservoirs develop positive Hf/W anomalies and metal reservoirs acquire negative anomalies relative to undifferentiated chondritic material. The observed Hf/W ratios in iron meteorites constrain metal segregation to under 5 million years, and the Earth's mantle Hf/W ratio indicates that Earth's core segregated within 25 million years. Several factors control segregation of a metal core, including the crystallization of perovskite. Crystallization of perovskite in an early magma ocean is an oxidation process and may drive the production and extraction of iron metal from an original silicate melt.
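As a rough added illustration (not from the source article) of why the system is treated as extinct after 45 million years:

```python
# Fraction of hafnium-182 remaining after t million years (half-life ~9 Myr).
def fraction_remaining(t_myr, half_life_myr=9.0):
    return 0.5 ** (t_myr / half_life_myr)

print(fraction_remaining(45))  # ~0.031: after five half-lives only ~3% is left,
                               # which is why the system counts as extinct
```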
Impacts between planet-sized bodies in the early Solar System are important aspects in the formation and growth of planets and planetary cores.
The giant impact hypothesis states that an impact between a theoretical Mars-sized planet Theia and the early Earth formed the modern Earth and moon. During this impact the majority of the iron from Theia and the Earth became incorporated into the Earth's core.
Determining primary composition – Earth
Using the chondritic reference model and combining known compositions of the crust and mantle, the unknown component, the composition of the inner and outer core, can be determined: 85% Fe, 5% Ni, 0.9% Cr, 0.25% Co, and all other refractory metals at very low concentration. This leaves Earth's core with a 5–10% weight deficit for the outer core and a 4–5% weight deficit for the inner core, which is attributed to lighter elements that should be cosmically abundant and iron-soluble: H, O, C, S, P, and Si. Earth's core contains half the Earth's vanadium and chromium, and may contain considerable niobium and tantalum. Earth's core is depleted in germanium and gallium.
Weight deficit components – Earth
Sulfur is strongly siderophile and only moderately volatile and depleted in the silicate earth; thus may account for 1.9 weight % of Earth's core. By similar argument; phosphorus may be present up to 0.2 weight %. Hydrogen and carbon, however, are highly volatile and thus would have been lost during early accretion and therefore can only account for 0.1 to 0.2 weight % respectively. Silicon and oxygen thus make up the remaining mass deficit of Earth's core; though the abundances of each are still a matter of controversy revolving largely around the pressure and oxidation state of Earth's core during its formation. No geochemical evidence exists to include any radioactive elements in Earth's core. Despite this, experimental evidence has found potassium to be strongly siderophile given the temperatures associated with core formation, thus there is potential for potassium in planetary cores of planets, and therefore potassium-40 as well.
Isotopic composition – Earth
Hafnium/tungsten (Hf/W) isotopic ratios, when compared with a chondritic reference frame, show a marked enrichment in the silicate earth indicating depletion in Earth's core. Iron meteorites, believed to be resultant from very early core fractionation processes, are also depleted. Niobium/tantalum (Nb/Ta) isotopic ratios, when compared with a chondritic reference frame, show mild depletion in bulk silicate Earth and the moon.
Dynamo theory is a proposed mechanism to explain how celestial bodies like the Earth generate magnetic fields. The presence or lack of a magnetic field can help constrain the dynamics of a planetary core. Refer to Earth's magnetic field for further details. A dynamo requires a source of thermal and/or compositional buoyancy as a driving force. Thermal buoyancy from a cooling core alone cannot drive the necessary convection, as indicated by modelling; thus compositional buoyancy (from changes of phase) is required. On Earth the buoyancy is derived from crystallization of the inner core (which can occur as a result of temperature). Examples of compositional buoyancy include precipitation of iron alloys onto the inner core and liquid immiscibility, both of which could influence convection positively or negatively depending on the ambient temperatures and pressures associated with the host body. Other celestial bodies that exhibit magnetic fields are Mercury, Jupiter, Ganymede, and Saturn.
Stability and instability
Small planetary cores may experience catastrophic energy release associated with phase changes within their cores. Ramsey (1950) found that the total energy released by such a phase change would be on the order of 10²⁹ joules, equivalent to the total energy released by earthquakes through geologic time. Such an event could explain the asteroid belt. Such phase changes would only occur at specific mass-to-volume ratios; an example of such a phase change would be the rapid formation or dissolution of a solid core component.
The following summarizes known information about the planetary cores of given non-stellar bodies.
Within the Solar System
Mercury has an observed magnetic field, which is believed to be generated within its metallic core. Mercury's core occupies 85% of the planet's radius, making it the largest core relative to the size of the planet in the Solar System; this indicates that much of Mercury's surface may have been lost early in the Solar System's history. Mercury has a solid silicate crust and mantle overlying a solid iron sulfide outer core layer, followed by a deeper liquid core layer, and then a possible solid inner core making a third layer.
The existence of a lunar core is still debated; however, if the Moon does have a core, it would have formed synchronously with the Earth's own core, about 45 million years after the start of the Solar System, based on hafnium-tungsten evidence and the giant impact hypothesis. Such a core may have hosted a geomagnetic dynamo early in its history.
The Earth has an observed magnetic field generated within its metallic core. The Earth has a 5–10% mass deficit for the entire core and a density deficit of 4–5% for the inner core. The Fe/Ni value of the core is well constrained by chondritic meteorites. Sulfur, carbon, and phosphorus only account for ~2.5% of the light-element component/mass deficit. No geochemical evidence exists for including any radioactive elements in the core. However, experimental evidence has found that potassium is strongly siderophile at the temperatures associated with core accretion, so potassium-40 could have provided an important source of heat contributing to the early Earth's dynamo, though to a lesser extent than on sulfur-rich Mars. The core contains half the Earth's vanadium and chromium, and may contain considerable niobium and tantalum. The core is depleted in germanium and gallium. Core-mantle differentiation occurred within the first 30 million years of Earth's history. The timing of inner core crystallization is still largely unresolved.
Mars possibly hosted a core-generated magnetic field in the past. The dynamo ceased within 0.5 billion years of the planet's formation. Hf/W isotopes derived from the Martian meteorite Zagami indicate rapid accretion and core differentiation of Mars, i.e., in under 10 million years. Potassium-40 could have been a major source of heat powering the early Martian dynamo.
Core merging between proto-Mars and another differentiated planetoid could have been as fast as 1,000 years or as slow as 300,000 years (depending on the viscosity of both cores and mantles). Impact-heating of the Martian core would have resulted in stratification of the core and killed the Martian dynamo for a duration of 150 to 200 million years. Modelling done by Williams et al. (2004) suggests that in order for Mars to have had a functional dynamo, the Martian core was initially hotter by 150 K than the mantle (agreeing with the differentiation history of the planet, as well as the impact hypothesis), and, with a liquid core, potassium-40 would have had the opportunity to partition into the core, providing an additional source of heat. The model further concludes that the core of Mars is entirely liquid, as the latent heat of crystallization would have driven a longer-lasting (greater than one billion years) dynamo. If the core of Mars is liquid, the lower bound for sulfur would be five weight %.
Jupiter has a rock and/or ice core 10–30 times the mass of the Earth, and this core is likely soluble in the gas envelope above, and so primordial in composition. Since the core still exists, the outer envelope must have originally accreted onto a previously existing planetary core. Thermal contraction/evolution models support the presence of metallic hydrogen within the core in large abundances (greater than Saturn).
Saturn has an observed magnetic field generated within its metallic core. Metallic hydrogen is present within the core (in lower abundances than in Jupiter). Saturn has a rock and/or ice core 10–30 times the mass of the Earth, and this core is likely soluble in the gas envelope above and therefore primordial in composition. Since the core still exists, the envelope must have originally accreted onto a previously existing planetary core. Thermal contraction/evolution models support the presence of metallic hydrogen within the core in large abundances (but still less than in Jupiter).
A Chthonian planet results when a gas giant has its outer atmosphere stripped away by its parent star, likely due to the planet's inward migration. All that remains from the encounter is the original core.
Planets derived from stellar cores and diamond planets
Carbon planets, previously stars, are formed alongside the formation of a millisecond pulsar. The first such planet discovered was 18 times the density of water and five times the size of Earth. Thus the planet cannot be gaseous, and must be composed of heavier elements that are also cosmically abundant, like carbon and oxygen, making it likely crystalline, like a diamond.
PSR J1719-1438 is a 5.7-millisecond pulsar found to have a companion with a mass similar to Jupiter but a density of 23 g/cm³, suggesting that the companion is an ultralow-mass carbon white dwarf, likely the core of an ancient star.
Hot ice planets
Exoplanets with moderate densities (more dense than Jovian planets, but less dense than terrestrial planets) suggest that planets like GJ1214b and GJ436 are composed primarily of water. The internal pressures of such water-worlds would result in exotic phases of water forming on their surfaces and within their cores.
- Solomon, S.C. (2007). "Hot News on Mercury's core". Science 316 (5825): 702–3. doi:10.1126/science.1142328. PMID 17478710. (subscription required)
- Williams, Jean-Pierre; Nimmo, Francis (2004). "Thermal evolution of the Martian core: Implications for an early dynamo". Geology 32 (2): 97–100. Bibcode:2004Geo....32...97W. doi:10.1130/g19975.1.
- Pollack, James B.; Grossman, Allen S.; Moore, Ronald; Graboske, Harold C. Jr. (1977). "A Calculation of Saturn’s Gravitational Contraction History". Icarus (Academic Press, Inc) 30: 111–128. Bibcode:1977Icar...30..111P. doi:10.1016/0019-1035(77)90126-9.
- Fortney, Jonathan J.; Hubbard, William B. (2003). "Phase separation in giant planets: inhomogeneous evolution of Saturn". Icarus (Academic Press) 164: 228–243. arXiv:astro-ph/0305031. Bibcode:2003Icar..164..228F. doi:10.1016/s0019-1035(03)00130-1.
- Stevenson, D. J. (1982). "Formation of the Giant Planets". Planet. Space Sci (Pergamon Press Ltd.) 30 (8): 755–764. Bibcode:1982P&SS...30..755S. doi:10.1016/0032-0633(82)90108-8.
- Sato, Bun'ei; al., et (November 2005). "The N2K Consortium. II. A Transiting Hot Saturn around HD 149026 with a Large Dense Core". The Astrophysical Journal (The American Astronomical Society) 633: 465–473. arXiv:astro-ph/0507009. Bibcode:2005ApJ...633..465S. doi:10.1086/449306.
- Cavendish, H. (1798). "Experiments to determine the density of Earth". Philosophical Transactions of the Royal Society of London 88: 469–479. doi:10.1098/rstl.1798.0022.
- Wiechert, E. (1897). "Über die Massenverteilung im Inneren der Erde" [About the mass distribution inside the Earth]. Nachrichten der Königlichen Gesellschaft der Wissenschaften zu Göttingen, Mathematische-physikalische Klasse (in German): 221–243.
- Oldham, Richard Dixon (1906). "The constitution of the interior of the Earth as revealed by Earthquakes". G.T. Geological Society of London 62: 459–486.
- Transdyne Corporation (2009). J. Marvin Herndon, ed. "Richard D. Oldham's Discovery of the Earth's Core". Transdyne Corporation.
- Wood, Bernard J.; Walter, Michael J.; Jonathan, Wade (June 2006). "Accretion of the Earth and segregation of its core". Nature Reviews (Nature) 441: 825–833. Bibcode:2006Natur.441..825W. doi:10.1038/nature04763.
- "differentiation". Merriam Webster. 2014.
- Halliday; N., Alex (February 2000). "Terrrestrial accretion rates and the origin of the Moon". Earth and Planetary Science Letters (Science) 176 (1): 17–30. doi:10.1016/s0012-821x(99)20317-9.
- "A new Model for the Origin of the Moon". SETI Institute. 2012.
- Monteaux, Julien; Arkani-Hamed, Jafar (November 2013). "Consequences of giant impacts in early Mars: Core merging and Martian Dynamo evolution". Journal of Geophysical Research: Planets (AGU Publications): 84–87.
- McDonough, W. F. (2003). "Compositional Model for the Earth's Core". Geochemistry of the Mantle and Core (Maryland: University of Maryland Geology Department): 547–568.
- Murthy, V. Rama; van Westrenen, Wim; Fei, Yingwei (2003). "Experimental evidence that potassium is a substantial radioactive heat source in planetary cores". letters to nature (Nature) 423: 163–167. Bibcode:2003Natur.423..163M. doi:10.1038/nature01560.
- Hauck, S. A.; Van Orman, J. A. (2011). "Core petrology: Implications for the dynamics and evolution of planetary interiors". The Smithosnian/NASA Astrophysics Data System (American Geophysical Union): 1–2.
- Edward R. D. Scott, "Impact Origins for Pallasites," Lunar and Planetary Science XXXVIII, 2007.
- Ramsey, W.H. (April 1950). "On the Instability of Small Planetary Cores". Royal Astronomical Society 110: 325–338. Bibcode:1950MNRAS.110..325R. doi:10.1093/mnras/110.4.325.
- NASA (2012). "MESSENGER Provides New Look at Mercury's Surprising Core and Landscape Curiosities". News Releases (The Woodlands, Texas: NASA): 1–2.
- Fegley, B. Jr. (2003). "Venus". Treatise on Geochemistry (Elsevier) 1: 487–507. Bibcode:2003TrGeo...1..487F. doi:10.1016/b0-08-043751-6/01150-6.
- Munker, Carsten; Pfander, Jorg A; Weyer, Stefan; Buchl, Anette; Kleine, Thorsten; Mezger, Klaus (July 2003). "Evolution of Planetary Cores and the Earth-Moon System from Nb/Ta Systematics". Science 301 (5629): 84–87. Bibcode:2003Sci...301...84M. doi:10.1126/science.1084662. PMID 12843390.
- ""Diamond" Planet Found; May be Stripped Star". National Geographic (National Geographic Society). 2011-08-25.
- Bailes, M.; et al. (September 2011). "Transformation of a Star into a Planet in a Millisecond Pulsar Binary". Science 333 (6050): 1717–1720. arXiv:1108.5201. Bibcode:2011Sci...333.1717B. doi:10.1126/science.1208890. PMID 21868629.
- "Hot Ice Planets". MessageToEagle. 2012-04-09. | https://en.wikipedia.org/wiki/Planetary_core |
4.0625 | Rhizoids are simple hair-like protuberances that extend from the lower epidermal cells of bryophytes, Rhodophyta and pteridophytes. They are similar in structure and function to the root hairs of vascular land plants. Similar structures are formed by algae and some fungi. Rhizoids are formed from single cells, whereas roots are multicellular organs composed of multiple tissues that collectively carry out a common function.
Plants originated in water, from where they gradually migrated to land during their long course of evolution. In water or near it, plants could absorb water from their surroundings, with no need for any special absorbing organ or tissue. Additionally, in the primitive states of plant development, tissue differentiation and division of labor were minimal, so specialized water-absorbing tissue was not needed. Once plants colonized land, however, they required specialized tissues both to absorb water efficiently and to anchor themselves to the land.
Rhizoids absorb water by capillary action, in which water moves up between threads of rhizoids and not through each of them as it does in roots.
In fungi, rhizoids are small branching hyphae that grow downwards from the stolons and anchor the fungus to the substrate, where they release digestive enzymes and absorb digested organic material. That is why fungi are called heterotrophs by absorption. In land plants, rhizoids are trichomes that anchor the plant to the ground. In the liverworts, they are absent or unicellular, but multicellular in mosses. In vascular plants they are often called root hairs, and may be unicellular or multicellular.
| https://en.wikipedia.org/wiki/Rhizoid |
4.03125 | The discriminant of an equation gives an idea of the number of roots and the nature of the roots of the equation. In other words, it "discriminates" between the possible solutions. The discriminant is the expression found under the square root part of the quadratic formula (that is, b² − 4ac). The value of b² − 4ac tells how many solutions, roots, or x-intercepts the quadratic equation will have.
To find the solutions, manipulate the quadratic equation to standard form (ax² + bx + c = 0), determine a, b, and c, and plug those values into the discriminant formula.
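As an illustrative sketch (added here, not part of the original definition; the function name is my own), the classification can be written in a few lines of Python:

```python
def classify_roots(a, b, c):
    """Classify the roots of ax^2 + bx + c = 0 using the discriminant."""
    d = b**2 - 4*a*c  # the discriminant
    if d > 0:
        return d, "two distinct real roots"
    elif d == 0:
        return d, "one repeated real root"
    else:
        return d, "two complex conjugate roots"

print(classify_roots(1, -3, 2))  # d = 1  -> two distinct real roots
print(classify_roots(1, 2, 1))   # d = 0  -> one repeated real root
print(classify_roots(1, 0, 1))   # d = -4 -> two complex conjugate roots
```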
| http://www.chegg.com/homework-help/definitions/discriminants-27 |
4 | The Native American effect
It is clear that throughout many years there has been an exception in the treatment of the Native Americans in the United States. Supposedly every individual is endowed with the rights of freedom, equality, and the pursuit of happiness, but Native Americans were treated irrationally. From the discovery of America to the founding fathers and settlers, the treatment of and attitude towards Native Americans has been unsettling at best. The colonial policies toward the Native Americans affected the Indians in ways that changed the relationship between their tribes and the new nation. Cabeza de Vaca, Roger Williams, Cotton Mather, and Benjamin Franklin all had certain views and preconceived notions when it came to the Native Americans. Amazingly enough, the varying degree of each man's perspective is the basis on which we not only view the Native Americans today, but which ultimately became the thesis for how we view diverse cultures in society.
Alvar Nunez Cabeza de Vaca is best known as the first Spaniard to explore what we now consider to be the southwestern United States. His accounts are considered especially interesting because they are among the very first documents that illustrate interactions between American natives and explorers. Throughout de Vaca’s experiences with the Native Americans, his attitude towards them grew increasingly sympathetic. Cabeza de Vaca seems to be in favor of this exploration, outwardly expressing superiority and pity towards the Indians while secretly appreciating their accommodating nature throughout the conquest in order to justify his entitlement to their land. “When the Indians took their leave of us they said they would do so as we commanded and rebuild their towns, if the Christians let them. And I solemnly swear that if they have not done so it is the fault of the Christians” (De Vaca The Norton Anthology American Lit. p.46). Although Cabeza de Vaca ultimately felt sympathetic towards the natives, he journeyed out to claim land that was clearly in the possession of the Indians; he and the other Spanish noblemen essentially stole the Indians' fortune.
Roger Williams was an American Protestant theologian and the first American proponent of religious freedom and the separation of church and state. He was a student of Native American languages and an advocate for fair dealings with Native Americans. Having learned their language and customs, Williams gave up the idea of being a missionary and never baptized a single Indian. Having established a rapport with and understanding of the Native Americans, Williams became a “keen and sympathetic observer” of the native people. He called on Puritans to deal fairly with the Native Americans. “Williams nevertheless saw that the American Indians were no better or worse than the “rogues” who dealt with them, and that in fact they possessed a marked degree of civility” (Williams The Norton Anthology American Literature p.174).
During the late years of the 17th century, the Native Americans and Puritan settlers struggled to get along. Due to their clashing views on political and cultural issues, neither faction regarded the other as a respectable group. Cotton Mather displays a totally antagonistic view towards the Native Americans. Mather portrays a negative relationship between the natives and the settlers by displaying the barbarous behavior or violent actions of those whom he considers to be culpable of wickedness. I believe that when Benjamin Franklin was writing about the Native Americans, it was for people to read and see that they were being treated unfairly. At first glimpse he makes it seem like he agrees with what the "white people" were saying about the Indians, but that was not the case. “Savages we call them, because their manners differ from ours, which we think the perfection of civility; they think the same of theirs” (Franklin The Norton...
4 | The Python interpreter can get its input from a number of sources: from a script passed to it as standard input or as program argument, typed in interactively, from a module source file, etc. This chapter gives the syntax used in these cases.
Complete Python programs
While a language specification need not prescribe how the language interpreter is invoked, it is useful to have a notion of a complete Python program. A complete Python program is executed in a minimally initialized environment: all built-in and standard modules are available, but none have been initialized, except for :mod:`sys` (various system services), :mod:`__builtin__` (built-in functions, exceptions and None) and :mod:`__main__`. The latter is used to provide the local and global namespace for execution of the complete program.
The syntax for a complete Python program is that for file input, described in the next section.
The interpreter may also be invoked in interactive mode; in this case, it does not read and execute a complete program but reads and executes one statement (possibly compound) at a time. The initial environment is identical to that of a complete program; each statement is executed in the namespace of :mod:`__main__`.
Under Unix, a complete program can be passed to the interpreter in three forms: with the :option:`-c` string command line option, as a file passed as the first command line argument, or as standard input. If the file or standard input is a tty device, the interpreter enters interactive mode; otherwise, it executes the file as a complete program.
All input read from non-interactive files has the same form:
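.. productionlist::
   file_input: (NEWLINE | `statement`)*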
This syntax is used in the following situations:
- when parsing a complete Python program (from a file or from a string);
- when parsing a module;
- when parsing a string passed to the :keyword:`exec` statement.
Input in interactive mode is parsed using the following grammar:
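.. productionlist::
   interactive_input: [`stmt_list`] NEWLINE | `compound_stmt` NEWLINE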
Note that a (top-level) compound statement must be followed by a blank line in interactive mode; this is needed to help the parser detect the end of the input.
There are two forms of expression input. Both ignore leading whitespace. The string argument to :func:`eval` must have the following form:
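.. productionlist::
   eval_input: `expression_list` NEWLINE*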
The input line read by :func:`input` must have the following form: | https://bitbucket.org/birkenfeld/sphinx/src/fc26cc90d5e88b8fbcecaa8eb0eda5b63cea9519/Doc-26/reference/toplevel_components.rst |
4.21875 | AP Chemistry/The Basics
You should remember everything here from your high-school level chemistry class.
Units and Measurement
- Fahrenheit is not used on the AP exam. Celsius (°C) and Kelvin (K) are used. Pure water freezes at 0° Celsius (273K) and boils at 100°C (373K).
- Digits 1 through 9 are significant, and so are zeroes in between them. For example, the number 209 has three significant figures.
- Zeroes to the right of all other digits are only significant if there is a decimal point written. 290 has 2 sig figs, 290. has three, and 290.0 has four.
Measured vs. Exact Numbers
Exact numbers are either defined numbers or result from a count. A dozen is defined as 12 objects. A pound is defined as 16 ounces. Measured numbers always have a limited number of significant digits. They are certain up to a limited number of digits. A mass reported as 12 grams is implied to be known to the nearest gram and not to the tenth of a gram.
The Mole and Avogadro's Number
12 grams of Carbon-12 contain exactly one mole (6.02 × 10²³) of atoms. This is a measured number known as Avogadro's number. It is easy to convert between atomic mass, grams, and particles using Avogadro's number.
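As a quick illustrative sketch (added here, not part of the original page), the conversion chain grams → moles → particles looks like this in Python:

```python
AVOGADRO = 6.02e23  # particles per mole

def grams_to_particles(mass_g, molar_mass_g_per_mol):
    """Convert grams to moles, then moles to particles."""
    moles = mass_g / molar_mass_g_per_mol
    return moles * AVOGADRO

print(grams_to_particles(12, 12))  # 12 g of carbon-12 -> ~6.02e23 atoms (one mole)
```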
Multiplying and Significant Figures
Multiplying measured numbers in Chemistry is not like multiplying in math. 5 * 92 equals 460 in math class, but it equals 500 in chemistry. This is because the 5 only has one significant figure, so the answer has to be rounded to one sig fig. If 5.0 and 92 were multiplied, on the other hand, the answer would be 460 in both subjects.
Adding and Significant Figures
- First, align all the numbers vertically, as if you were going to add them. DO NOT WRITE IN EXTRA ZEROS AS PLACEHOLDERS.
- Round to the smallest place that contains a digit in every number.
Example: 210 + 370. + 539

  210
  370.
+ 539
-----
 1119 ≈ 1120 (rounded to the tens place, since 210 is only certain to the tens)
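If you want to check such a sum programmatically (a small added example, not from the original page), Python's built-in round accepts a negative digit count for rounding to the tens, hundreds, and so on:

```python
total = 210 + 370. + 539   # = 1119.0
print(round(total, -1))    # 1120.0, rounded to the tens place
```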
States of Matter
- Solid (s) - definite shape and volume. Vibrates in place, but does not flow.
- Fluids - take the shape of their container.
- Liquid (l) - definite volume
- Gas (g) - variable volume (compressible)
History of Chemistry
- Democritus - philosopher who made the idea of atoms.
- Antoine Lavoisier - discovered the Law of Conservation of Mass, which states that mass does not appear or disappear in chemical reactions; it only rearranges.
- John Dalton - first scientist to scientifically describe atomic theory.
- Matter is made from indestructible particles called atoms.
- Atoms of the same element are the same.
- Compounds are two or more atoms bonded together.
- Chemical reactions are the rearrangement of atoms.
- J.J. Thomson - discovers the electron.
- Robert Millikan - measures the charge of the electron; combined with Thomson's charge-to-mass ratio, this gave the electron's mass.
- "Raisin Pudding model" - atoms are like pudding, with electrons as raisins.
- Ernest Rutherford - through his gold foil experiment, discovers the nucleus. Since most of the alpha particles he shot through the gold were not deflected, he concluded that most of an atom is empty space.
Modern Atomic Theory
Atoms are made up of protons, neutrons, and electrons. Protons and neutrons weigh approximately 1 AMU, and electrons have a negligible mass. Elements are determined by the number of protons in the atom, known as the atomic number. The number of neutrons varies, creating different isotopes of different mass. The atomic mass of an atom is the sum of its protons and neutrons, both of which are found in the nucleus.
Electrons are arranged into shells that surround the atom. Each shell has 1-4 subshells, which themselves have 1-7 orbitals, each of which holds two electrons.
|Shell||Subshells||Orbitals|
|1||s||1|
|2||s, p||1 + 3 = 4|
|3||s, p, d||1 + 3 + 5 = 9|
|4||s, p, d, f||1 + 3 + 5 + 7 = 16|
Filling Electron Shells
- Aufbau principle - fill the lowest energy subshells first, following the diagonal order: 1s, 2s, 2p, 3s, 3p, 4s, 3d, 4p, 5s, 4d, 5p, 6s, 4f, 5d, 6p, 7s, 5f, 6d, 7p.
- Exception: Elements within the Lanthanide and Actinide series contain nuances and do not strictly follow this pattern. Elements on either side of these series do adhere to it.
- Hund's rule - fill each orbital in a subshell with one electron before putting a second electron in any of those orbitals.
Writing Electron Configurations
E.g. sodium = 1s²2s²2p⁶3s¹
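As an added illustration (not from the original page), a short Python sketch can generate ground-state configurations by filling subshells in this order; note that it ignores the d- and f-block exceptions such as chromium and copper:

```python
# Subshells in Aufbau (Madelung) filling order, capacities 2(2l+1).
AUFBAU_ORDER = ["1s", "2s", "2p", "3s", "3p", "4s", "3d", "4p", "5s",
                "4d", "5p", "6s", "4f", "5d", "6p", "7s", "5f", "6d", "7p"]
CAPACITY = {"s": 2, "p": 6, "d": 10, "f": 14}

def electron_configuration(atomic_number):
    """Fill subshells in Aufbau order (ignores exceptions like Cr and Cu)."""
    remaining = atomic_number
    parts = []
    for subshell in AUFBAU_ORDER:
        if remaining <= 0:
            break
        electrons = min(remaining, CAPACITY[subshell[-1]])
        parts.append(f"{subshell}{electrons}")
        remaining -= electrons
    return "".join(parts)

print(electron_configuration(11))  # sodium -> "1s22s22p63s1"
```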
Valence shell electron pair repulsion (VSEPR) theory. Electrons in a compound will try to move as far apart from each other as possible. Bonded pairs repel more strongly than unbonded pairs.
The moving apart of electron pairs requires the hybridization of orbitals. These hybrids range from sp to sp³d², having two to six pairs.
Sigma and Pi Bonds
- Sigma bond - forms in all compounds
- Pi bonds - one or more are formed per extra electron pair that is shared among two atoms. These bonds are weaker than sigma bonds.
Sometimes, there is more than one "correct" way to draw a substance. In reality, the structure of the substance is an average of the drawn variations.
The Periodic Table
You should already be familiar with this. Each row is called a period, and each column is a group or family. Nonmetals and metals are separated by a jagged line on the right side. (Hydrogen is also a non-metal). Elements that border the line are called metalloids, and share characteristics with both metals and nonmetals.
The Quantum Numbers
These four numbers are used to describe the location of an electron in an atom.
|Principal Quantum Number (n)|
|Angular Momentum Quantum Number (l)|
|Magnetic Quantum Number (ml)|
|Spin Quantum Number (ms)|
Principal Quantum Number (n)
Determines the shell the electron is in. The shell is the main component that determines the energy of the electron (higher n corresponds to higher energy), as well as nuclear distance (higher n means further from the nucleus). The row that an element is placed on the periodic table tells how many shells there will be. Helium (n = 1), neon (n = 2), argon (n = 3), etc.
Angular Momentum Quantum Number (l)
Also known as azimuthal quantum number. Determines the subshell the electron is in. Each subshell has a unique shape and a letter name. The s orbital is shaped like a sphere and occurs when l = 0. The p orbitals (there are three) are shaped like teardrops and occur when l = 1. The d orbitals (there are five) occur when l = 2. The f orbitals (there are seven) occur when l = 3. (By the way, when l = 4, the orbitals are "g orbitals", but they (and the l = 5 "h orbitals") can safely be ignored in general chemistry.)
This number also tells how many angular nodes an orbital has. A node is defined as a point on a standing wave where the wave has minimal amplitude. When applied to chemistry, this is the point of zero displacement and thus where no electrons are found. In turn, an angular node is the planar or conical surface in which no electrons are found, i.e., where there is no electron density.
Diagrams of the s, p, d, and f orbitals do not show the actual path of the electrons, due to the Heisenberg Uncertainty Principle. Instead, they show the region where the electron is most likely to occur (say, 90% of the probability). In such diagrams, the two colors represent the two different spin numbers (the choice is arbitrary).
Magnetic Quantum Number (ml)
Determines the orbital in which the electron lies. For example, there are three p orbitals in shell n = 2: the magnetic quantum number determines which one of these orbitals the electrons reside in. The different orbitals are oriented at different angles around the nucleus. Each p orbital has the same general shape, but they point in different directions around the nucleus.
Spin Quantum Number (ms)
Determines the spin on the electron. +½ corresponds to the up arrow in an electron configuration box. If there is only one electron in an orbital (one arrow in one box), then it is always considered +½. The second arrow, or down arrow, is considered -½.
Let's examine the quantum numbers of electrons from a magnesium atom, 12Mg. Remember that each list of numbers corresponds to (n, l, ml, ms).
|Two s electrons:||(1, 0, 0, +½)||(1, 0, 0, -½)|
|Two s electrons:||(2, 0, 0, +½)||(2, 0, 0, -½)|
|Six p electrons:||(2, 1, -1, +½)||(2, 1, -1, -½)||(2, 1, 0, +½)||(2, 1, 0, -½)||(2, 1, 1, +½)||(2, 1, 1, -½)|
|Two s electrons:||(3, 0, 0, +½)||(3, 0, 0, -½)|
The Periodic Table
Notice a pattern on the periodic table. Different areas, or blocks, have different types of electrons. The two columns on the left make the s-block. The six columns on the right make the p-block. The large area in the middle (transition metals) makes the d-block. The bottom portion makes the f-block (Lanthanides and Actinides). Each row introduces a new shell (aka energy level). Basically, the row tells you how many shells of electrons there will be, and the column tells you which subshells will occur (and which shells they occur in). The value of ml can be determined by some of the rules we will learn in the next chapter. The value of ms doesn't really matter as long as there are no repeating values in the same orbital.
Oxidation numbers are a way of keeping track of electrons and making sure that components of a compound match by the correct ratios.
- Pure elements have an oxidation number of zero.
- Ions, monoatomic or polyatomic, have oxidation numbers equal to their charge.
- The sum of the oxidation numbers in covalent and ionic compounds must equal zero.
- Bonded Group 1 metals are +1, Group 2 are +2, and halogens are -1, unless bonded with oxygen.
- Bonded oxygen is -2 unless it is in a hydroxide (OH), where it is -1, or with fluorine, where it is positive.
- Bonded hydrogen is -1 when bonded with a metal and +1 when bonded with a nonmetal.
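As a small added example (not part of the original page), the sum-to-zero rule lets you solve for an unknown oxidation number, such as sulfur in H2SO4:

```python
# Rule: oxidation numbers in a neutral compound sum to zero.
# Find sulfur's oxidation number in H2SO4, taking H = +1 and O = -2.
hydrogen, oxygen = +1, -2
sulfur = -(2 * hydrogen + 4 * oxygen)
print(sulfur)  # 6, i.e. sulfur is +6 in H2SO4
```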
(First element's name) (Second element's name + ide) e.g. Sodium Chloride.
Hydro(nonmetal+ic) acid. E.g. Hydrobromic acid (HBr)
(First element's name) (polyatomic ion's name) e.g. Sodium Hydroxide (NaOH). Note that there is an exception - the ammonium ion (NH4+) can replace the first element.
In the following order: (hydrogen)(a nonmetal)(oxygen)
If the ion ends in -ate, the acid will be named (nonmetal)ic acid. Example: H2SO4 contains a sulfate ion. It is called sulfuric acid.
If the ion ends in -ite, the acid is named (nonmetal)ous acid.
Some elements, especially transition metals, can have many oxidation numbers. As a result, the positive oxidation number has to be written in, using Roman numerals. For example, CuO is Copper (II) oxide and Cu2O is Copper (I) oxide. | https://en.wikibooks.org/wiki/AP_Chemistry/The_Basics |
Rayleigh scattering describes how the atmosphere's gas molecules scatter light as it enters the atmosphere; it also explains why the sky is blue.
Apply Rayleigh scattering to explain common phenomena
Describe the wave–particle relationship that leads to Rayleigh scattering
This kind of light scattering is called Rayleigh scattering; it can happen to any electromagnetic wave, but only when the waves encounter particles that are much smaller than their wavelength.
The amount that light is scattered is inversely proportional to the fourth power of the light's wavelength. For this reason, shorter-wavelength light, like greens and blues, scatters more easily than longer wavelengths like yellows and reds.
As you look closer to the sky's light source, the sun, the light is scattered less and less because the angle between the sun and the scattering particles approaches 90 degrees. This is why the sun has a yellowish color when we look at it from Earth while the rest of the sky appears blue.
In outer space, where there is no atmosphere and therefore no particles to scatter the light, the sky appears black and the sun appears white.
During a sunset, the light must pass through an increased volume of air. This increases the scattering effect, causing the light in the direct path of the observer to appear orange rather than blue.
Rayleigh scattering is the elastic scattering of waves by particles that are much smaller than the wavelengths of those waves. The particles that scatter the light also need to have a refractive index close to 1. This law applies to all electromagnetic radiation, but in this atom we are going to focus specifically on why the atmosphere scatters the visible spectrum of electromagnetic waves, also known as visible light. In this case, the light is scattered by the gas molecules of the atmosphere, and the refractive index of air is 1.
Rayleigh scattering is due to the polarizability of an individual molecule; polarizability describes how much the electric charges in the molecule will vibrate in an electric field. The intensity of the light scattered by a single small particle is

$$I = I_0 \, \frac{1 + \cos^2\theta}{2R^2} \left(\frac{2\pi}{\lambda}\right)^4 \alpha^2$$

where I is the resulting intensity, I0 is the original intensity, α is the polarizability, λ is the wavelength, R is the distance to the particle, and θ is the scattering angle.
While you will probably not need to use this formula, it is important to understand that scattering has a strong dependence on wavelength. From the formula, we can see that a shorter wavelength will be scattered more strongly than a longer one (the longer the wavelength, the larger the denominator, and a larger denominator means a smaller fraction).
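A quick back-of-the-envelope check of the 1/λ⁴ dependence in Python (the wavelengths are round illustrative values, not taken from the original text):

```python
# Relative Rayleigh scattering of blue vs. red light: intensity ~ 1 / lambda^4.
blue, red = 450e-9, 700e-9   # approximate wavelengths in metres
ratio = (red / blue) ** 4
print(f"Blue light is scattered about {ratio:.1f}x more strongly than red")  # ~5.9x
```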
Why is the Sky Blue?
As we just learned, light scattering is inversely proportional to the fourth power of the light wavelength. So, the shorter the wavelength, the more it will get scattered. Since green and blue have relatively short wavelengths, you see a mixture of these colors in the sky, and the sky appears to be blue. When you look closer and closer to the sun, the light is not being scattered because it is approaching a 90-degree angle with the scattering particles. Since the light is being scattered less and less, you see the longer wavelengths, like red and yellow. This is why the sun appears to be a light yellow color.
Why are Sunsets Colorful?
(Figure: a sunset.) We know why the sky is blue, but why are there all those colors in a sunset? The reddening that occurs near the horizon is because the light has to pass through a significantly higher volume of air than when the sun is high in the sky. This increases the Rayleigh scattering effect and removes all blue light from the direct path of the observer. The remaining unscattered light is of longer wavelengths and so appears orange.
Source: Boundless. “Scattering of Light by the Atmosphere.” Boundless Physics. Boundless, 21 Jul. 2015. Retrieved 09 Feb. 2016 from https://www.boundless.com/physics/textbooks/boundless-physics-textbook/wave-optics-26/further-topics-176/scattering-of-light-by-the-atmosphere-644-1628/
The long-necked sauropod dinosaurs were the largest land animals ever to walk the Earth, but why were they so large? A decade ago a team of plant ecologists from South Africa suggested that this was due to the nature of the plant food they ate; however, these ideas have fallen out of favour with many dinosaur researchers. Now Liverpool John Moores University's (LJMU's) Dr David Wilkinson and Professor Graeme Ruxton of the University of St Andrews, Scotland, argue that this idea still has legs.
The results have been published this month in the journal Functional Ecology, published by the British Ecological Society. The authors suggest that this mistake happened because some scientists confused two different issues in thinking about this problem, namely how much energy is in the plant with how much nitrogen is in the plant: the South African ideas were based on nitrogen content, not the total energy in the plant food.
Dr Wilkinson and Professor Ruxton now argue that this South African idea, that long-necked sauropod dinosaurs were large because of the nature of the plant food they ate, is still a contender for explaining their size. As well as arguing that these ideas have been prematurely discarded, the new work goes on to further develop this theory.
Dr David M Wilkinson who is an ecologist from the LJMU School of Natural Sciences and Psychology explains: "This new study makes a first attempt to calculate in more detail the implications of this idea. It suggests that it may have been to the advantage of young sauropods trying to get enough nitrogen to have a metabolism rather like modern mammals, but that this would have been impossible for the adults because of the danger of such large animals overheating from all the heat that such a metabolism would have produced."
"Alternatively - or in addition - it would also have been potentially beneficial for the young to be carnivorous, as this would also have helped them access more nitrogen. The large adults plausibly used their size to help process large amounts of plant food to access enough scarce nitrogen, as suggested in the original 2002 study. However this would potentially have caused them to have to take in more energy than they needed. A mammal (and possibly also small sauropods) would get rid of this surplus as heat, but this would not be possible for a really large dinosaur. Potentially they may have laid down fat reserves instead. So one can even speculate that they may have had humps of fat rather like modern-day camels."
This video is one of a series from the Switch Energy project. It reviews the environmental impacts of various energy resources, including fossil fuels, nuclear, and renewables. CO2 emissions are discussed as a specific environmental impact.
In this activity, students will determine the environmental effects of existing cars and a fleet consisting of their dream cars. They compute how many tons of heat-trapping gases are produced each year, how much it costs to fuel the cars, and related information. Then, students research and prepare a report about greener transportation choices.
In this activity students trace the sources of their electricity, heating and cooling, and other components of their energy use through the use of their family's utility bills and information from utility and government websites.
This activity engages students in learning about ways to become energy efficient consumers. Students examine how different countries and regions around the world use energy over time, as reflected in night light levels. They then track their own energy use, identify ways to reduce their individual energy consumption, and explore how community choices impact the carbon footprint.
In this activity, students explore what types of energy resources exist in their state by examining a state map to identify the different energy sources in their state, including the state's renewable energy potential.
The activity follows a progression that examines the CO2 content of various gases, explores the changes in atmospheric CO2 levels from 1958 to 2000 using the Mauna Loa Keeling curve, and examines the relationship between CO2 and temperature over the past 160,000 years. This provides a foundation for examining individuals' input of CO2 to the atmosphere and how to reduce it.
This Energy Flow Charts website is a set of energy Sankey diagrams, or flow charts, for 136 countries, constructed from data maintained by the International Energy Agency (IEA) and reflecting the energy use patterns for 2007.
An international team of researchers analysed the available data from all previous studies of the Southern Ocean, together with satellite images of the area, to quantify the amount of iron supplied to the surface waters of the Southern Ocean.
They found that deep winter mixing, a seasonal process which carries colder and deeper, nutrient-rich water to the surface, plays the most important role in transporting iron to the surface. The iron is then able to stimulate phytoplankton growth, which supports the ocean's carbon cycle and the aquatic food chain.
They were also able to determine that following the winter iron surge, a recycling process is necessary to support biological activity during the spring and summer seasons.
Oceanographer, Dr Alessandro Tagliabue, from the University's School of Environmental Sciences, said: "We combined all available iron data, matched them with physical data from autonomous profiling floats and used the latest satellite estimates of biological iron demand to explore how iron is supplied to the phytoplankton in the Southern Ocean.
"This is important because iron limits biological productivity and air to sea CO2 exchange in this region. We found unique aspects to the iron cycle and how it is supplied by physical processes, making it distinct to other nutrients.
"This means that the Southern Ocean's nutrient supply would be affected by changes to the climate system (such as winds and freshwater input) differently to other areas of the ocean.
"We need to understand these unique aspects so that they can be used to better inform global climate predictions."
Dr Jean-Baptiste Sallée, from the Centre National de la Recherche Scientifique and the British Antarctic Survey, said: “We are really excited to make this discovery because until now we didn’t know the physical processes allowing iron to reach the ocean surface and maintain biological activity. The combination of strong winds and intense heat loss in winter strongly mixes the ocean surface and the mixing reaches deep iron reservoir.”
The Southern Ocean comprises the southernmost waters of the world oceans that encircle Antarctica. Researchers have long known the region is crucial in the uptake of atmospheric CO2 and that biological processes in the Southern Ocean influence the global ocean system via northward flowing currents.
The research involved the British Antarctic Survey, the Southern Ocean Carbon and Climate Observatory, Sorbonne Universités, CNRS, the University of Tasmania, the University of Cape Town, and the University of Otago.
It is published in Nature Geoscience.
In a sensory system, a sensory receptor is a sensory nerve ending that responds to a stimulus in the internal or external environment of an organism. In response to stimuli, the sensory receptor initiates sensory transduction by creating graded potentials or action potentials in the same cell or in an adjacent one.
The sensory receptors involved in taste and smell contain receptor molecules that bind to specific chemicals. Odor receptors in olfactory receptor neurons, for example, are activated by interacting with molecular structures on the odor molecule. Similarly, taste receptors (gustatory receptors) in taste buds interact with chemicals in food to produce an action potential.
Other receptors such as mechanoreceptors and photoreceptors respond to physical stimuli. For example, photoreceptor cells contain specialized proteins such as rhodopsin to transduce the physical energy in light into electrical signals. Some types of mechanoreceptors fire action potentials when their membranes are physically stretched.
Sensory receptors function as the first component in a sensory system.
Sensory receptors respond to specific stimulus modalities. The stimulus modality to which a sensory receptor responds is determined by the sensory receptor's adequate stimulus.
The sensory receptor responds to its stimulus modality by initiating sensory transduction. This may be accomplished by a net shift in the initial states of the receptor.
- Baroreceptors respond to pressure in blood vessels
- Chemoreceptors respond to chemical stimuli
- Electromagnetic radiation receptors respond to electromagnetic radiation
- Electroreceptors respond to electric fields
- Ampullae of Lorenzini respond to electric fields, salinity, and to temperature, but function primarily as electroreceptors
- Hydroreceptors respond to changes in humidity
- Magnetoreceptors respond to magnetic fields
- Mechanoreceptors respond to mechanical stress or mechanical strain
- Nociceptors respond to damage, or threat of damage, to body tissues, leading (often but not always) to pain perception
- Osmoreceptors respond to the osmolarity of fluids (such as in the hypothalamus)
- Proprioceptors provide the sense of position
- Thermoreceptors respond to temperature, either heat, cold or both
Sensory receptors can be classified by location:
- Cutaneous receptors are sensory receptors found in the dermis or epidermis.
- Muscle spindles contain mechanoreceptors that detect stretch in muscles.
Somatic sensory receptors near the surface of the skin can usually be divided into two groups based on morphology:
- Free nerve endings characterize the nociceptors and thermoreceptors and are called thus because the terminal branches of the neuron are unmyelinated and spread throughout the dermis and epidermis.
- Encapsulated receptors consist of the remaining types of cutaneous receptors. Encapsulation exists for specialized functioning.
Rate of adaptation
- A tonic receptor is a sensory receptor that adapts slowly to a stimulus and continues to produce action potentials over the duration of the stimulus. In this way it conveys information about the duration of the stimulus. Some tonic receptors are permanently active and indicate a background level. Examples of such tonic receptors are pain receptors, joint capsule, and muscle spindle.
- A phasic receptor is a sensory receptor that adapts rapidly to a stimulus. The response of the cell diminishes very quickly and then stops. It does not provide information on the duration of the stimulus; instead some of them convey information on rapid changes in stimulus intensity and rate. An example of a phasic receptor is the Pacinian corpuscle.
Different sensory receptors are innervated by different types of nerve fibers. Muscles and associated sensory receptors are innervated by type I and II sensory fibers, while cutaneous receptors are innervated by Aβ, Aδ and C fibers.
Guided reading is 'small-group reading instruction designed to provide differentiated teaching that supports students in developing reading proficiency'. The small group model allows children to be taught in a way that is intended to be more focused on their specific needs, accelerating their progress.
Guided reading is a method of teaching reading, common in England and Wales through the influence of the National Literacy Strategy (later superseded by the Primary National Strategy). It remains recommended practice in some authorities, despite the United Kingdom's Department for Education having discontinued hosting and support of the Primary National Strategy.
In the United States, Guided Reading is a key component of the Reading Workshop model of literacy instruction. Guided Reading sessions involve a teacher and a small group, ideally of two to four children, although groups of five or six are not uncommon. Each session has a set of objectives to be taught over approximately 20 minutes. While guided reading takes place with one group of children, the remaining children are engaged in quality independent or group literacy tasks, with the aim of allowing the teacher to focus on the small group without interruption. Guided Reading is usually a daily activity in English and Welsh primary school classrooms and involves every child in a class over the course of a week. In the United States, Guided Reading can take place at both the primary and intermediate levels. Each Guided Reading group meets with the teacher several times throughout a given week. The children are typically grouped by academic ability, reading levels, or strategic/skill-based needs.
Although there are positive aspects to this type of reading instruction, there are also two main challenges that exist at every grade level. According to Irene Fountas and Gay Su Pinnell, "some students will work on very basic reading skills such as word analysis and comprehending simple texts" while other students may be working on more advanced reading skills and strategies with increasingly challenging texts. In addition, "all students need instructional support so they can expand their competence across a greater variety of increasingly challenging texts." (Fountas and Pinnell). Thus, it takes a lot of strong planning and organization from the part of the teacher in order to successfully implement Guided Reading so that it meets the needs of all learners.
Text selection is a critical component of the Reading Workshop; it must be purposeful and have the needs of the learners in mind. According to Fountas and Pinnell, as a teacher reads "a text in preparation for teaching, you decide what demands the text will make on the processing systems of the readers." Texts should not be chosen to simply teach a specific strategy. Rather, the texts should be of such high quality that students can apply a wide range of reading comprehension strategies throughout the reading. "One text offers many opportunities to learn; you must decide how to mediate the text to guide your students' learning experiences" (Fountas and Pinnell).
Steps for a Lesson
Before Reading: A Teacher will access background knowledge, build schema, set a purpose for reading, and preview the text with students. Typically a group will engage in a variety of pre-reading activities such as predicting, learning new vocabulary, and discussing various text features. If applicable, the group may also engage in completing a "picture walk." This activity involves scanning through the text to look at pictures and predicting how the story will go. The students will engage in a conversation about the story, raise questions, build expectations, and notice information in the text (Fountas and Pinnell).
During Reading: The students will read independently within the group. As students read, the teacher will monitor student decoding and comprehension. The teacher may ask students if something makes sense, encourage students to try something again, or prompt them to use a strategy. The teacher makes observational notes about the strategy use of individual readers and may also take a short running record of the child's reading. The students may read the whole text or a part of the text silently or softly for beginning readers (Fountas and Pinnell).
After Reading: Following the reading, the teacher will again check students' comprehension by talking about the story with the children. The teacher returns to the text for teaching opportunities such as finding evidence or discussing problem solving. The teacher also uses this time to assess the students' understanding of what they have read. The group will also discuss reading strategies they used during the reading. To extend the reading, students may participate in activities such as drama, writing, art, or more reading (Fountas and Pinnell).
Features commonly found in a 'Guided Reading' session
- Book Introduction
Adult with group. Prepare the children, providing support through reading the title, talking about the type of text, looking at the pictures and accessing previous knowledge. Aim to give them confidence without reading the book to them. If necessary, locate and preview difficult new words and unfamiliar concepts or names. A variety of books/genres can be used.
- Strategy Check
Adult with group. Introduce or review specific reading strategies that the children have been taught and remind them to use these when reading.
- Independent Reading
Individuals. Children read the book at their own pace. Monitor individuals and use appropriate prompts to encourage problem-solving. Praise correct use of reading strategies.
- Returning to the Text
Adult with group. Briefly talk about what has been read to check children's understanding. Praise correct use of reading strategies.
- Response to the Text
Adult with group. Encourage children to respond to the book either through a short discussion where they express opinions, or through providing follow-up activities.
- Re-reading Guided Text
"Individuals." Provide a 'familiar book' box for each group, containing texts recently used in Guided Reading. Children can re-read texts to themselves or with a partner as an independent activity to give them opportunities to develop fluency and expression and build up 'reading miles'. (taken from the following publication "Guided Reading" found at: www.standards.dfee.gov.uk/literacy)
There are three models of guided reading that can be used based upon the above structure depending upon the National Curriculum (NC) level that the group is reading at. The models do overlap.
This model is used for children who are reading up to about NC level 1A/2C. In this model the book introduction, strategy check, independent reading, return to text and response to text all take place generally within one session. This is aided by the fact that the books suitable for children reading at this stage are very short. (Baker, Bickler and Bodman)
This model is used for children who are reading at NC level 2C to 3C/B. Generally two guided sessions will be needed to read a book. The first session generally focuses on the book introduction, strategy check and independent reading. Whilst children are reading at their own pace, it is important to start to introduce an element of silent reading, to develop the skills of meaning-making when reading independently. Because books at this stage are generally longer, it is not possible to read the whole book in one session. Once the children have done some reading in the session they can be asked to read the rest of the book prior to the second session. This session then focuses on returning to the text and responding to the text. These are the more able children, not those at level 1.
Readers working at a NC level of 3B upwards will need the fluent model of guided reading. Here it is not necessary for children to read the text during the guided sessions. At these levels children can generally decode the words. What is important is that they discuss the meaning that they make from the text which will form the basis of the discussion. Therefore, the session tends to focus on return to text and response to the text with the strategy check implicit in the discussions.
- http://nationalstrategies.standards.dcsf.gov.uk/node/47536 for National Curriculum reading levels
- "Research base for guided reading as an instructional approach" (PDF). Scholastic. Retrieved 18 April 2012.
- "Guided Reading". The Lancashire Grid for Learning. Retrieved 18 April 2012.
- "The National Strategies - Schools". Department for Education. Retrieved 18 April 2012.
- Fountas, I. C. & Pinnell, G. S. (1996). Guided reading: Good first teaching for all children. Portsmouth: Heinemann.
- Baker, S., Bickler, S. & Bodman, S. (2007) Book Bands for Guided Reading: A handbook to support Foundation and Key Stage 1 teachers London: Institute of Education
- Swartz, Stanley L. (2003). Guided Reading and Literacy Centers. Parsipanny, NJ: Dominie Press/Pearson Learning Group.
Every winter, millions of people will suffer from influenza, a highly contagious infection - more commonly known as the flu. Caused by a virus (germ) that infects the nose, throat, and lungs, "the flu bug" is spread from person to person mostly by the coughing and sneezing of infected persons. The period before symptoms start ranges from 1-4 days, with an average of 2 days.
Adults typically are infectious from the day before symptoms begin and can remain infectious as much as 5 days after illness starts. Children can spread viruses for around 10 days. Severely immunocompromised persons can shed virus for much longer periods of time.
To reduce your risk of influenza, follow simple preventive steps such as washing your hands often, covering coughs and sneezes, and avoiding close contact with people who are sick.
However, your best line of defense for remaining healthy during the flu season is a yearly flu vaccine. Vaccination can reduce your chance of contracting influenza, minimize your symptoms if infection does occur following the vaccine, and help prevent the spread of the infection to others.
Influenza viruses often change, making it hard to predict which strain will strike in a given year. In the spring of each year, health officials around the world work with laboratories to predict which one of the many different strains of influenza viruses will be circulating the following flu season. During this time, a new flu vaccine is developed to include 3 major current strains, including 2 A strains and 1 B strain. Influenza A and B are the two types of influenza viruses that cause epidemic human disease, also known as seasonal influenza. This vaccine is inactivated and cannot itself cause infection.
With a scheduled appointment, you can receive a flu shot at your doctor's office. Mild side-effects, including a headache or low-grade fever, may occur for a day following the vaccination. (Flu Clinic Locator)
FluMist is a new nasal spray flu vaccine that was licensed in 2003. Unlike flu shots, which are made from killed viruses, the nasal vaccine is made from live, but weakened viruses. Because of this, FluMist is advised only for healthy people aged 5 to 49. Persons in older age groups or those with immune system defects should receive the traditional inactivated vaccine.
In September, the flu shot (inactivated influenza vaccine) is offered to people at high risk. October through November is the best time to get vaccinated as flu activity typically starts in December. In the United States flu activity generally peaks between late December and early March. After November there is still benefit from getting the vaccine even if flu is present in your community.
Vaccine should continue to be offered to unvaccinated people throughout the flu season as long as vaccine is still available. Once you get vaccinated, your body makes protective antibodies in about two weeks. Because of the dwindling supplies of flu shots, the CDC recommends those with greatest need and highest risk be given the vaccination first.
The Centers for Disease Control and Prevention (CDC) in Atlanta cautions that the following people are at high risk of becoming seriously ill from influenza and should be vaccinated each year:
1. People at high risk for complications from the flu, including:
- Children aged 6 months until their 5th birthday,
- Pregnant women (if you are pregnant, confer with your physician about the vaccine),
- People 50 years of age and older, and
- People of any age with certain chronic medical conditions;
- People who live in nursing homes and other long term care facilities.
2. People who live with or care for those at high risk for complications from flu, including:
- Household contacts of persons at high risk for complications from the flu (see above)
- Household contacts and out of home caregivers of children less than 6 months of age (these children are too young to be vaccinated)
- Healthcare workers.
People with a severe allergy to eggs SHOULD NOT get the flu shot. The viruses used in flu vaccines are grown in eggs, so people who are allergic to eggs may experience a serious reaction to the vaccine.
People with asthma should avoid the nasal mist vaccine, and those who have developed an allergic reaction to previous vaccinations should avoid either vaccine type - shot or mist.
While many symptoms of influenza and colds are similar, influenza comes on suddenly, resulting in increasing weakness. Influenza is NOT stomach flu. Fatigue and a dry cough caused by influenza can last for weeks.
Can't tell if it is a cold or the flu? Visit the National Institute of Allergy and Infectious Diseases flu fact sheet.
Because an estimated 92 children ages 5 and younger die of influenza annually in the United States, it is important to recognize and treat symptoms in children quickly. Children and adolescents with a fever should not be given aspirin. Tips for Treating the Flu may help your child feel better in the meantime.
In addition to the symptoms of influenza listed for adults, children at times can have middle ear infection (otitis media), nausea, and vomiting. There are lots of viruses that cause respiratory illnesses, so you can't tell by symptoms alone if you have influenza or some other virus.
In children, some emergency warning signs that need urgent medical attention include:
In adults, some emergency warning signs that need urgent medical attention include:
Seek medical care immediately, either by calling your doctor or going to an emergency room, if you or someone you know is experiencing any of the signs described above or other unusually severe symptoms. When you arrive, notify the receptionist or nurse about your symptoms.
Influenza (the Flu): Questions & Answers (CDC)
Find the Flu Shot in Your Area (www.Flu.gov)
Focus on the Flu (National Institute of Allergy and Infectious Diseases)
Flu Information (US Food and Drug Administration)
Get Ready for Flu Blog (American Public Health Association)
National Immunization Information Hotline
Flu?Get the Shot (National Institute on Aging)
This article is a NetWellness exclusive.
Last Reviewed: Feb 20, 2007
Kurt B Stevenson, MD, MPH
Professor of Infectious Diseases
College of Medicine
The Ohio State University
Grade Range: K-12
Resource Type(s): Artifacts, Primary Sources
Date Posted: 3/27/2012
In the early nineteenth century, lighthouses in the United States were considered inferior to those in France and England. American mariners complained about the quality of the light emanating from local lighthouse towers, arguing that European lighthouses were more effective at shining bright beams of light over long distances. While American lighthouses relied on lamps and mirrors to direct mariners, European lighthouses were equipped with compact lenses that could shine for miles.
In 1822, French scientist Augustin-Jean Fresnel was studying optics and light waves. He discovered that by arranging a series of lenses and prisms into the shape of a beehive, the strength of lighthouse beams could be improved. His lens—known as the Fresnel lens—diffused light into beams that could be visible for miles. Fresnel designed his lenses in several different sizes, or orders. The first order lens, meant for use in coastal lighthouses, was the largest and the strongest lens. The sixth order lens was the smallest, designed for use in small harbors and ports.
By the 1860s, all of the lighthouses in the United States were fitted with Fresnel lenses. This lens came from a lighthouse on Bolivar Point, near Galveston, Texas. Galveston was the largest and busiest port in nineteenth-century Texas. Having a lighthouse here was imperative – the mouth of the bay provided entry to Houston and Texas City, as well as inland waterways. The Bolivar Point Light Station had second and third order Fresnel lenses over the years; this third order lens was installed in 1907. Its light could be seen from 17 miles away.
On 16-17 August 1915, a severe hurricane hit Galveston. As the storm grew worse, fifty to sixty people took refuge in the Bolivar Point Light Station. Around 9:15 PM, the light’s turning mechanism broke, forcing assistant lighthouse keeper J.B. Brooks to turn the Fresnel lens by hand. By 10 PM, the vibrations from the hurricane were so violent that Brooks began to worry the lens might shatter. He ceased turning the lens, trimmed the lamp wicks and worked to maintain a steady light through the night. The next morning, Brooks left the lighthouse to find Bolivar Point nearly swept away by the water.
Bolivar Point Light Station used this Fresnel lens until 1933. It was donated to the Smithsonian Institution by the National Park Service.
United States History Standards (Grades 5-12)
2: How the industrial revolution, increasing immigration, the rapid expansion of slavery, and the westward movement changed the lives of Americans and led toward regional tensions
3: The extension, restriction, and reorganization of political democracy after 1800
4: The sources and character of cultural, religious, and social reform movements in the antebellum period
How to identify, define and label a midpoint.
How to find the midpoint of two points.
How to derive the formula for the midpoint of a segment in three dimensions.
How to derive the area of a segment formula.
How to find the midpoint of a segment with endpoints in rectangular coordinates; how to write the midpoint formula.
How to duplicate a line segment using a compass and straightedge.
How to find the length of tangent segments drawn to a circle from the same point.
How to define and label a line segment.
How to write the parametric equations of a line segment that goes from point A to point B.
How to define a median in a triangle.
How to define a midsegment in a triangle.
How to construct a midpoint and how to define a perpendicular bisector.
How to prove two triangles are similar using a line parallel to a base.
How to define the midsegment of a trapezoid, calculate its length, and relate it to a triangle midsegment.
How to find the shortest distance between a point and a line.
How to define a triangle midsegment and describe its special properties.
How to derive the equation for the distance between two points using the Pythagorean Theorem.
How to identify a segment from the vertex angle in an isosceles triangle to the opposite side.
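For reference, the standard results that the midpoint and distance lessons above derive can be written compactly (our own summary, not part of the lesson listing):

$$M = \left(\frac{x_1 + x_2}{2},\; \frac{y_1 + y_2}{2}\right), \qquad d = \sqrt{(x_2 - x_1)^2 + (y_2 - y_1)^2}$$

In three dimensions the midpoint simply averages the z-coordinates as well: $\left(\frac{x_1 + x_2}{2}, \frac{y_1 + y_2}{2}, \frac{z_1 + z_2}{2}\right)$.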
Eclipses occur when the sun or moon, depending on the situation, is obscured. Sometimes these eclipses are total and sometimes they are partial.
Astronomers use many terms to describe what happens during an eclipse, and this glossary shows some of the more common things they say. This glossary is based upon descriptions from NASA and the Canadian Space Agency.
Annular eclipse: A "ring of fire" solar eclipse that happens when the moon's apparent size (as seen from Earth) is not quite big enough to cover the entire sun. This happens when the moon is at or near apogee, the farthest point from Earth in its orbit.
Contacts: The four points of contact that happen during an eclipse. First contact takes place when the partial phase of the eclipse begins. Second contact takes place when a total or annular phase begins. Third contact takes place when the total or annular phase ends. Fourth contact takes place when the partial phase ends.
Corona: The tenuous upper atmosphere of the sun that is usually only visible if you use special filters while observing the sun. During a solar eclipse, however, the corona is visible because the moon blots the sun's disc out from the perspective of Earth.
Eclipse magnitude: During a solar eclipse, this describes the fraction of the sun's diameter obscured by the moon at the moment of greatest eclipse.
Eclipse obscuration: During a solar eclipse, this describes the fraction of the sun's area obscured by the moon at the moment of greatest eclipse.
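The difference between magnitude (a fraction of the sun's diameter) and obscuration (a fraction of its area) is easy to mix up. Here is a small Python sketch, our own illustration rather than part of the glossary, that computes obscuration from the apparent radii of the two discs and the separation of their centers, using the standard circle-circle intersection area:

```python
import math

def eclipse_obscuration(r_sun, r_moon, d):
    """Fraction of the sun's disc AREA covered by the moon.

    r_sun, r_moon: apparent angular radii of the two discs;
    d: apparent angular separation of the disc centres (same units).
    """
    if d >= r_sun + r_moon:              # discs do not overlap: no eclipse
        return 0.0
    if d <= abs(r_sun - r_moon):         # one disc entirely inside the other
        return min(r_moon ** 2 / r_sun ** 2, 1.0)  # annular (<1) or total (1)
    # Standard lens-shaped intersection area of two circles.
    a1 = r_sun ** 2 * math.acos((d ** 2 + r_sun ** 2 - r_moon ** 2) / (2 * d * r_sun))
    a2 = r_moon ** 2 * math.acos((d ** 2 + r_moon ** 2 - r_sun ** 2) / (2 * d * r_moon))
    a3 = 0.5 * math.sqrt((-d + r_sun + r_moon) * (d + r_sun - r_moon)
                         * (d - r_sun + r_moon) * (d + r_sun + r_moon))
    return (a1 + a2 - a3) / (math.pi * r_sun ** 2)

# During partial phases, magnitude is the covered fraction of the sun's DIAMETER:
# magnitude = (r_sun + r_moon - d) / (2 * r_sun)
print(eclipse_obscuration(1.0, 1.0, 0.5))  # equal discs, centres half a radius apart
```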
Eye safety: The only time that the sun can be viewed safely with the naked eye is during a total eclipse, when the moon completely covers the disk of the sun. It is never safe to look at a partial or annular eclipse, or the partial phases of a total solar eclipse, without the proper equipment and techniques. The safest way to view a solar eclipse is to construct a pinhole camera or pinhole mirror. Acceptable filters include aluminized Mylar or shade 14 arc-welder's glass. Unacceptable filters include sunglasses, old color film negatives, black-and-white film that contains no silver, photographic neutral-density filters and polarizing filters. [Video: Safely View an Eclipse]
Hybrid eclipse: When a solar eclipse appears as annular and total, depending on where you're standing along its track. "In most cases, hybrid eclipses begin as annular, transform into total, and then revert back to annular before the end of their track," NASA wrote. "In rare instances, a hybrid eclipse may begin annular and end total, or vice versa."
Lunar eclipse: When the moon passes into the Earth's shadow. The moon appears to turn red because sunlight bent through Earth's atmosphere, carrying the light of all the planet's sunrises and sunsets at once, falls onto the moon.
Partial eclipse: When the moon and the sun (solar eclipse), or the moon and the Earth (lunar eclipse), are not perfectly aligned and only part of the moon or sun is obscured during the eclipse.
Penumbra: The lighter, outer part of a shadow seen during an eclipse. Sometimes an eclipse takes place with the penumbra only, meaning that the eclipse is partial only.
Saros cycle: A cycle describing how often and when solar and lunar eclipses happen. The cycle lasts about 18 years and describes the relative geometry of the moon, sun and Earth with regard to eclipses. It was first known by ancient Babylonian astronomers.
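The length of the saros follows from the lunar month; here is a one-line check in Python (the month length is the well-known mean synodic value, and "223 months per saros" is the standard figure):

```python
# One saros = 223 synodic months (new moon to new moon, ~29.530589 days each).
saros_days = 223 * 29.530589
print(saros_days, saros_days / 365.2425)  # ~6585.3 days, ~18.03 years
```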
Solar eclipse: When the moon passes between the sun and Earth, blotting out some or all of the light from the sun.
Totality: The zone from Earth where a lunar or solar eclipse is visible. In general, lunar eclipses are visible anywhere where the sky is dark and clear. Solar eclipses are only visible in a very small area on the Earth. Total solar eclipses can last as long as 7 minutes and 32 seconds.
Umbra: The central, darker part of a shadow seen during an eclipse. If the umbra covers the moon or sun from Earth's perspective, a total eclipse occurs.
The oldest compelling fossil evidence for cellular life has been discovered on a 3.43-billion-year-old beach in western Australia. Its grains of sand provided a home for cells that dined on sulphur in a largely oxygen-free world.
The rounded, elongated and hollow tubular cells – probably bacteria – were found to have clumped together, formed chains and coated sand grains. Similar sulphur-processing bacteria are alive today, forming stagnant black layers beneath the surface of sandy beaches.
The remarkably well-preserved three-dimensional microbes could help resolve a fierce and long-running debate about what is the oldest known fossil – or at least add to it. The current record is held by fossils that are 35 million years older than the present find in nearby deposits known as the Apex chert. But in 2002 a team led by Martin Brasier of the University of Oxford showed that the bacteria-like shapes could have formed in a mineral process that had nothing to do with life.
Brasier has now found slightly younger rocks that he claims do, this time, show convincing signs of cells. He and his team offer multiple strands of evidence, including the physical structure of the putative microfossils and the geology and chemistry of the rocks, to prove that the forms they have discovered are biological structures. They appear to have cell walls that are consistent with bacterial life, and to have clustered and even split like modern bacteria (Nature Geoscience, DOI: 10.1038/ngeo1238).
“The structure of the would-be microfossils and the chemistry of the rock suggest they are bacteria”
The fossils were excavated from an ancient beach – now a sandstone formation near Strelley pool in the Pilbara region – by Brasier’s colleague David Wacey from the University of Western Australia in Crawley.
“Nobody had looked at fossil beach deposits because it was thought oxygen had caused the decay of all traces of life there,” says Brasier. “In fact, there was minimal oxygen in the atmosphere at this time, meaning that the fossils could preserve well.”
So is it just a matter of time before another, older group of cells is found? Not necessarily. Fossils older than 3.5 billion years are unlikely, as sedimentary rocks from that time are exceptionally rare and likely to have metamorphosed beyond recognition.
Virginia in the American Civil War
Commonwealth of Virginia

| Largest city | Virginia Beach |
| Admission to Confederacy | May 7, 1861 (8th) |
| Population | 1,102,312 free; 490,887 slave |
| Forces supplied | 155,000 total |
| Casualties | c. 30,000 dead |
| Senators | William Ballard Preston, Allen T. Caperton, Robert M. T. Hunter |
| Restored to the Union | January 26, 1870 |
The Commonwealth of Virginia was a prominent part of the Confederate States of America during the American Civil War. A slave-holding state, Virginia convened a state convention to deal with the secession crisis, which opened on February 13, 1861, only a week after seven seceding states had formed the Confederacy. The convention deliberated for several weeks, with Unionist delegates dominant, and defeated a motion to secede on April 4. Opinion shifted after April 15, when U.S. President Abraham Lincoln called for troops from all states still in the Union in response to the Confederate capture of Fort Sumter. On April 17, the Virginia convention voted to declare secession from the Union, pending ratification of the decision by the voters.
With the entry of Virginia into the Confederacy, a decision was made in May to move the Confederate capital from Montgomery, Alabama, to Richmond, in part because the defense of Virginia's capital was deemed strategically vital to the Confederacy's survival regardless of its political status. Virginia ratified the articles of secession on May 23. The following day, the U.S. Army moved into northern Virginia and captured Alexandria without a fight.
Most of the battles in the Eastern Theater of the American Civil War took place in Virginia because the Confederacy had to defend its national capital at Richmond, and public opinion in the North demanded that the Union move "On to Richmond!" The successes of Robert E. Lee in defending Richmond is a central theme of the military history of the war. The White House of the Confederacy, located a few blocks north of the State Capitol, was home to the family of Confederate leader Jefferson Davis.
On October 16, 1859, the radical abolitionist John Brown led a group of 22 men in a raid on the Federal Arsenal in Harpers Ferry, Virginia. U.S. troops, led by Robert E. Lee, responded and quelled the raid. Subsequently, John Brown was tried and executed by hanging in Charles Town on December 2, 1859.
In 1860 the Democratic Party split into northern and southern factions over the issue of slavery in the territories and Stephen Douglas' support for popular sovereignty: after failing in both Charleston and Baltimore to nominate a single candidate acceptable to the South, Southern Democrats held their convention in Richmond, Virginia on June 26, 1860 and nominated John C. Breckinridge as their party candidate for President.
When Republican Abraham Lincoln was elected as U.S. president, Virginians were concerned about the implications for their state. While a majority of the state would look for compromises to the sectional differences, most people also opposed any restrictions on slaveholders' rights. As the state watched to see what South Carolina would do, many Unionists felt that the greatest danger to the state came not from the North but from "rash secession" by the lower South.
Call for secession convention
On November 15, 1860 Virginia Governor John Letcher called for a special session of the Virginia General Assembly to consider, among other issues, the creation of a secession convention. The legislature convened on January 7 and approved the convention on January 14. On January 19 the General Assembly called for a national Peace Conference, to be led by former U.S. President John Tyler of Virginia and held in Washington on February 4, the same date as the elections scheduled for delegates to the secession convention.
The election of convention delegates drew 145,700 voters who elected, by county, 152 representatives. Thirty of these delegates were secessionists, thirty were unionists, and ninety-two were moderates who were not clearly identified with either of the first two groups. Nevertheless, advocates of immediate secession were clearly outnumbered. Simultaneous to this election, six Southern slave states formed the Confederate States of America on February 4.
According to one Virginian, William M. Thompson, declaring secession was necessary for the slave states to preserve slavery and to prevent marriages between freedmen and the white "daughters of the South"; civil war, he argued, would be preferable.
The convention met on February 13 at the Richmond Mechanics Institute, located at Ninth and Main Street in Richmond. One of the convention's first actions was to create a 21-member Federal Relations Committee charged with reaching a compromise to the sectional differences as they affected Virginia. The committee was made up of 4 secessionists, 10 moderates and 7 unionists. At first there was no urgency to the convention's deliberations, as all sides felt that time only aided their cause. In addition, there were hopes that the Peace Conference of 1861, led by John Tyler, might resolve the crisis by, in historian Edward L. Ayers's words, "guaranteeing the safety of slavery forever and the right to expand slavery in the territories below the Missouri Compromise line." With the failure of the Peace Conference at the end of February, moderates in the convention began to waver in their support for unionism. Unionist support was further eroded for many Virginians by Lincoln's March 4 First Inaugural address, which they felt was "argumentative, if not defiant." Throughout the state there was evidence that support for secession was growing.
At the Virginian secession convention in February 1861, Georgian Henry Lewis Benning, who would later join the Confederate army as an officer, delivered a speech in which he gave his reasoning for urging secession from the Union, appealing to ethnic prejudices and pro-slavery sentiments. He stated that were the slave states to remain in the Union, their slaves would ultimately end up being freed by the anti-slavery Republican Party, and that he would rather be stricken with illness and starvation than see African Americans liberated from slavery and given equality as citizens:
What was the reason that induced Georgia to take the step of secession? This reason may be summed up in one single proposition. It was a conviction, a deep conviction on the part of Georgia, that a separation from the North-was the only thing that could prevent the abolition of her slavery. ... If things are allowed to go on as they are, it is certain that slavery is to be abolished. By the time the north shall have attained the power, the black race will be in a large majority, and then we will have black governors, black legislatures, black juries, black everything. Is it to be supposed that the white race will stand for that? It is not a supposable case ... war will break out everywhere like hidden fire from the earth, and it is probable that the white race, being superior in every respect, may push the other back. ... we will be overpowered and our men will be compelled to wander like vagabonds all over the earth; and as for our women, the horrors of their state we cannot contemplate in imagination. That is the fate which abolition will bring upon the white race. ... We will be completely exterminated, and the land will be left in the possession of the blacks, and then it will go back to a wilderness and become another Africa... Suppose they elevated Charles Sumner to the presidency? Suppose they elevated Fred Douglass, your escaped slave, to the presidency? What would be your position in such an event? I say give me pestilence and famine sooner than that.
The Federal Relations Committee made its report to the convention on March 9. The fourteen proposals defended both slavery and states' rights while calling for a meeting of the eight slave states still in the Union to present a united front for compromise. From March 15 through April 14 the convention debated these proposals one by one. During the debate on the resolutions, the sixth resolution calling for a peaceful solution and maintenance of the Union came up for discussion on April 4. Lewis Edwin Harvie of Amelia County offered a substitute resolution calling for immediate secession. This was voted down by 88 to 45 and the next day the convention continued its debate. Approval of the last proposal came on April 12. The goal of the unionist faction after this approval was to adjourn the convention until October, allowing time for both the convention of the slave states and Virginia's congressional elections in May which, they hoped, would produce a stronger mandate for compromise.
One delegate reiterated the state's cause of secession and the purpose of the convention:
Sir, the great question which is now uprooting this Government to its foundation – the great question which underlies all our deliberations here, is the question of African slavery.— Thomas F. Goode, speech to the Virginia Secession Convention, (March 28, 1861).
Ultimately, the convention declared that slavery should continue, and that it should be extended into U.S. territories:
Proposals Adopted by the Virginia Convention of 1861
- The first resolution asserted states' rights per se.
- The second called for retention of slavery.
- The third opposed sectional parties.
- The fourth called for equal recognition of slavery in both territories and non-slave states.
- The fifth demanded the removal of federal forts and troops from seceded states.
- The sixth hoped for a peaceable adjustment of grievances and maintenance of the Union.
- The seventh called for constitutional amendments to remedy federal and state disputes.
- The eighth recognized the right of secession.
- The ninth said the federal government had no authority over seceded states since it refused to recognize their withdrawal.
- The tenth said the federal government was empowered to recognize the Confederate States.
- The eleventh was an appeal to Virginia's sister states.
- The twelfth asserted Virginia's willingness to wait a reasonable period of time for an answer to its propositions, provided no one resorted to force against the seceded states.
- The thirteenth asked the United States and Confederate States governments to remain peaceful.
- The fourteenth asked the border slave states to meet in conference to consider Virginia's resolutions and to join in Virginia's appeal to the North.
At the same time, Unionists were concerned about the continued presence of U.S. forces at Fort Sumter despite assurances communicated informally to them by U.S. Secretary of State William Seward that it would be abandoned. Lincoln and Seward were also concerned that the Virginia convention was still in session as of the first of April while secession sentiment was growing. At Lincoln's invitation, unionist John B. Baldwin of Augusta County met with Lincoln on April 4. Baldwin explained that the unionists needed the evacuation of Fort Sumter, a national convention to debate the sectional differences, and a commitment by Lincoln to support constitutional protections for southern rights. Over Lincoln's skepticism, Baldwin argued that Virginia would be out of the Union within forty-eight hours if either side fired a shot at the fort. By some accounts, Lincoln offered to evacuate Fort Sumter if the Virginia convention would adjourn.
On April 6, amid rumors that the North was preparing for war, the convention voted by a narrow 63-57 margin to send a three-man delegation to Washington to learn from Lincoln what his intentions were. However, due to bad weather, the delegation did not arrive in Washington until April 12. They learned of the attack on Fort Sumter from Lincoln, and the President advised them of his intent to hold the fort and respond to force with force. Reading from a prepared text to prevent any misinterpretation of his intent, Lincoln told them that he had made it clear in his inaugural address that the forts and arsenals in the South were government property and "if ... an unprovoked assault has been made upon Fort Sumter, I shall hold myself at liberty to re-possess, if I can, like places which have been seized before the Government was devolved upon me."
The pro-Union sentiment in Virginia was further weakened after the April 12 Confederate attack upon Fort Sumter. Richmond reacted with large public demonstrations in support of the Confederacy on April 13 when it first received the news of the attack. A Richmond newspaper described the scene in Richmond on the 13th:
- "Saturday night the offices of the Dispatch, Enquirer and Examiner, the banking house of Enders, Sutton & Co., the Edgemont House, and sundry other public and private places, testified to the general joy by brilliant illuminations.
- Hardly less than ten thousand persons were on Main street, between 8th and 14th, at one time. Speeches were delivered at the Spottswood House, at the Dispatch corner, in front of the Enquirer office, at the Exchange Hotel, and other places. Bonfires were lighted at nearly every corner of every principal street in the city, and the light of beacon fires could be seen burning on Union and Church Hills. The effect of the illumination was grand and imposing. The triumph of truth and justice over wrong and attempted insult was never more heartily appreciated by a spontaneous uprising of the people. Soon the Southern wind will sweep away with the resistless force of a tornado, all vestige of sympathy or desire of co-operation with a tyrant who, under false pretences, in the name of a once glorious, but now broken and destroyed Union, attempts to rivet on us the chains of a despicable and ignoble vassalage. Virginia is moving."
The convention reconvened on April 13 to reconsider Virginia's position, given the outbreak of hostilities. With Virginia still in a delicate balance and no firm determination yet to secede, sentiment turned more strongly toward secession on April 15, following President Abraham Lincoln's call on all states that had not declared secession, including Virginia, to furnish troops to help suppress the insurrection and recover the captured forts.
War Department, Washington, April 15, 1861. To His Excellency the Governor of Virginia: Sir: Under the act of Congress for calling forth "militia to execute the laws of the Union, suppress insurrections, repel invasions, etc.," approved February 28, 1795, I have the honor to request your Excellency to cause to be immediately detached from the militia of your State the quota designated in the table below, to serve as infantry or rifleman for the period of three months, unless sooner discharged. Your Excellency will please communicate to me the time, at or about, which your quota will be expected at its rendezvous, as it will be met as soon as practicable by an officer to muster it into the service and pay of the United States.— Simon Cameron, Secretary of War.
The quota attached for Virginia called for three regiments totaling 2,340 men, to rendezvous at Staunton, Wheeling and Gordonsville. Governor Letcher and the recently reconvened Virginia Secession Convention considered this request from Lincoln "for troops to invade and coerce" to be lacking in constitutional authority and outside the scope of the Act of 1795. Governor Letcher's "reply to that call wrought an immediate change in the current of public opinion in Virginia"; he issued the following reply:
Executive Department, Richmond, Va., April 15, 1861. Hon. Simon Cameron, Secretary of War: Sir: I have received your telegram of the 15th, the genuineness of which I doubted. Since that time I have received your communications mailed the same day, in which I am requested to detach from the militia of the State of Virginia "the quota assigned in a table," which you append, "to serve as infantry or rifleman for the period of three months, unless sooner discharged." In reply to this communication, I have only to say that the militia of Virginia will not be furnished to the powers at Washington for any such use or purpose as they have in view. Your object is to subjugate the Southern States, and a requisition made upon me for such an object - an object, in my judgment, not within the purview of the Constitution or the act of 1795 - will not be complied with. You have chosen to inaugurate civil war, and, having done so, we will meet it in a spirit as determined as the administration has exhibited toward the South.— Respectfully, John Letcher
Thereafter, the secession convention voted on April 17, provisionally, to secede, on the condition of ratification by a statewide referendum. The ordinance of secession adopted that day cited as the immediate cause of Virginia's secession "the oppression of the Southern slave-holding States".
E.L. Ayers, who felt that "even Fort Sumter might have passed, however, had Lincoln not called for the arming of volunteers", wrote of the convention's final decision:
The decision came from what seemed to many white Virginians the unavoidable logic of the situation: Virginia was a slave state; the Republicans had announced their intention of limiting slavery; slavery was protected by the sovereignty of the state; an attack on that sovereignty by military force was an assault on the freedom of property and political representation that sovereignty embodied. When the federal government protected the freedom and future of slavery by recognizing the sovereignty of the states, Virginia's Unionists could tolerate the insult the Republicans represented; when the federal government rejected that sovereignty, the threat could no longer be denied even by those who loved the Union.
The Governor of Virginia immediately began mobilizing the Virginia State Militia to strategic points around the state. Former Governor Henry Wise had arranged with militia officers on April 16, before the final vote, to seize the United States arsenal at Harpers Ferry and the Gosport Navy Yard in Norfolk. On April 17, in the debate over secession, Wise announced to the convention that these events were already in motion. On April 18 the arsenal was captured and most of the machinery was moved to Richmond. At Gosport, the Union Navy, believing that several thousand militia were headed their way, evacuated and abandoned Norfolk, Virginia, and the navy yard, burning as many of the ships and facilities as possible.
Colonel Robert E. Lee resigned his U.S. Army commission, turning down an offer of high command in the Union army; he ultimately joined the Confederate army instead.
The referendum was a perfunctory endorsement of Governor Letcher's decision to join the Confederacy and was not a free and fair election. The Confederate Congress proclaimed Richmond the new capital of the Confederacy, and Confederate troops moved into northern Virginia before the referendum was held. The actual number of votes for or against secession is unknown, since votes in many counties in northwestern and eastern Virginia (where most of Virginia's unionists lived) were "discarded or lost." Governor Letcher "estimated" the vote for these areas. Many unionists feared retaliation if they voted against secession: the ballot was not secret, so Virginia's pro-Confederate government would have a record of their votes. Unionists who did attempt to vote were threatened with violence and, on some occasions, death. Voting in Virginia was restricted to white males 21 years of age and above.
The reaction to the referendum was swift on both sides. Confederate troops shut down the Baltimore and Ohio Railroad, one of Washington City's two rail links to Ohio and points west. The next day, the U.S. Army moved into northern Virginia. With both armies now in northern Virginia, the stage was set for war. In June, Virginia unionists met at the Wheeling Convention to set up the Restored Government of Virginia. Francis Pierpont was elected governor. The restored government raised troops to defend the Union and appointed two senators to the United States Senate. It resided in Wheeling until August 1863, when it moved to Alexandria following West Virginia's admission to the Union. During the summer of 1861, parts of northern, western and eastern Virginia, including the Baltimore and Ohio Railroad, were returned to Union control. Norfolk returned to Union control in May 1862. These areas would be administered by the Restored Government of Virginia, with the northwestern counties later becoming the new state of West Virginia. In April 1865, Francis Pierpont and the Restored Government of Virginia moved to Richmond.
Virginia's strategic resources played a key role in dictating the objectives of the war there. Its agricultural and industrial capacity, and the means of transporting this production, were major strategic targets for attack by Union forces and defense by Confederate forces throughout the war.
The Confederate need for war materiel played a very significant role in its decision to move its capital from Montgomery, Alabama to Richmond in May 1861, despite its dangerous northern location 100 miles south of the United States capital in Washington, DC. It was mainly for this industrial reason that the Confederates fought so hard to defend the city. The capital of the Confederacy could easily be moved again if necessary, but Richmond's industry and factories could not be moved.
Richmond was the only large-scale industrial city controlled by the Confederacy during most of the Civil War. The city's warehouses were the supply and logistical center for Confederate forces. The city's Tredegar Iron Works, the third-largest foundry in the United States at the start of the war, produced most of the Confederate artillery, including a number of giant rail-mounted siege cannons. The company also manufactured railroad locomotives, boxcars and rails, as well as steam propulsion plants and iron plating for warships. Richmond's factories also produced guns, bullets, tents, uniforms, harnesses, leather goods, swords, bayonets, and other war materiel. A number of textile plants, flour mills, brick factories, newspapers and book publishers were located in Richmond. Richmond had shipyards too, although they were smaller than the shipyards controlled by the Union in Norfolk, Virginia.
The city's loss to the Union army in April 1865 made a Union victory in the Civil War inevitable. With Virginia firmly under Union control, including the industrial centers of Richmond, Petersburg and Norfolk, the mostly rural and agricultural deep south lacked the industry needed to supply the Confederate war effort.
At the outbreak of the war, Petersburg, Virginia, was second only to Richmond among Virginia cities in terms of population and industrialization. The junction of five railroads, it provided the only continuous rail link to the Deep South. Located 20 miles (32 km) south of Richmond, its defense was a top priority; the day that Petersburg fell, Richmond fell with it.
In the western portion of the state (as defined today), the Shenandoah Valley was considered the "Breadbasket of the Confederacy". The valley was connected to Richmond via the Virginia Central Railroad and the James River and Kanawha Canal.
The Blue Ridge Mountains and similar sites had long been mined for iron, though as the war progressed, shortages in manpower limited production. In southwest Virginia, the large salt works at Saltville provided a key source of salt to the Confederacy, essential for preserving food for use by the army; it was the target of two battles.
Virginia during the war
The first and last significant battles of the war were fought in Virginia: the first was the First Battle of Bull Run, and the last the Battle of Appomattox Court House. From May 1861 to April 1865, Richmond was the capital of the Confederacy. The White House of the Confederacy, located a few blocks north of the State Capitol, was home to the family of Confederate President Jefferson Davis.
The first major battle of the Civil War occurred on July 21, 1861. Union forces attempted to take control of the railroad junction at Manassas for use as a supply line, but the Confederate Army had moved its own forces by train to meet them. The Confederates won the First Battle of Bull Run (known as the "First Battle of Manassas" in southern naming convention), and the rest of the year passed without another major battle in Virginia.
In 1862, Union general George B. McClellan was forced to retreat from Richmond by Robert E. Lee's army during the Peninsula Campaign. Union general John Pope was then defeated at the Second Battle of Manassas. The year ended with the one-sided Confederate victory at the Battle of Fredericksburg.
When fighting resumed in the spring of 1863, Union general Hooker was defeated at Chancellorsville by Lee's army.
In 1864, Ulysses S. Grant's Overland Campaign was fought in Virginia. The campaign included battles of attrition at the Wilderness, Spotsylvania and Cold Harbor, and ended with the Siege of Petersburg and Confederate defeat.
In September 1864, the Southern Punch, a newspaper based in Richmond, reiterated the Confederacy's cause.
In April 1865, a fire set in Richmond by the retreating Confederate Army burned 25 percent of the city before being put out by the Union army, whose intervention saved the rest of the city from widespread conflagration and ruin. Even so, the burned district included much of Richmond's business and industrial core.
West Virginia splits
The western counties could not tolerate the Confederacy; they formed a pro-Union state government of Virginia in 1861 (recognized by Washington), then with its permission formed the new state of West Virginia in 1863.
At the Richmond secession convention on April 17, 1861, the delegates from the western counties voted 17 in favor of secession and 30 against. From May to August 1861, a series of unionist conventions met in Wheeling; the Second Wheeling Convention constituted itself as a legislative body called the Restored Government of Virginia. It declared that Virginia was still in the Union but that the state offices were vacant, and it elected a new governor, Francis H. Pierpont. This body gained formal recognition from the Lincoln administration on July 4, and Congress seated its senators and representatives. On August 20 the Wheeling body passed an ordinance for the creation of a new state, which was put to a public vote on October 24. The vote favored the new state of West Virginia, distinct from the Pierpont government, which persisted until the end of the war. Congress and Lincoln approved, and, after providing for gradual emancipation of slaves in the new state constitution, West Virginia became the 35th state on June 20, 1863.
During the war, West Virginia contributed about 32,000 soldiers to the Union Army and about 10,000 to the Confederate cause. Richmond, of course, did not recognize the new state, and Confederates did not vote there. Everyone realized the decision would be made on the battlefield, and Richmond sent in Robert E. Lee. But Lee found little local support and was defeated by Union forces from Ohio. Union victories in 1861 drove the Confederate forces out of the Monongahela and Kanawha valleys, and throughout the remainder of the war the Union held the region west of the Alleghenies and controlled the Baltimore and Ohio Railroad in the north.
Virginians in the Civil War
Virginia's Confederate government fielded about 150,000 troops in the American Civil War. They came from all economic and social levels, including some Unionists and former Unionists. However, at least 30,000 of these men were actually from other states; most of the non-Virginians were from Maryland, whose government was controlled by Unionists during the war. Another 20,000 of these troops were from what would become the State of West Virginia in June 1863. Important Confederates from Virginia included General Robert E. Lee, commander of the Army of Northern Virginia, General Stonewall Jackson, and General J.E.B. Stuart.
Roughly 50,000 Virginians served in the Union military, including West Virginians and roughly 6,000 Virginians of African ancestry. Some of these men served in Maryland units. Some African Americans, both freedmen and runaway slaves, enlisted in states as far away as Massachusetts. Areas of Virginia that supplied Union soldiers and sent few or no men to fight for the Confederacy had few slaves, a high percentage of poor families, and a history of opposition to secession. These areas were located near northern states and were often under Union control. 40% of Virginia's officers in the United States military when the war started stayed and fought for the Union. These men included Winfield Scott, General-in-Chief of the Union Army, David G. Farragut, First Admiral of the Union Navy, and General George Henry Thomas.
At least one Virginian reportedly served in both the Confederate and Union armies. At the beginning of the war, a Confederate soldier from Fairfax County approached the Union soldiers guarding Chain Bridge in his Confederate uniform. Asked what he was doing trying to cross the bridge, he responded that he was travelling to Washington, D.C. to see his uncle. When the perplexed Union soldiers asked who his uncle was, the soldier replied, "Uncle Sam." He was quickly enlisted as a Union scout because of his knowledge of the local terrain.
Notable Civil War leaders (Confederate) from Virginia
Robert E. Lee
Thomas J. Jackson
John S. Mosby
Joseph E. Johnston
A. P. Hill
Richard S. Ewell
Jubal A. Early
Lewis A. Armistead
John B. Floyd, Brig. Gen. (former Gov.)
James Murray Mason, Commr. to U.K. & France
Robert M. T. Hunter
Notable Civil War leaders (Union) from Virginia
David G. Farragut
Samuel Phillips Lee
George Henry Thomas
John Newton (engineer)
John Davidson (general)
Philip St. George Cooke
William R. Terrill
Alexander Brydie Dyer
William Hays (general)
Waitman T. Willey
John S. Carlile
Lemuel J. Bowden
Elizabeth Van Lew, abolitionist and Richmond spy ring leader
Numerous battlefields and sites have been partially or fully preserved in Virginia. Those managed by the Federal government include Manassas National Battlefield Park, Richmond National Battlefield Park, Fredericksburg and Spotsylvania National Military Park, Cedar Creek and Belle Grove National Historical Park, Petersburg National Battlefield, and Appomattox Court House National Historical Park.
Virginia Places in the American Civil War
Civil War Battles in Virginia (chronologically)
- McPherson pp. 213-216
- Link p. 217. Link wrote, "Although a majority probably favored compromise, most opposed any weakening of slaveholders' protections. Even so-called moderates – mostly Whigs and Douglas Democrats – opposed the sacrifice of these rights, and they rejected any acquiescence or 'submission' to federal coercion. ... To a growing body of Virginians, Lincoln's election meant the onset of an active war against southern institutions. These men shared a common fear of northern Republicans and a common suspicion of a northern conspiracy against the South."
- Ayers p. 86
- Link p. 224
- Robertson pp. 3-4. Robertson, clarifying the position of the moderates, wrote, "However, the term 'unionist' had an altogether different meaning in Virginia at the time. Richmond delegates Marmaduke Johnson and William McFarland were both outspoken conservatives. Yet in their respective campaigns, each declared that he was in favor of separation from the Union if the federal government did not guarantee protection of slavery everywhere. Moreover, the threat of the federal government's using coercion became an overriding factor in the debates that followed."
- Thompson, William M. (February 2, 1861). "Letter to Warner A. Thompson". Virginia. Retrieved September 6, 2015.
- McPherson, James M. For Cause and Comrades: Why Men Fought in the Civil War. p. 19. Retrieved September 6, 2015.
- Link p. 227
- Robertson p. 5
- Ayers pp. 120-123
- Potter pp. 545-546. Nevins pp. 411-412. The conference's recommendations, which differed little from the Crittenden Compromise, were defeated in the Senate by a 28 to 7 vote and were never voted on by the House.
- Robertson p. 8. Robert E. Scott of Fauquier County noted that this failure and the North's apparent indifference to southern concerns "extinguished all hope of a settlement by the direct action of those States, and I at once accepted the dissolution of the existing Union ... as a necessity."
- Robertson p. 8. Robertson quotes an observer of the speech saying, "Mr. Lincoln raised his voice and distinctly emphasized the declaration that he must take, hold, possess, and occupy the property (e.g. slaves) and places [in the South] belonging to the United States. This was unmistakable, and he paused for a moment after closing the sentence as if to allow it to be fully taken in and comprehended by his audience."
- Robertson p. 9. Robertson writes, "Although some leaders such as Governor Letcher still believed that 'patience and prudence' would 'work out the results,' a growing, uncontrollable attitude for war was sweeping through the state. Militia units were organizing from the mountains to the Tidewater. Newspapers in Richmond and elsewhere maintained a steady heat, noisy partisans filled the convention galleries, and at night large crowds surged through the capital streets 'with bands of music and called out their favorite orators at the different hotels.'"
- Rhea, Gordon (January 25, 2011). "Why Non-Slaveholding Southerners Fought". Civil War Trust. Civil War Trust. Retrieved March 21, 2011.
- Benning, Henry L. (February 18, 1861). "Speech of Henry Benning to the Virginia Convention". Proceedings of the Virginia State Convention of 1861. pp. 62–75. Retrieved March 17, 2015.
- Robertson p. 13. The committee report represented the moderate/unionist position; the vote in committee was 12 in favor, 2 against, with 7 abstaining.
- Riggs p. 268
- Robertson p. 15
- Link p. 235
- Goode, Thomas F. "Virginia Secession Convention". Mecklenburg County, Virginia. p. 518. Retrieved September 8, 2015.
- Riggs p. 264. Riggs made his summary based on Proceedings of the Virginia State Convention of 1861, Volume 1, pp. 701-716
- Potter p. 355
- Klein pp. 381-382. Ayers (p. 125) notes that Baldwin had said that "there is but one single subject of complaint which Virginia has to make against the government under which we live; a complaint made by the whole South, and that is the subject of African slavery."
- Klein pp. 381-382. Baldwin denied receiving the offer to evacuate Fort Sumter, but the next day Lincoln told another Virginia unionist, John Minor Botts, that the offer had been made. In any event, the offer was never presented to the convention.
- Robertson pp. 14-15. Furgurson pp. 29-30.
- McPherson p. 278.
- Furgurson p. 32.
- Richmond Daily Dispatch, April 15, 1861.
- "On This Day: Legislative Moments in Virginia History". Virginia Historical Society.
- "Lincoln Call for Troops".(page includes TWO documents)
- Clement A. Evans, Confederate Military History, Volume III - Virginia, pt. 1, p. 38
- Virginia Secession Convention (April 17, 1861). "Virginia Ordinance of Secession". Virginia: Virginia Secession Convention. Retrieved March 19, 2015.
- Ayers p. 140
- Ayers p. 141
- McPherson pp. 279-280
- Virginia Historical Society
- West Virginia Division of Culture and History
- Library of Virginia
- Encyclopedia Virginia
- "The New Heresy". Southern Punch. Richmond: John Wilford Overall. September 19, 1864. Retrieved September 8, 2015.
- Coski, John M. (2005). The Confederate Battle Flag: America's Most Embattled Emblem. Retrieved July 2, 2015.
- The Civil War: A History. https://books.google.com/books?id=g7ABIgo_yDgC&pg=PP1&ots=wJAnYXykKy&dq=civil+war+history&sig=7KIo6o_w7hTI5hY3HK1_d_FJ7YM#PPP9,M1
- In the statewide vote on May 23, 1861 on secession, the 50 counties of the future West Virginia voted 34,677 to 19,121 to remain in the Union. Richard O. Curry, A House Divided, Statehood Politics & the Copperhead Movement in West Virginia (1964), pp. 141-147.
- Curry, A House Divided, p. 73.
- Curry, A House Divided, pp. 141-152.
- After statehood was achieved the counties of Jefferson and Berkeley were annexed to the new state late in 1863. Charles H. Ambler and Festus P. Summers, West Virginia: The Mountain State ch 15-20.
- Otis K. Rice, West Virginia: A History (1985) ch 12-14
- Aaron Sheehan-Dean, "Everyman's War: Confederate Enlistment in Civil War Virginia," Civil War History, March 2004, Vol. 50 Issue 1, pp 5-26
- Pryor, Elizabeth Brown (2011-04-19). "The General in His Study". Disunion. The New York Times. Retrieved April 19, 2011.
- Ambler, Charles, A History of West Virginia, Prentice-Hall, 1933.
- Ayers, Edward L. In the Presence of Mine Enemies: The Civil War in the Heart of America 1859-1863. (2003) ISBN 0-393-32601-2.
- Blair, William. Virginia's Private War: Feeding Body and Soul in the Confederacy, 1861-1865 (1998) online edition
- Crofts, Daniel W. Reluctant Confederates: Upper South Unionists in the Secession Crisis (1989)
- Curry, Richard Orr, A House Divided: A Study of Statehood Politics and the Copperhead Movement in West Virginia (1964).
- Davis, William C. and James I. Robertson Jr., eds. Virginia at War, 1865 (vol 5; University Press of Kentucky; 2011) 237 pages; Virginia at War, 1864 (2009); Virginia at War, 1863 (2008); Virginia at War, 1862 (2007); Virginia at War, 1861 (2005)
- Furgurson, Ernest B. Ashes of Glory: Richmond at War. (1996) ISBN 0-679-42232-3
- Kerr-Ritchie, Jeffrey R. Freedpeople in the Tobacco South: Virginia, 1860-1900 (1999)
- Klein, Maury. Days of Defiance: Sumter, Secession, and the Coming of the Civil War. (1997) ISBN 0-679-44747-4.
- Lebsock, Suzanne D. "A Share of Honor": Virginia Women, 1600-1945 (1984)
- Lewis, Virgil A. and Comstock, Jim, History and Government of West Virginia, 1973.
- Link, William A. Roots of Secession: Slavery and Politics in Antebellum Virginia. (2003) ISBN 0-8078-2771-1.
- McPherson, James M. Battle Cry of Freedom. (1988) ISBN 0-345-35942-9.
- Noe, Kenneth W. Southwest Virginia's Railroad: Modernization and the Sectional Crisis (1994)
- Potter, David M. Lincoln and His Party in the Secession Crisis. (1942) ISBN 0-8071-2027-8.
- Randall, J. G. and David Donald, Civil War and Reconstruction, (1966).
- Riggs, David F. "Robert Young Conrad and the Ordeal of Secession."The Virginia Magazine of History and Biography, Vol. 86, No. 3 (July 1978), pp. 259–274.
- Robertson, James I. Jr. "The Virginia State Convention" in Virginia at War 1861. editors Davis, William C. and Robertson, James I. Jr. (2005) ISBN 0-8131-2372-0.
- Robertson, James I. Civil War Virginia: Battleground for a Nation, University of Virginia Press, Charlottesville, Virginia 1993 ISBN 0-8139-1457-4; 197 pages excerpt and text search
- Shanks, Henry T. The Secession Movement in Virginia, 1847-1861 (1934) online edition
- Sheehan-Dean, Aaron Charles. Why Confederates Fought: Family and Nation in Civil War Virginia (2007) 291 pages excerpt and text search
- Simpson, Craig M. A Good Southerner: The Life of Henry A. Wise of Virginia (1985), wide-ranging political history
- Turner, Charles W. "The Virginia Central Railroad at War, 1861-1865," Journal of Southern History (1946) 12#4 pp. 510–533 in JSTOR
- Wills, Brian Steel. The War Hits Home: The Civil War in Southeastern Virginia (2001) 345 pages; excerpt and text search
- Union or Secession: Virginians Decide at the Library of Virginia
- Virginia Convention of 1861 in Encyclopedia Virginia
- Guerilla Warfare in Virginia During the Civil War in Encyclopedia Virginia
- Free Blacks During the Civil War in Encyclopedia Virginia
- Refugees During the Civil War in Encyclopedia Virginia
- Poverty and Poor Relief During the Civil War in Encyclopedia Virginia
- Speculation During the Civil War in Encyclopedia Virginia
- Weather During the Civil War in Encyclopedia Virginia
- Confederate Impressment During the Civil War in Encyclopedia Virginia
- Religion During the Civil War in Encyclopedia Virginia
- Twenty-Slave Law in Encyclopedia Virginia
- National Park Service map of Civil War sites in Virginia: 1861-62
- National Park Service map of Civil War sites in Virginia: 1863
- National Park Service map of Civil War sites in Virginia: 1864
- National Park Service map of Civil War sites in Virginia: 1865 | https://en.wikipedia.org/wiki/Virginia_in_the_American_Civil_War |
Public Broadcasting Service
Show of Force Productions
This video-based resource examines factors that affect the amplitude and period of a pendulum. It provides a highly visual way to explore pendulum motion as a trapeze artist swings on a bar/rope system. Watch what happens to the pendulum period as her center of mass changes when she sits on the bar or moves to the rope below. The accompanying activity guide introduces the mathematics associated with pendulum motion. This resource includes a teacher's guide with tips on how to incorporate the video into instruction, discussion questions, and accompanying classroom activities.
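As a rough guide to the mathematics the activity guide introduces, the small-angle period of a simple pendulum depends only on its effective length and gravity, T = 2π√(L/g). The following minimal Python sketch (not part of the PBS resource; the lengths are purely illustrative) shows how lowering the swinger's center of mass, which lengthens the effective pendulum, increases the period:

```python
import math

def pendulum_period(length_m, g=9.81):
    """Small-angle period of a simple pendulum: T = 2*pi*sqrt(L/g)."""
    return 2 * math.pi * math.sqrt(length_m / g)

# Illustrative effective lengths as the artist's center of mass moves
# from the bar down toward the rope below.
for L in (2.0, 4.0, 6.0):
    print(f"L = {L:.1f} m  ->  T = {pendulum_period(L):.2f} s")
```

Because the period grows as the square root of the effective length, moving from the bar to the rope below makes each swing noticeably slower.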
This resource was developed in conjunction with the PBS series Circus. See Related Materials for a link to the full set of 8 Circus Physics video-based lessons.
Metadata instance created November 19, 2013 by Caroline Hall
AAAS Benchmark Alignments (2008 Version)
2. The Nature of Mathematics
2B. Mathematics, Science, and Technology
9-12: 2B/H3. Mathematics provides a precise language to describe objects and events and the relationships among them. In addition, mathematics provides tools for solving problems, analyzing data, and making logical arguments.
4. The Physical Setting
9-12: 4F/H4. Whenever one thing exerts a force on another, an equal amount of force is exerted back on it.
9-12: 4F/H8. Any object maintains a constant speed and direction of motion unless an unbalanced outside force acts on it.
Next Generation Science Standards
Motion and Stability: Forces and Interactions (HS-PS2)
Students who demonstrate understanding can: (9-12)
Analyze data to support the claim that Newton's second law of motion describes the mathematical relationship among the net force on a macroscopic object, its mass, and its acceleration. (HS-PS2-1)
Disciplinary Core Ideas (K-12)
Forces and Motion (PS2.A)
Newton's second law accurately predicts changes in the motion of macroscopic objects. (9-12)
Types of Interactions (PS2.B)
The gravitational force of Earth acting on an object near Earth's surface pulls that object toward the planet's center. (5)
Relationship Between Energy and Forces (PS3.C)
When two objects interact, each one exerts a force on the other that can cause energy to be transferred to or from the object. (6-8)
Common Core State Standards for Mathematics Alignments
High School — Algebra (9-12)
Seeing Structure in Expressions (9-12)
A-SSE.1.a Interpret parts of an expression, such as terms, factors, and coefficients.
Creating Equations (9-12)
A-CED.1 Create equations and inequalities in one variable and use them to solve problems. Include equations arising from linear and quadratic functions, and simple rational and exponential functions.
A-CED.4 Rearrange formulas to highlight a quantity of interest, using the same reasoning as in solving equations.
Show of Force Productions. Circus Physics: Pendulum Motion. Arlington: Public Broadcasting Service, 2010. http://www.pbs.org/opb/circus/classroom/circus-physics/pendulum-motion/ (accessed 9 February 2016).
In microeconomic theory, the opportunity cost of a choice is the value of the best alternative forgone, where a choice needs to be made between several mutually exclusive alternatives given limited resources. Assuming the best choice is made, it is the "cost" incurred by not enjoying the benefit that would be had by taking the second best choice available. The New Oxford American Dictionary defines it as "the loss of potential gain from other alternatives when one alternative is chosen". Opportunity cost is a key concept in economics, and has been described as expressing "the basic relationship between scarcity and choice". The notion of opportunity cost plays a crucial part in ensuring that scarce resources are used efficiently. Thus, opportunity costs are not restricted to monetary or financial costs: the real cost of output forgone, lost time, pleasure or any other benefit that provides utility should also be considered opportunity costs.
The term was coined in 1914 by Austrian economist Friedrich von Wieser in his book Theorie der gesellschaftlichen Wirtschaft. The idea had been anticipated by previous writers including Benjamin Franklin and Frédéric Bastiat. Franklin coined the phrase "Time is Money", and spelt out the associated opportunity cost reasoning in his “Advice to a Young Tradesman” (1746): “Remember that Time is Money. He that can earn Ten Shillings a Day by his Labour, and goes abroad, or sits idle one half of that Day, tho’ he spends but Sixpence during his Diversion or Idleness, ought not to reckon That the only Expence; he has really spent or rather thrown away Five Shillings besides.”
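Franklin's arithmetic can be restated directly. The sketch below (a hypothetical bookkeeping exercise, using 12 pence to the shilling) totals the explicit sixpence and the five shillings of forgone earnings:

```python
# Franklin's "Advice to a Young Tradesman" example, restated.
PENCE_PER_SHILLING = 12

daily_wage = 10 * PENCE_PER_SHILLING   # ten shillings a day, in pence
forgone_earnings = daily_wage // 2     # half a day idle: five shillings forgone
explicit_spending = 6                  # the sixpence actually spent

real_expense = forgone_earnings + explicit_spending
print(real_expense / PENCE_PER_SHILLING)  # 5.5 shillings, not just sixpence
```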
Opportunity costs in production
Explicit costs are opportunity costs that involve direct monetary payment by producers. The explicit opportunity cost of the factors of production not already owned by a producer is the price that the producer has to pay for them. For instance, if a firm spends $100 on electrical power consumed, its explicit opportunity cost is $100. This cash expenditure represents a lost opportunity to purchase something else with the $100.
Implicit costs (also called implied, imputed or notional costs) are the opportunity costs not reflected in cash outflow but implied by the failure of the firm to allocate its existing (owned) resources, or factors of production, to the best alternative use. For example, a manufacturer has previously purchased 1,000 tons of steel and the machinery to produce a widget. The implicit part of the opportunity cost of producing the widget is the revenue lost by not selling the steel and not renting out the machinery, instead of using them for production.
One example of opportunity cost is in the evaluation of "foreign" (to the USA) buyers and their allocation of cash assets in real estate or other types of investment vehicles. With the downturn (circa June/July 2015) of the Chinese stock market, more and more Chinese investors from Hong Kong and Taiwan turned to the United States as an alternative vehicle for their investment dollars; the opportunity cost of leaving their money in the Chinese stock or real estate markets was judged too high relative to yields available in the USA real estate market.
Note that opportunity cost is not the sum of the available alternatives when those alternatives are, in turn, mutually exclusive of each other; it is the value of the single next-best alternative given up when selecting the best option. The opportunity cost of a city's decision to build a hospital on its vacant land is the loss of the land for a sporting center, or the inability to use the land for a parking lot, or the money that could have been made from selling the land. Use for any one of those purposes would preclude all of the others.
- Suppose you have a free ticket to a concert by Band X. The ticket has no resale value. On the night of the concert your next-best alternative entertainment is a performance by Band Y for which the tickets cost $40. You like Band Y and would usually be willing to pay $50 for a ticket to see them. What is the opportunity cost of using your free ticket and seeing Band X instead of Band Y?
- The benefit you forgo (that is, the value to you) is the benefit of seeing Band Y. As well as the gross benefit of $50 for seeing Band Y, you also forgo the actual $40 of cost, so the net benefit you forgo is $10. So, the opportunity cost of seeing Band X is $10 (restated in the sketch below).
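A minimal sketch of the same computation (the function name and dollar figures are illustrative, not a standard economics API):

```python
def opportunity_cost(benefit_forgone, cost_avoided):
    """Net benefit given up by skipping the next-best alternative."""
    return benefit_forgone - cost_avoided

# Seeing Band X for free means forgoing Band Y:
# you give up $50 of enjoyment but also avoid the $40 ticket price.
print(opportunity_cost(benefit_forgone=50, cost_avoided=40))  # 10
```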
- Budget constraint
- Economic value added
- Fear of missing out
- Opportunity cost of capital
- Parable of the broken window
- Production-possibility frontier
- There Ain't No Such Thing As A Free Lunch
- Time management
- Best alternative to a negotiated agreement
- "Opportunity Cost". Investopedia. Retrieved 2010-09-18.
- James M. Buchanan (2008). "Opportunity cost". The New Palgrave Dictionary of Economics Online (Second ed.). Retrieved 2010-09-18.
- "Opportunity Cost". Economics A-Z. The Economist. Retrieved 2010-09-18.
- Friedrich von Wieser (1927). A. Ford Hinrichs (translator), ed. Social Economics (PDF). New York: Adelphi. Retrieved 2011-10-07.
• Friedrich von Wieser (November 1914). Theorie der gesellschaftlichen Wirtschaft [Theory of Social Economics] (in German). Original publication.
- Explicit vs. Implicit Cost
- Gittins, Ross (19 April 2014). "At the coal face economists are struggling to measure up". The Sydney Morning Herald. Retrieved 23 April 2014.
- Henderson, David R. (2008). "Opportunity Cost". Concise Encyclopedia of Economics (2nd ed.). Indianapolis: Library of Economics and Liberty. ISBN 978-0865976658. OCLC 237794267.
- Roberts, Russell (February 5, 2007). "Getting the Most Out of Life: The Concept of Opportunity Cost". Library of Economics and Liberty.
In fluid dynamics, a vortex is a region in a fluid in which the flow rotates around an axis line, which may be straight or curved. The plural of vortex is either vortices or vortexes. Vortices form in stirred fluids, and may be observed in phenomena such as smoke rings, whirlpools in the wake of a boat, or the winds surrounding a tornado or dust devil.
Vortices are a major component of turbulent flow. The distribution of velocity, vorticity (the curl of the flow velocity), as well as the concept of circulation are used to characterize vortices. In most vortices, the fluid flow velocity is greatest next to its axis and decreases in inverse proportion to the distance from the axis.
In the absence of external forces, viscous friction within the fluid tends to organize the flow into a collection of irrotational vortices, possibly superimposed on larger-scale flows, including larger-scale vortices. Once formed, vortices can move, stretch, twist, and interact in complex ways. A moving vortex carries with it some angular and linear momentum, energy, and mass.
A key concept in the dynamics of vortices is the vorticity, a vector that describes the local rotary motion at a point in the fluid, as would be perceived by an observer that moves along with it. Conceptually, the vorticity could be observed by placing a tiny rough ball at the point in question, free to move with the fluid, and observing how it rotates about its center. The direction of the vorticity vector is defined to be the direction of the axis of rotation of this imaginary ball (according to the right-hand rule) while its length is twice the ball's angular velocity. Mathematically, the vorticity is defined as the curl (or rotational) of the velocity field of the fluid, usually denoted by ω and expressed by the vector analysis formula ω = ∇ × u, where ∇ is the nabla (del) operator and u is the local flow velocity.
The local rotation measured by the vorticity must not be confused with the angular velocity vector of that portion of the fluid with respect to the external environment or to any fixed axis. In a vortex, in particular, ω may be opposite to the mean angular velocity vector of the fluid relative to the vortex's axis.
In theory, the speed u of the particles (and, therefore, the vorticity) in a vortex may vary with the distance r from the axis in many ways. There are two important special cases, however (a numerical check follows the list):
- If the fluid rotates like a rigid body – that is, if the angular rotational velocity Ω is uniform, so that u increases proportionally to the distance r from the axis – a tiny ball carried by the flow would also rotate about its center as if it were part of that rigid body. In such a flow, the vorticity is the same everywhere: its direction is parallel to the rotation axis, and its magnitude is equal to twice the uniform angular velocity Ω of the fluid around the center of rotation.
- If the particle speed u is inversely proportional to the distance r from the axis, then the imaginary test ball would not rotate over itself; it would maintain the same orientation while moving in a circle around the vortex axis. In this case the vorticity is zero at any point not on that axis, and the flow is said to be irrotational.
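As a numerical check of these two special cases, the following NumPy sketch (not from the original article; grid size and parameter values are arbitrary) estimates the out-of-plane vorticity ω_z = ∂v/∂x − ∂u/∂y by finite differences for a rigid-body flow and for a free vortex:

```python
import numpy as np

def vorticity_z(u, v, dx, dy):
    """Estimate omega_z = dv/dx - du/dy on a uniform grid."""
    return np.gradient(v, dx, axis=1) - np.gradient(u, dy, axis=0)

x = np.linspace(-2.0, 2.0, 401)
X, Y = np.meshgrid(x, x, indexing="xy")
dx = dy = x[1] - x[0]
r2 = X**2 + Y**2 + 1e-12          # softened to avoid division by zero on the axis

Omega, Gamma = 3.0, 1.0           # illustrative angular velocity and circulation

# Rigid-body rotation: u = -Omega*y, v = Omega*x  ->  omega_z = 2*Omega everywhere.
w_rigid = vorticity_z(-Omega * Y, Omega * X, dx, dy)

# Free (irrotational) vortex: u_theta = Gamma/(2*pi*r)  ->  omega_z = 0 off the axis.
u_free = -Gamma / (2 * np.pi) * Y / r2
v_free = Gamma / (2 * np.pi) * X / r2
w_free = vorticity_z(u_free, v_free, dx, dy)

print(w_rigid.mean())                    # ~6.0, i.e. twice Omega
print(np.abs(w_free[r2 > 0.25]).max())   # ~0 away from the core
```

The rigid-body field returns a uniform vorticity of about 2Ω, while the free-vortex field is essentially zero everywhere away from the axis, matching the two cases described above.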
In the absence of external forces, a vortex usually evolves fairly quickly toward the irrotational flow pattern, where the flow velocity u is inversely proportional to the distance r. For that reason, irrotational vortices are also called free vortices.
For an irrotational vortex, the circulation is zero along any closed contour that does not enclose the vortex axis, and has a fixed value, Γ, for any contour that does enclose the axis once. The tangential component of the particle velocity is then u_θ = Γ/(2πr). The angular momentum per unit mass relative to the vortex axis is therefore constant, r u_θ = Γ/(2π).
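For reference, the irrotational-vortex relations just quoted can be collected in one display (reconstructed from the surrounding text, with Γ the circulation):

```latex
\[
\oint_{C} \vec{u}\cdot d\vec{\ell} =
\begin{cases}
0, & C \text{ does not enclose the axis},\\
\Gamma, & C \text{ encloses the axis once},
\end{cases}
\qquad
u_{\theta} = \frac{\Gamma}{2\pi r},
\qquad
r\,u_{\theta} = \frac{\Gamma}{2\pi}.
\]
```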
However, the ideal irrotational vortex flow is not physically realizable, since it would imply that the particle speed (and hence the force needed to keep particles in their circular paths) would grow without bound as one approaches the vortex axis. Indeed, in real vortices there is always a core region surrounding the axis where the particle velocity stops increasing and then decreases to zero as r goes to zero. Within that region, the flow is no longer irrotational: the vorticity becomes non-zero, with direction roughly parallel to the vortex axis. The Rankine vortex is a model that assumes rigid-body rotational flow where r is less than a fixed distance r0, and irrotational flow outside that core region. The Lamb–Oseen vortex model is an exact solution of the Navier–Stokes equations governing fluid flows; it assumes cylindrical symmetry, for which the tangential velocity is u_θ(r, t) = Γ/(2πr) · (1 − e^(−r²/(4νt))), where ν is the kinematic viscosity.
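A short sketch comparing the two model velocity profiles (illustrative parameter values; ν is the kinematic viscosity and r0 the Rankine core radius):

```python
import numpy as np

Gamma = 1.0            # circulation (illustrative)
r0 = 0.1               # Rankine core radius, m (illustrative)
nu, t = 1.5e-5, 10.0   # kinematic viscosity of air and an elapsed time

r = np.linspace(1e-4, 0.5, 1000)

# Rankine vortex: rigid-body core for r < r0, free vortex outside.
u_rankine = np.where(r < r0,
                     Gamma * r / (2 * np.pi * r0**2),
                     Gamma / (2 * np.pi * r))

# Lamb-Oseen vortex: viscosity smooths the core over time.
u_lamb_oseen = Gamma / (2 * np.pi * r) * (1 - np.exp(-r**2 / (4 * nu * t)))

# Both profiles peak near the core edge and decay like 1/r far from the axis.
print(u_rankine.max(), u_lamb_oseen.max())
```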
In an irrotational vortex, fluid moves at different speeds in adjacent streamlines, so there is friction and therefore energy loss throughout the vortex, especially near the core.
A rotational vortex – one which has non-zero vorticity away from the core – can be maintained indefinitely in that state only through the application of some extra force that is not generated by the fluid motion itself.
For example, if a water bucket is spun at constant angular speed Ω about its vertical axis, the water will eventually rotate in rigid-body fashion. The particles will then move along circles, with velocity u equal to Ωr. In that case, the free surface of the water will assume a parabolic shape.
In this situation, the rigid rotating enclosure provides an extra force, namely an extra pressure gradient in the water, directed inwards, that prevents evolution of the rigid-body flow to the irrotational state.
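The parabolic free surface mentioned above follows from a simple force balance; a sketch of the derivation (a standard result, not quoted from this article):

```latex
% Pressure gradients in rigid-body rotation at angular speed Omega:
\[
\frac{\partial p}{\partial r} = \rho\,\Omega^{2} r,
\qquad
\frac{\partial p}{\partial z} = -\rho g .
\]
% Requiring constant pressure on the free surface z = h(r) gives a paraboloid:
\[
h(r) = h_{0} + \frac{\Omega^{2} r^{2}}{2g}.
\]
```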
In a stationary vortex, the typical streamline (a line that is everywhere tangent to the flow velocity vector) is a closed loop surrounding the axis; and each vortex line (a line that is everywhere tangent to the vorticity vector) is roughly parallel to the axis. A surface that is everywhere tangent to both flow velocity and vorticity is called a vortex tube. In general, vortex tubes are nested around the axis of rotation. The axis itself is one of the vortex lines, a limiting case of a vortex tube with zero diameter.
According to Helmholtz's theorems, a vortex line cannot start or end in the fluid – except momentarily, in non-steady flow, while the vortex is forming or dissipating. In general, vortex lines (in particular, the axis line) are either closed loops or end at the boundary of the fluid. A whirlpool is an example of the latter, namely a vortex in a body of water whose axis ends at the free surface. A vortex tube whose vortex lines are all closed will be a closed torus-like surface.
A newly created vortex will promptly extend and bend so as to eliminate any open-ended vortex lines. For example, when an airplane engine is started, a vortex usually forms ahead of each propeller, or the turbofan of each jet engine. One end of the vortex line is attached to the engine, while the other end usually stretches out and bends until it reaches the ground.
When vortices are made visible by smoke or ink trails, they may seem to have spiral pathlines or streamlines. However, this appearance is often an illusion and the fluid particles are moving in closed paths. The spiral streaks that are taken to be streamlines are in fact clouds of the marker fluid that originally spanned several vortex tubes and were stretched into spiral shapes by the non-uniform flow velocity distribution.
Pressure in a vortex
The fluid motion in a vortex creates a dynamic pressure (in addition to any hydrostatic pressure) that is lowest in the core region, closest to the axis, and increases as one moves away from it, in accordance with Bernoulli's Principle. One can say that it is the gradient of this pressure that forces the fluid to follow a curved path around the axis.
In a rigid-body vortex flow of a fluid with constant density, the dynamic pressure is proportional to the square of the distance r from the axis. In a constant gravity field, the free surface of the liquid, if present, is a concave paraboloid.
In an irrotational vortex flow with constant fluid density and cylindrical symmetry, the dynamic pressure varies as P∞ − K/r², where P∞ is the limiting pressure infinitely far from the axis. This formula provides another constraint for the extent of the core, since the pressure cannot be negative. The free surface (if present) dips sharply near the axis line, with depth inversely proportional to r².
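The constant K can be made explicit with Bernoulli's principle; assuming constant density ρ and the free-vortex velocity u_θ = Γ/(2πr), a sketch of the step:

```latex
\[
p(r) + \tfrac{1}{2}\rho\,u_{\theta}^{2} = P_{\infty}
\;\Longrightarrow\;
p(r) = P_{\infty} - \frac{\rho\,\Gamma^{2}}{8\pi^{2} r^{2}},
\qquad
K = \frac{\rho\,\Gamma^{2}}{8\pi^{2}} .
\]
```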
The core of a vortex in air is sometimes visible because of a plume of water vapor caused by condensation in the low pressure and low temperature of the core; the spout of a tornado is an example. When a vortex line ends at a boundary surface, the reduced pressure may also draw matter from that surface into the core. For example, a dust devil is a column of dust picked up by the core of an air vortex attached to the ground. A vortex that ends at the free surface of a body of water (like the whirlpool that often forms over a bathtub drain) may draw a column of air down the core. The forward vortex extending from a jet engine of a parked airplane can suck water and small stones into the core and then into the engine.
Vortices need not be steady-state features; they can move and change shape. In a moving vortex, the particle paths are not closed, but are open, loopy curves like helices and cycloids. A vortex flow might also be combined with a radial or axial flow pattern. In that case the streamlines and pathlines are not closed curves but spirals or helices, respectively. This is the case in tornadoes and in drain whirlpools. A vortex with helical streamlines is said to be solenoidal.
As long as the effects of viscosity and diffusion are negligible, the fluid in a moving vortex is carried along with it. In particular, the fluid in the core (and matter trapped by it) tends to remain in the core as the vortex moves about. This is a consequence of Helmholtz's second theorem. Thus vortices (unlike surface and pressure waves) can transport mass, energy and momentum over considerable distances compared to their size, with surprisingly little dispersion. This effect is demonstrated by smoke rings and exploited in vortex ring toys and guns.
Two or more vortices that are approximately parallel and circulating in the same direction will attract and eventually merge to form a single vortex, whose circulation will equal the sum of the circulations of the constituent vortices. For example, an airplane wing that is developing lift will create a sheet of small vortices at its trailing edge. These small vortices merge to form a single wingtip vortex, less than one wing chord downstream of that edge. This phenomenon also occurs with other active airfoils, such as propeller blades. On the other hand, two parallel vortices with opposite circulations (such as the two wingtip vortices of an airplane) tend to remain separate.
Vortices contain substantial energy in the circular motion of the fluid. In an ideal fluid this energy can never be dissipated and the vortex would persist forever. However, real fluids exhibit viscosity and this dissipates energy very slowly from the core of the vortex. It is only through dissipation of a vortex due to viscosity that a vortex line can end in the fluid, rather than at the boundary of the fluid.
When the particle velocities are constrained to be parallel to a fixed plane, one can ignore the space dimension perpendicular to that plane, and model the flow as a two-dimensional flow velocity field on that plane. Then the vorticity vector is always perpendicular to that plane, and can be treated as a scalar. This assumption is sometimes made in meteorology, when studying large-scale phenomena like hurricanes.
The behavior of vortices in such contexts is qualitatively different in many ways; for example, it does not allow the stretching of vortices that is often seen in three dimensions.
- In the hydrodynamic interpretation of the behaviour of electromagnetic fields, the acceleration of electric fluid in a particular direction creates a positive vortex of magnetic fluid. This in turn creates around itself a corresponding negative vortex of electric fluid. Exact solutions to classical nonlinear magnetic equations include the Landau-Lifshitz equation, the continuum Heisenberg model, the Ishimori equation, and the nonlinear Schrödinger equation.
- Bubble rings are underwater vortex rings whose core traps a ring of bubbles, or a single donut-shaped bubble. They are sometimes created by dolphins and whales.
- The lifting force of aircraft wings, propeller blades, sails, and other airfoils can be explained by the creation of a vortex superimposed on the flow of air past the wing.
- Aerodynamic drag can be explained in large part by the formation of vortices in the surrounding fluid that carry away energy from the moving body.
- Large whirlpools can be produced by ocean tides in certain straits or bays. Examples are Charybdis of classical mythology in the Straits of Messina, Italy; the Naruto whirlpools of Nankaido, Japan; and the Maelstrom at Lofoten, Norway.
- Vortices in the Earth's atmosphere are important phenomena for meteorology. They include mesocyclones on the scale of a few miles, tornados, waterspouts, and hurricanes. These vortices are often driven by temperature and humidity variations with altitude. The sense of rotation of hurricanes is influenced by the Earth's rotation. Another example is the Polar vortex, a persistent, large-scale cyclone centered near the Earth's poles, in the middle and upper troposphere and the stratosphere.
- Vortices are prominent features of the atmospheres of other planets. They include the permanent Great Red Spot on Jupiter and the intermittent Great Dark Spot on Neptune, as well as the Martian dust devils and the North Polar Hexagon of Saturn.
- Sunspots are dark regions on the Sun's visible surface (photosphere) marked by a lower temperature than its surroundings, and intense magnetic activity.
- The accretion disks of black holes and other massive gravitational sources.
- Artificial gravity
- Batchelor vortex
- Biot–Savart law
- Coordinate rotation
- Cyclonic separation
- Helmholtz's theorems
- History of fluid mechanics
- Horseshoe vortex
- Kelvin–Helmholtz instability
- Quantum vortex
- Shower-curtain effect
- Strouhal number
- Vile Vortices
- Von Kármán vortex street
- Vortex engine
- Vortex tube
- Vortex cooler
- VORTEX projects
- Vortex shedding
- Vortex stretching
- Vortex induced vibration
- Ting, L. (1991). Viscous Vortical Flows. Lecture notes in physics. Springer-Verlag. ISBN 3-540-53713-9.
- Kida, Shigeo (2001). Life, Structure, and Dynamical Role of Vortical Motion in Turbulence (PDF). IUTAM Symposium on Tubes, Sheets and Singularities in Fluid Dynamics. Zakopane, Poland.
- "vortex". Oxford Dictionaries Online (ODO). Oxford University Press. Retrieved 2015-08-29.
- "vortex". Merriam-Webster Online. Merriam-Webster, Inc. Retrieved 2015-08-29.
- Vallis, Geoffrey (1999). Geostrophic Turbulence: The Macroturbulence of the Atmosphere and Ocean Lecture Notes (PDF). Lecture notes. Princeton University. p. 1. Retrieved 2012-09-26.
- Clancy 1975, sub-section 7.5
- Loper, David E. (November 1966). An analysis of confined magnetohydrodynamic vortex flows (PDF) (NASA contractor report NASA CR-646). Washington: National Aeronautics and Space Administration. LCCN 67060315.
- Batchelor, G.K. (1967). An Introduction to Fluid Dynamics. Cambridge Univ. Press. Ch. 7 et seq. ISBN 9780521098175.
- Falkovich, G. (2011). Fluid Mechanics, a short course for physicists. Cambridge University Press. ISBN 978-1-107-00575-4.
- Clancy, L.J. (1975). Aerodynamics. London: Pitman Publishing Limited. ISBN 0-273-01120-0.
- De La Fuente Marcos, C.; Barge, P. (2001). "The effect of long-lived vortical circulation on the dynamics of dust particles in the mid-plane of a protoplanetary disc". Monthly Notices of the Royal Astronomical Society 323 (3): 601–614. Bibcode:2001MNRAS.323..601D. doi:10.1046/j.1365-8711.2001.04228.x.
- Optical Vortices
- Video of two water vortex rings colliding (MPEG)
- Chapter 3 Rotational Flows: Circulation and Turbulence
- Vortical Flow Research Lab (MIT) – Study of flows found in nature and part of the Department of Ocean Engineering. | https://en.wikipedia.org/wiki/Vortices |
4.28125 | An ethical dilemma occurs when two or more specific ethical ideals are at odds and you must make a decision, founded on your logical assessment, about which ethical ideal is more important. Ethical dilemmas allow you to investigate ethical questions from an analytical point of view and make a final determination for yourself. In a paper format, this process is the same and you must make sure that each point is clear and logical.
Outline the specific ethical dilemma to identify the focus of your topic. If your topic does not contain a specific ethical dilemma, create one for yourself to use as an example through your writing. Make sure that your dilemma is ethical in nature by ensuring that it challenges two separate ethical assertions and forces you to decide between the two. For instance, you may decide to use the ethical dilemma of a hungry person deciding to steal food for himself and his family.
Determine which ethical standards are being challenged and in what way they are being challenged, to outline your ethical dilemma. List all significant elements of your ethical dilemma. As an example, the question of whether or not to steal for survival challenges the ethical forbearance against theft contrasted with the need for survival. You can consider other ethical elements, such as the difference between the loss of food by the store owner against the loss of life by the thief and his family.
Create your paper outline and arrange the elements of your ethical dilemma into individual sections. Make sure that your paper’s logic flows freely through your paper in order to allow you to arrive at a conclusion. For instance, your first section may question the ethical forbearance against theft and whether it is absolute. Your second section may question the right to survival and whether it is absolute. Your final section may compare the difference in severity between the shopkeeper who loses some food and the family that could perish for lack of it. This could become complex because the shopkeeper could end up in the same situation if enough of his stock is stolen.
Write your paper with a strong introduction that grabs your reader’s attention and establishes your paper’s thesis. Write each section individually, clearly examining your points throughout, and include clear transition statements between each of your sections.
Create a conclusion that brings all of your points together and establishes a final assessment of your ethical dilemma, supported by the points you made in your paper.
| http://classroom.synonym.com/write-paper-ethical-dilemmas-4224.html |
4.1875 | A Lunar Nuclear Reactor
Tests prove the feasibility of using nuclear reactors to provide electricity on the moon and Mars.
Researchers at NASA and the Department of Energy recently tested key technologies for developing a nuclear fission reactor that could power a human outpost on the moon or Mars. The tests prove that the agencies could build a “safe, reliable, and efficient” system by 2020, the year NASA plans to return humans to the moon.
A fission reactor works by splitting atoms and releasing energy in the form of heat, which is converted into electricity. The idea of using nuclear power in space dates back to the late 1950s, when nuclear reactors were considered for providing propulsion through Project Orion. In the 1960s a series of compact, experimental space nuclear reactors were developed by NASA under the Systems Nuclear Auxiliary Power program. But public safety concerns and an international treaty banning nuclear weapons in space stopped development.
Now nuclear power is being considered for lunar and Mars missions because, unlike alternatives such as solar power, it can provide constant energy, a necessity for human life-support systems, recharging rovers, and mining for resources. Solar power systems would also require the use of energy storage devices like batteries or fuel cells, adding unwanted mass to the system. Solar power is further limited because the moon is dark for up to 14 days at a time and has deep craters that can obscure the sun. Mars is farther away from the sun than either the Earth or the moon, so less solar power can be harvested there.
The new nuclear power system is part of a NASA project started in 2006, called Fission Surface Power, that is examining small reactors designed for use on other planets. While nuclear power remains controversial, the researchers say that the reactor would be designed to be completely safe and would be buried a safe distance from the astronauts to shield them from any radiation it would generate.
The recent tests examined technologies that would see a nuclear reactor coupled with a Stirling engine capable of producing 40 kilowatts of power–enough to run a future lunar or Mars outpost.
“We are not building a system that needs hundreds of gigawatts of power like those that produce electricity for our cities,” says Don Palac, the project manager at NASA Glenn Research Center in Cleveland, OH. The system needs to be cheap, safe, and robust and “our recent tests demonstrated that we can successfully build that,” says Palac.
To generate electricity, the researchers used a liquid metal to transfer the heat from the reactor to the Stirling engine, which uses gas pressure to convert heat into the energy needed to generate electricity. For the tests, the researchers used a non-nuclear heat source. The liquid metal was a sodium potassium mixture that has been used in the past to transfer heat from a reactor to a generator, says Palac, but this is the first time this mixture has been used with a Stirling engine.
“They are very efficient and robust, and we believe [it] can last for eight years unattended,” says Lee Mason, the principal investigator of the project at Glenn. The system performed better than expected, Palac says, generating 2.3 kilowatts of power at a steady pace.
The researchers also developed a lightweight radiator panel to cool the system and dissipate the heat from the reactor. The prototype panel is approximately six feet by nine feet–one-twentieth the size required for a full-scale system. Heat from a water-cooling system is circulated to the radiator where it dissipates.
The researchers tested the radiator panel in a vacuum chamber at Glenn that replicates the lack of atmosphere and the extreme temperatures on the moon–from over 100 degrees Celsius during the day to below minus 100 degrees Celsius at night. The panel dissipated six kilowatts of energy, more than expected–a “very successful test,” says Palac. On the moon, the panel must also survive the dusty environment caused by the regolith.
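As a rough plausibility check (not a figure from the article), the Stefan–Boltzmann law lets us estimate how much heat a panel of roughly this size could radiate. The emissivity and temperatures below are illustrative assumptions, not NASA's numbers:

    SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W / (m^2 * K^4)

    def radiated_power(area_m2, panel_temp_k, sink_temp_k, emissivity):
        """Net heat radiated by a panel to its surroundings, in watts."""
        return emissivity * SIGMA * area_m2 * (panel_temp_k**4 - sink_temp_k**4)

    # Prototype panel: about 6 ft x 9 ft (roughly 1.8 m x 2.7 m), radiating from both faces.
    area = 2 * (1.8 * 2.7)

    # Assumed operating point: a warm, water-cooled panel facing a cold effective sink.
    watts = radiated_power(area, panel_temp_k=350.0, sink_temp_k=200.0, emissivity=0.9)
    print(f"Estimated heat rejection: {watts / 1000:.1f} kW")  # about 6.7 kW

The estimate lands in the same ballpark as the six kilowatts reported from the vacuum-chamber test, which is all a back-of-the-envelope sketch like this can be expected to show.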
Lastly, the researchers tested the performance of the Stirling alternator in a radiation environment at Sandia National Laboratories in Albuquerque, NM. The objective was to test the performance of the motor, ensuring that the materials would not degrade. The alternator was subjected to 20 times the amount of radiation it would expect to see in its lifetime and survived without any significant problems.
Mason says that the tests are very important in showing the feasibility of the system and that the next step is for the researchers to conduct a full system demonstration, by combining a non-nuclear reactor simulator with the Stirling engine and radiator panel. He says that these tests should be completed in 2014.
The researchers are also working on the power transmission and electronics of the system. “A lunar base needs lots of power for things like computers, life support, and to heat up rocks to get out resources like oxygen and hydrogen,” says Ross Radel, a senior member of the technical staff and part of the advanced nuclear concepts group at Sandia. His group is working on the systems dynamic analysis, a computer model that predicts how the reactor will perform during testing. “Nuclear is a stepping stone to move further out into manned space exploration,” says Radel.
“It is a fascinating project and the only possible method of providing power for a manned trip to Mars,” says Daniel Hollenbach, a researcher in the nuclear science and technology division at Oak Ridge National Laboratory, who was not involved in the project.
Mason says that nuclear fission is one of a number of concepts being tested as a power source for human missions to the moon and Mars, and if selected, he says the technology could be deployed by 2020. | https://www.technologyreview.com/s/414770/a-lunar-nuclear-reactor/ |
4.40625 | High School: Statistics and Probability
Interpreting Categorical and Quantitative Data HSS-ID.A.4
4. Use the mean and standard deviation of a data set to fit it to a normal distribution and to estimate population percentages. Recognize that there are data sets for which such a procedure is not appropriate. Use calculators, spreadsheets, and tables to estimate areas under the normal curve.
Students should already know that the distribution of data can take many forms. It can be symmetric, skewed, distributed uniformly, or follow a normal distribution, also known as a bell curve (think Liberty Bell, not jingle bell), also known as a Gaussian distribution. They don't have to know why a normal distribution has so many different names, although it couldn't hurt.
Students should know that we can describe normal distributions as frequency distributions by expressing the data points as percents instead of true values. For example, a cookie factory can produce and package 20,000 boxes of cookies in a month. Each box of cookies is supposed to weigh 22 ounces, but no cookie is perfect (although we'll argue that every cookie is perfect). The following histogram with 10 bins shows the actual weight of the cookie boxes in a month's worth of production. The data has a mean of 22 and a standard deviation of 1.0.
When given this data in the form of a table, students should be able to find the percentages or probabilities for each value. This results in a relative frequency distribution, where the y-axis of the histogram is between 0 and 1 (or 0 and 100%) and the sum of all the percentages is equal to 1 (or 100%).
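A minimal sketch of that conversion in Python (the bin counts below are invented placeholders, since the article's histogram is not reproduced here): dividing each count by the total turns the frequency distribution into a relative frequency distribution whose values sum to 1.

    # Hypothetical counts for the 10 weight bins (20,000 boxes in total).
    counts = [50, 250, 1000, 3000, 5700, 5700, 3000, 1000, 250, 50]
    total = sum(counts)

    # Relative frequency = count / total; each value is a probability.
    rel_freq = [c / total for c in counts]

    print(rel_freq)
    print(sum(rel_freq))  # 1.0 -- the percentages always add up to 100%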
We can fit a bell curve to this distribution.
When a normal curve is represented as a continuous line, like the one in the figure above, it is called a continuous distribution. The area under the curve of a continuous distribution is always equal to 1.0 (just as adding up all the percentages in the table above gives 100%).
Students should know when it makes sense to talk about many values in terms of a continuous distribution. The weight of the cookie box, for instance, could be anywhere from 17 to 27 ounces (or less, or more), and the weight does not need to fall on an integer value. It could be 21.87 or 22.9 or 0 or 82,729. Now that's a lot of cookie.
Assuming a normal distribution, students should be able to approximate the shape of the continuous distribution given the average and the standard deviation. As the standard deviation increases, the bell shape begins to flatten out because a greater standard deviation suggests the data is spread out more from the mean.
Students should also know that 68% of the data will fall in between the points of inflection (which are exactly ±σ away from the mean). If we increase the distance to 2 standard deviations from the mean (±2σ), we will capture 95% of the data, and moving three standard deviations away captures 99.7% of the data. This is called the empirical rule.
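A quick numerical check of the empirical rule. This sketch uses only the standard normal CDF, written with Python's built-in math.erf:

    import math

    def normal_cdf(z):
        """P(Z <= z) for a standard normal variable."""
        return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

    for k in (1, 2, 3):
        inside = normal_cdf(k) - normal_cdf(-k)
        print(f"within {k} standard deviation(s) of the mean: {inside:.1%}")
    # prints roughly 68.3%, 95.4%, and 99.7%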
The Z-score is the number of standard deviations a data point is away from the mean. It's a useful way to normalize all normal distributions. (And you thought they couldn't get more normal.) Students should be able to calculate a Z-score using the following formula.
Z = (x − μ) / σ

Here, μ is the true mean, σ is the standard deviation, and x is the data point in question. If we pick a cookie box that weighs 25 ounces and we know that μ = 22 and σ = 1.0, we can determine the Z-score as:

Z = (25 − 22) / 1.0 = 3
The weight of this box is 3 standard deviations from the mean. If we know that 99.7% of the data lie within three standard deviations of the mean, what does that suggest about this box of cookies? It probably means we scored cookie box gold!
Students should be able to find the area under a portion of the curve (say, if our chances of finding a cookie box that weighs only 25 ounces or more), using the Z-score and a table to do it. More than that, they should understand that this area represents the probability that a random data point will fall within the described region.
Remind students that σ and Z are different, and to be careful about which table they're using to find the area under the curve (some tables are cumulative starting from -∞, and others start at the mean). They could also be reminded that the area under the entire curve is always 1.
Common sense is never over-rated. With all these variables and numbers and tables, it's easy to get confused. We don't need a calculator or a table to figure out that Prob(Z ≤ 0) = 0.5 because the chances of a random data point being less than the mean (Z ≤ 0) or greater than the mean (Z ≥ 0) are each 50%. If they understand what these variables and numbers and tables actually do, students are less likely to make silly errors and perform unnecessary calculations.
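Putting the pieces together for the cookie example (a sketch; normal_cdf is the same helper as above, repeated so the snippet runs on its own):

    import math

    def normal_cdf(z):
        return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

    mu, sigma = 22.0, 1.0   # mean and standard deviation of the box weights
    x = 25.0                # the box we picked

    z = (x - mu) / sigma    # z-score: 3 standard deviations above the mean
    p_at_least = 1.0 - normal_cdf(z)

    print(z)                     # 3.0
    print(f"{p_at_least:.4%}")   # about 0.13% -- cookie box gold is rare
    print(normal_cdf(0.0))       # 0.5, the common-sense check for P(Z <= 0)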
Here's a video resource teachers can use to explain the normal distribution curve.
- ACT Math 6.4 Pre-Algebra
- ACT Math 6.5 Pre-Algebra
- Mean, Median, and Mode
- Normal Distribution Curve
- CAHSEE Math 6.4 Algebra and Functions
- CAHSEE Math 6.4 Algebra I
- CAHSEE Math 6.4 Mathematical Reasoning
- CAHSEE Math 6.4 Measurement and Geometry
- CAHSEE Math 6.4 Number Sense
- CAHSEE Math 6.4 Statistics, Data, and Probability I
- CAHSEE Math 6.4 Statistics, Data, and Probability II
- CAHSEE Math 6.5 Statistics, Data, and Probability I
- CAHSEE Math 6.5 Statistics, Data, and Probability II | http://www.shmoop.com/common-core-standards/ccss-hs-s-id-4.html |
4.0625 | HTML
HTML stands for Hypertext Markup Language. HTML is a computer language that turns documents into linked pages and allows them to be published on the internet. As simple as it looks, however, creating a website with HTML requires complicated work, such as coding. Code is constructed with tags, and these tags are arranged in a tree.
The Function
HTML's function is to link websites together, and pages may also contain graphics and images. HTML can be displayed in as many different ways and styles as designers want. Furthermore, HTML contains a lot of tags, and each tag has its own function. For example, the doctype declaration defines the document type, and <h1> to <h6> are for headings, etc.
The Attributes
The common attributes of HTML are id, class, style, title, lang, and dir. The id attribute identifies an element in a document. class assigns an element to one or more classes. style is used for setting text, colors, or backgrounds. title specifies extra information about an element. lang declares the language used in a web page. dir (direction) sets which way the page's text is read.
Creating an HTML Element
To create an HTML element, users must always start with an opening tag and end with a closing tag. For example:
<p>This is a paragraph</p> (displays as: This is a paragraph)
Forgetting closing tags after opening tags will result in errors and affect how the web page works.
Examples of headings:
<h1>This is heading 1</h1>
<h2>This is heading 2</h2>
<h3>This is heading 3</h3>
<h4>This is heading 4</h4>
<h5>This is heading 5</h5>
<h6>This is heading 6</h6>
Each renders as "This is heading" text in a progressively smaller size.
Examples of paragraphs:
<html>
<body>
<p>This is a paragraph.</p>
<p>This is a paragraph.</p>
<p>This is a paragraph.</p>
</body>
</html>
Each <p> element renders as its own "This is a paragraph" line.
To create a hyperlink, HTML should look like:
<html>
<body>
<p><a href="default.asp">HTML Tutorial</a> This is a link to a page on this website.</p>
<p><a href="http://www.google.com/">Google</a> This is a link to a website on the World Wide Web.</p>
</body>
</html>
Rendered, "HTML Tutorial" links to a page on this website, and "Google" links to a website on the World Wide Web. | http://www.slideshare.net/halfofdemon/khoa-dang-kay |
4.46875 | Introduction of the Materials
The main way children are introduced to the materials in the classroom is through careful presentation. A presentation is a time when the teacher slowly and precisely uses the material in its intended way while an individual or small group of children observe. During such a presentation unnecessary words and movements are avoided and actions are broken into discernible steps in order to increase understanding and the chance for success when the child uses the materials later. A particular point of interest may also be shown to attract the child to the materials.
At times it is appropriate and desirable for the teacher to offer some instruction to the child. This usually occurs at a separate occasion after times of repeated concentrated work with the materials has been observed. The teacher may then re-present the exercise in order to show variations or extensions or to help the child learn the terminology involved.
Presentation Videos of Montessori Materials
InfoMontessori.com has a number of videos that show how materials are demonstrated in the classroom. These videos include clear demonstrations of the three-period lesson. See them all here: | http://montessoriconnections.com/about-montessori-education/introduction-of-the-materials/ |
4.34375 | If we think of a wave spreading out from a rock that is thrown into a pond, the further from the source, the bigger the circle formed by the wave. As the circle gets bigger, its total length (circumference) also gets bigger.
Spreading loss occurs because the total amount of energy in a wave remains the same as it spreads out from a source. (We are neglecting sound absorption for the moment.) When the circle of a surface wave gets bigger the energy spreads to fill it. Therefore, the energy per unit length of the wave must get smaller. The height of the surface wave (amplitude) decreases as the energy per unit length of the wave crest gets smaller.
You can see something similar to spreading loss when you blow bubbles with chewing gum. Have you ever watched the bubble as it grows bigger? How does it change? Just as the total amount of energy in a sound wave doesn’t change as it spreads out, the total amount of chewing gum doesn’t change as the bubble gets bigger. This means that as the bubble grows, the walls of the bubble must get thinner and thinner. The thickness of the gum is similar to the amplitude of the sound wave. Just as the gum gets thinner as the bubble gets bigger, the amplitude of the sound wave decreases as it spreads out.
As surface waves spread out on the surface of a body of water, such as a pond or ocean surface, the amplitude gets smaller rapidly. This is called cylindrical spreading. Waves that spread out in all directions from a sound source, such as one in the middle of the ocean, get smaller even more rapidly than surface waves spreading out horizontally on a pond or ocean surface. This is called spherical spreading.
The following table compares the relative intensity and amplitude of sound waves at one meter from the source to their values at greater distances for cylindrical and spherical spreading. (For cylindrical spreading, intensity falls off as 1/r and amplitude as 1/√r; for spherical spreading, intensity falls off as 1/r² and amplitude as 1/r.)
|Distance from Source||Relative Intensity (cylindrical)||Relative Amplitude (cylindrical)||Relative Intensity (spherical)||Relative Amplitude (spherical)|
|1 m||1||1||1||1|
|2 m||1/2||1/1.4||1/4||1/2|
|10 m||1/10||1/3.2||1/100||1/10|
|100 m||1/100||1/10||1/10,000||1/100|
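The same numbers can be generated from the spreading laws directly. A minimal Python sketch, assuming intensity falls as 1/r for cylindrical and 1/r² for spherical spreading, with amplitude equal to the square root of relative intensity:

    import math

    def relative_values(r, spreading):
        """Intensity and amplitude at range r (meters), relative to their values at 1 m."""
        if spreading == "cylindrical":
            intensity = 1.0 / r       # energy spread around a circle of circumference 2*pi*r
        elif spreading == "spherical":
            intensity = 1.0 / r**2    # energy spread over a sphere of area 4*pi*r^2
        else:
            raise ValueError(spreading)
        amplitude = math.sqrt(intensity)  # amplitude scales as the square root of intensity
        return intensity, amplitude

    for r in (1, 2, 10, 100):
        print(r, relative_values(r, "cylindrical"), relative_values(r, "spherical"))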
The intensity and amplitude decrease very rapidly for waves spreading out in all directions from a source at mid-depth in the ocean. Sound cannot propagate uniformly in all directions from a source in the ocean forever. Beyond some range the sound will hit the sea surface or sea floor, and the spreading will become approximately cylindrical. | http://www.dosits.org/science/soundmovement/soundweaker/spreading/ |
4.09375 | Video: Step by Step Assembly
STEM Extensions & Standards Alignment
Compare and contrast leaf shapes and arrangements and discover one of nature's most amazing phenomena - photosynthesis (simplified for kids). Each participant will make and take a gorgeous pressed leaf coaster of their very own! Our kit includes everything you need - leaves, plexiglass, foil tape, and rubber feet.
Ages 7 and up. (5 & 6 year olds will need a little help with the foil tape!)
Unit Goals and Concepts:
- Identify the parts of a leaf and their functions.
- Classify leaves and trees based of different criteria.
- Create a leaf coaster using a variety of different leaves.
- Understand photosynthesis (it's easy because we've simplified it for kids and included a worksheet for them to follow along with!)
- An assortment of leaves (usually Oak, Willow, Ginkgo, Rose, and Fern), plus all the other supplies you'll need for each participant to create a pressed leaf coaster.
- Our exclusive instructor's activity guide that provides everything instructors need to teach about leaves, leaf shapes and photosynthesis, plus our reproducible activity sheet.
General: National Science Education Standard NS.K-4.3 and NS.5-8.3 Life Science.
Content Standard C: The Characteristics of Organisms (K-4)
Each plant has different structures that serve different functions in growth, survival, and reproduction.
The Life Cycles of Organisms (K-4)
Plants closely resemble their parents.
Many characteristics of an organism are inherited from the parents of the organism (leaves).
Reproduction and Heredity (5-8)
The characteristics of an organism can be described in terms of a combination of traits. Some traits are inherited and others result from interactions with the environment.
Diversity and Adaptations of Organisms (5-8)
Millions of species of animals, plants, and microorganisms are alive today. Although different species might look dissimilar, the unity among organisms becomes apparent from an analysis of internal structures, the similarity of their chemical processes, and the evidence of common ancestry.
Specific (California standards):
(K.2a) Students know how to observe and describe similarities and differences in the appearance and behavior of plants and animals.
(K.4d) Compare and sort common objects by one physical attribute.
(1.2b) Students know both plants and animals need water, animals need food, and plants need light.
(1.2e) Students know roots are associated with the intake of water and soil nutrients and green leaves are associated with making food from sunlight.
(1.4a) Draw pictures that portray some features of the thing being described.
(1.4b) Record observations and data with pictures, numbers, or written statements.
(2.2d) Students know there is variation among individuals of one kind within a population.
(2.4d) Write or draw descriptions of a sequence of steps, events, and observations.
(2.4e) Construct bar graphs to record data, using appropriately labeled axes.
(3.3a) Students know plants and animals have structures that serve different functions in growth, survival, and reproduction.
(5.2e-g) Students know the steps of photosynthesis involving water, sugar, oxygen and carbon dioxide. | https://www.nature-watch.com/pressed-leaf-coaster-activity-kit-p-33.html |
4.09375 | Routers use routing algorithms to find the best route to a destination. When we say "best route," we consider parameters like the number of hops (the trip a packet takes from one router or intermediate point to another in the network), time delay and communication cost of packet transmission.
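To make "best route" concrete, here is a minimal sketch of how a router could pick the cheapest route to every destination once it has a complete map of the network (Dijkstra's shortest-path algorithm, the core of the link state approach described below). The network and its link costs are invented for the example; a cost could encode hop count, time delay, or communication cost.

    import heapq

    def best_routes(graph, source):
        """Cheapest total cost from source to every router (Dijkstra's algorithm)."""
        costs = {source: 0}
        heap = [(0, source)]
        while heap:
            cost, node = heapq.heappop(heap)
            if cost > costs.get(node, float("inf")):
                continue  # stale queue entry; a cheaper route was already found
            for neighbor, link_cost in graph[node].items():
                new_cost = cost + link_cost
                if new_cost < costs.get(neighbor, float("inf")):
                    costs[neighbor] = new_cost
                    heapq.heappush(heap, (new_cost, neighbor))
        return costs

    # Hypothetical network of four routers and their link costs.
    network = {
        "A": {"B": 2, "C": 5},
        "B": {"A": 2, "C": 1, "D": 4},
        "C": {"A": 5, "B": 1, "D": 1},
        "D": {"B": 4, "C": 1},
    }
    print(best_routes(network, "A"))  # {'A': 0, 'B': 2, 'C': 3, 'D': 4}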
Based on how routers gather information about the structure of a network and their analysis of information to specify the best route, we have two major routing algorithms: global routing algorithms and decentralized routing algorithms. In decentralized routing algorithms, each router has information about the routers it is directly connected to -- it doesn't know about every router in the network. These algorithms are also known as DV (distance vector) algorithms. In global routing algorithms, every router has complete information about all other routers in the network and the traffic status of the network. These algorithms are also known as LS (link state) algorithms. We'll discuss LS algorithms in the next section. | http://computer.howstuffworks.com/routing-algorithm1.htm |
4 | Beacon Lesson Plan Library
Exercise: The Right Stuff
Bay District Schools
Students learn regular exercise keeps the body strong and healthy. They make an exercise chain and practice the activities written on the links.
The student understands positive health behaviors that enhance wellness.
The student knows and practices good personal health habits.
-Suggested CD, [Let's Hop], Colgate, Educational Activities, Inc. 1994 (any exercise or fitness songs will do)
-Suggested book, [Johnathan And His Mommy], Small, Little Brown & Company, 1994
-Song "Everybody Needs A Heart," [Heart Power], AHA, 1996 (see Weblinks)
-Poem "Easy as 1, 2, 3!," [Heart Power], AHA, 1996 (see Weblinks)
-Construction paper, cut 8” x 1”, any color, 2 per child
1. Cut construction paper into 8” x 1” strips. Each child needs two strips.
2. Have the CD, "Let's Hop," and the book, [Johnathan And His Mommy], out and ready to use.
3. Gather poem and song charts.
4. Make sure each child has 1 bottle of glue, the science journal, pencils and a package of crayons.
5. Write or type lyrics and give a copy to each student.
*This is lesson number six, day eight in the Happy, Healthy Me Unit
1. Sing the song, "Everybody Needs a Heart" and recite the poem "Easy as 1, 2, 3!" Review the verse about nutrients and food and how it helps the body from the previous lessons. Now look at the other verse about daily exercise and moving around enough. Relate that to the next part they will be learning about keeping their bodies healthy and fit.
2. Ask: What is exercise? (Exercise is physical activity.) Why is it important to the body? Explain that exercise is a positive health habit that helps strengthen the heart and keeps muscles and bones strong. It also helps to burn calories and reduce stress. When students are playing, they are exercising too! (That is why I encourage my children to play outside so much and not just sit around and watch TV or play video games.) You need to make exercise a daily health habit.
3. Read [Johnathan And His Mommy]. Discuss the different exercises Jonathan and his mother perform. Ask students to stand up and perform the different exercises that are described in the pages.
4. Tell them that physical activities, or exercise, helps the body in different ways. For example some exercises help keep the body flexible, or easy to bend. Have students test their flexibility with this activity: Sit on the floor with their legs stretched out in front of them. Lean forward and try to touch their toes with their fingers. Ask them to remember exactly how far they were able to stretch. Now have them stand up and put one leg out straight behind them with the foot flat on the floor. Lean forward slightly. Do the same for the other leg. Now have students sit back down on the floor and try to touch their toes again. How much farther can they stretch this time? Children will find that with their leg muscles stretched, their flexibility increased, and they could reach farther.
5. Explain that some exercise helps strengthen muscles and other exercises increase their endurance. That means the muscles can do something for a longer time. An example of this would be activities like chin-ups, push-ups, and weight lifting. (Spread out and try some push-ups.)
6. Explain that other exercises help keep the heart and lungs healthy. We know when the lungs and heart are fit, they work more efficiently together, and people can do more without tiring. An example of this would be jogging, race walking, or jumping rope. (Practice jogging in place for two minutes.)
7. Tell students: Let’s review the 3 kinds of exercise we have talked about. They are exercises to help keep the body flexible, exercises that strengthen muscles and increase endurance, and those that keep your heart fit and working properly.
8. Remind students that the heart is a muscle like other muscles in the body and that physical activity helps make muscles stronger and helps them work better. Invite students to suggest simple physical activities they can do in the classroom that they think would keep their bodies fit and feeling good. This needs to be for a short 2-3 minute time frame. Write suggestions on the board. (Examples might be running in place, jumping jacks, toe touches, knee bends, and trunk twists, etc…)
9. Distribute the construction paper strips to the students. Tell them they are going to make an exercise chain just like chains they see on Christmas trees. Ask them to write the name of a physical activity and a number that represents a short time on the construction paper slip. They can use the suggestions from the board, but encourage them to be creative and come up with more. Model an example for them. Write: Run in place, 2 minutes. Then glue (or staple) the ends together to make first circle for the exercise chain. Students should make at least two and the teacher should read them before gluing them onto the chain to assess their understanding of exercise as a healthy habit the promotes wellness for their bodies.
Some acceptable examples from students would be:
*10 jumping jacks
*10 push ups
*Run in place for 2 minutes
*Touch your toes 15 times, etc…(give positive feedback for these examples like: "Alright, 10 jumping jacks would be super because that would be great exercise for our muscles." Or "Yes! running in place for 2 minutes is a great idea because that is good exercise for our lungs." "These are such good ideas. I like how I am seeing short exercises that are good for your muscles and easy to do in class.")
*Some unacceptable examples would be 100 push-ups, jog in place for 1 hour, 75 body twists, etc…. (give constructive feedback like: Do you think 100 push-ups is something we could do that is good for your muscles? Why don’t you rethink that? I'm not sure 50 stretches would be good for the body. What else could you do?)
*Then help students join circles to make a chain. *Hang the exercise chain in the classroom.
10. During the day when children need a movement break or attention getter, go to the chain and tear off one of the circles at random. Read the activity and do it together. After the activity is complete, review what exercise means for the body.
11. Now introduce the song "Let's Hop" on the CD. Sing with the song several times until the students become familiar with it. Ask what this song has to do with today’s lesson? Distribute copies of this song to students to illustrate and keep in their portfolios.
12. Assign students to make an entry on page 4 of their science journals titled: Exercise.
Put this sentence starter on the board for those who need it:
Instead of watching TV, I could ____________________because I know
exercise________________________. I should do this at least ___times per week.
Complete sentence and illustrate.
13. Have students share journal entries with the person sitting next to them. Keep journal in the portfolio.
Formatively assess students' understanding of positive health behaviors that enhance wellness by reading the exercise chain strips each student makes and giving feedback. See step #9 in the procedures for feedback examples. Allow students to correct any mistakes.
Also the teacher is looking for students to know good personal health habits by completing the journal entry. She is specifically looking for a physical activity that explains why exercise is good for the body.
1. Invite the physical education teacher or coach to talk to the class about physical activities and their benefits. Also when you start the Happy Healthy Me unit, ask the PE teacher to focus lessons around stretching, push-ups, jogging in place, and taking the heart rate. Students could chart their growth from Day 1 of the unit, chart again in this lesson, and then chart again at the end of the unit.
2. Art Center: Using old magazines, catalogs and newspapers, have children find pictures of people participating in physical activity. Have them cut out pictures and glue them onto a poster board to make a collage.
3. The Beacon Unit Plan associated with this lesson can be viewed by clicking on the link located at the top of this page or by using the following URL: http://www.beaconlearningcenter.com/search/details.asp?item=2945. Once you select the unit’s link, scroll to the bottom of the unit plan page to find the section, Associated Files. This section contains links to the Unit Plan Overview, Diagnostic and Summative Assessments, and other associated files (if any).
Use this link to obtain copies of the songs "Everybody Needs A Heart" and "Easy As 1, 2, 3": A Message From Your Heart | http://www.beaconlearningcenter.com/lessons/lesson.asp?ID=1712 |
4.09375 | Biography of John F. Kennedy
Biography of Jacqueline Kennedy
Written for upper elementary to adult readers, these narratives summarize the life and legacy of the 35th president of the United States and his wife.
Lesson Plan: Political Debates: Advising a Candidate
Students analyze excerpts from the first Kennedy-Nixon debate (September 26, 1960) and a memo assessing the debate from one of Kennedy's advisers. They then watch a current political debate to consider the strengths and weaknesses of the candidate they support.
Lesson Plan: Recipe for an Inaugural Address
Students consider what "ingredients" might go into the speech that will launch a President's term in office as they examine some of the most memorable inaugural addresses of the past.
Lesson Plan: Red States, Blue States: Mapping the Presidential Election
Students analyze the results of the 1960 election, collect data for a recent presidential election, and identify changes in voting patterns.
The President's Desk: A Resource Guide for Teachers, Grades 4-12
Invite your students to take a seat at The President's Desk and discover what it means to hold the highest office in the land. This online interactive exhibit features JFK's treasured mementos and important presidential records. Primary sources ranging from recordings of meetings in the Oval Office to family photographs populate the site and provide an engaging and fascinating look into John F. Kennedy's life and presidency. The President's Desk Resource Guide provides an overview of the Desk and suggested curriculum-relevant lesson plans and activities. To access the President's Desk interactive exhibit: http://microsites.jfklibrary.org/presidentsdesk
1963: The Struggle for Civil Rights
Bring the pivotal events of the civil rights movement in 1963 to life for your students through more than 230 primary sources ranging from film footage of the March on Washington and letters from youth advising the president to JFK’s landmark address to the American people and secret recordings of behind-the-scenes negotiations on civil rights legislation. To foster your students' understanding of this era, lesson plans on each of the seven topics are available in the "For Educators" section of the website.
Lesson Plan: A President's Day
If you are elected to the nation's highest office, what are you actually expected to do? Spend a day at the White House with John F. Kennedy to learn about some of the president's most important roles and responsibilities.
Integrating Ole Miss
Students witness civil rights history firsthand through primary source material. Includes guiding questions for classroom activities and assignments.
Leaders in the Struggle for Civil Rights
These letters and telegrams from key figures help tell the story of the civil rights movement during the Kennedy years. Documents include communications from James Farmer, Martin Luther King, Jr., John Lewis, A. Philip Randolph, Bayard Rustin, Roy Wilkins, and Whitney Young.
Lesson Plan: Why Choose the Moon?
Using primary source materials, students investigate the motivation for President Kennedy's ambitious space program.
Americans in Space
Primary source material and classroom activities reveal why exploring space was a priority for the Kennedy administration.
JFK in History
This section of the website contains topic guides on the significant events that occurred during President Kennedy's years in office. These essays are intended to give an overview of challenges and issues that defined Kennedy's administration, and include relevant primary source material.
Annotated Bibliographies about American History
Annotated bibliographies of both recommended biographies and literature about American history. Includes guidelines for critically analyzing biographies and history-based literature. | http://www.jfklibrary.org/Education/~/link.aspx?_id=78C5B8C9F91646349189B0172BAECBBD&_z=z |
4.46875 | Winnipeg general strike
The Winnipeg general strike of 1919, although it failed, was one of the most famous and influential strikes in Canadian history.
Labour leaders complained that many Winnipeg companies had enjoyed enormous profits on World War I contracts while wages remained too low, working conditions were dismal, and the men had no voice in the shops.
In March 1919 labour delegates from across Western Canada convened in Calgary to form a branch of the "One Big Union", with the intention of earning rights for Canadian workers through a series of strikes. Their goal was to mobilize workers (including those who already belonged to established unions), including all different trades, skill levels, and ethnicities, giving them class solidarity and aggressive leadership. Business leadership controlled the political system in Manitoba, and used force to break the strike and effectively destroy the One Big Union.
The immediate post-war period in Canada was not a time of peace. Social tensions grew as soldiers returned home to find large numbers of immigrants crowded into cities and working at their former jobs. High rates of unemployment among returned soldiers compounded their resentment towards the immigrants. The soldiers also brought the Spanish flu back from Europe, causing mass illness within the country.
Canadian Prime Minister Robert Borden attended the Paris Peace Conference that concluded the Great War, and his government's chief concern was the Russian revolution, which had begun more than a year before the settlement, and the fear that Bolshevism would spread to North America. Canada's large immigrant population was thought to hold strong Bolshevist leanings. These fears of a possible uprising led to increased efforts to control radicals and immigrants at home. Threats and incidents of strike action, which could be read as radical criticism, were thought to require prompt, harsh responses.
Soldiers returned home wanting jobs and a normal lifestyle again, only to find factories shutting down, soaring unemployment, increasing bankruptcies and immigrants holding their former jobs, all of which fed social tensions. Wartime inflation had raised the cost of living, making it hard for families to stay above poverty. Another cause of the strike was the working conditions in many workplaces, which pushed employees to demand changes that would benefit them. Railway work camps in particular exposed workers to the harsh prairie climate, and many employees were injured in the mountains by rock falls and the misuse of explosives. The workers slept in tents and in unsanitary, overcrowded bunkhouses.
At first, many workers were drawn by the advertised wages, but the company deducted charges for overnight lodging, transportation and blanket rental, which motivated the workers to push back against the company. After three months of unproductive negotiations between the employers of the Winnipeg Builders Exchange and the union, worker frustration grew. When the city council's new proposal proved unsatisfactory to the four departments concerned, the electrical workers took action and a strike was established. Waterworks and fire department employees joined a few days later. Strikers were labelled as Bolsheviks who were attempting to undermine Canada. The city council viewed this as unacceptable and dismissed the striking workers. This did not discourage the latter; instead, other civic unions joined the strike out of sympathy, sympathetic strikes being an important feature of twentieth-century social history.
From wages to the ability to strike
On May 13, City Council gathered again to review the proposed agreement issued by the strikers. Once again, City Council did not accept the proposal without their own amendments, specifically the Fowler Amendment, which read that "all persons employed by the City should express their willingness to execute an agreement, undertaking that they will not either collectively or individually at any time go on strike but will resort to arbitration as a means of settlement of all grievances and differences which may not be capable of amicable settlement." This amendment incensed the civic employees further, and by Friday, May 24, an estimated total of 6,800 strikers from thirteen trades had joined the strike.
Fearing that the strike would spread to other cities, the Federal Government ordered Senator Gideon Decker Robertson to mediate the dispute. After hearing both sides, Robertson settled in favour of the strikers and encouraged City Council to accept the civic employee's proposal. Bolstered by their success, the labour unions would use striking again to gain other labour and union reforms.
1919 General Strike
In Winnipeg, workers within the building and metal industries attempted to strengthen their bargaining ability by creating umbrella unions, the Building Trade Council and Metal Trade Council respectively, to encompass all metal and building unions. Although employers were willing to negotiate with each union separately, they refused to bargain with the Building and Metal Trade Councils, disapproving of the constituent unions that had joined the umbrella organization, and citing employers' inability to meet proposed wage demands. Restrictive labour policy in the 1900s meant that a union could be recognized voluntarily by employers, or through strike action, but in no other way. Workers from both industrial groupings therefore struck to gain union recognition and to compel recognition of their collective bargaining rights.
The Building and Metal Trade Councils appealed to the Trades and Labour Union, the central union body representing the interests of many of Winnipeg's workers, for support in their endeavours. The Trades and Labour Union, in a spirit of solidarity, voted in favour of a sympathetic strike in support of the Building and Metal Trade Councils. Ernest Robinson, secretary of the Winnipeg Trades and Labour Union, issued a statement that “every organization but one has voted in favour of the general strike” and that “all public utilities will be tied up in order to enforce the principle of collective bargaining”. By suspending all public utilities, the strikers hoped to shut down the city, effectively forcing the strikers' demands to be met. The complete suspension of public utilities, however, would prove impossible. The Winnipeg police, for example, had voted in favour of striking but remained on duty at the request of the strike committee to prevent the city from being placed under martial law. Other exceptions would follow.
At 11:00 a.m. on Thursday May 15, 1919, virtually the entire working population of Winnipeg had gone on strike. Somewhere around 30,000 workers in the public and private sectors walked off their jobs. Even essential public employees such as firefighters went on strike, but returned midway through the strike with the approval of the Strike Committee.
Although relations with the police and City Council were tense, the strike was non-violent in its beginning stages until the confrontation on Bloody Saturday.
The local newspapers, the Winnipeg Free Press and the Winnipeg Tribune, had lost the majority of their employees to the strike and took a decidedly anti-strike stance. The New York Times front page proclaimed "Bolshevism Invades Canada." The Winnipeg Free Press called the strikers "bohunks," "aliens," and "anarchists" and ran cartoons depicting radicals throwing bombs. These anti-strike views greatly influenced the opinions of Winnipeg residents. However, the majority of the strikers were reformist, not revolutionary. They wanted to amend the system, not destroy it and build a new one.
When certain unions refused to comply with various demands, they were dismissed and replaced without any second chances. The federal government opposed the dismissal of the Winnipeg police force, yet it refused to step in when the city dismissed the force, which led to the creation of the replacement workforce called the "specials".
Seen in a broader perspective, the strike's main opponent was the state, comprising all three levels of government: federal, provincial and municipal. The opposition could have been more effective had the three levels coordinated their policies with one another rather than working only gradually toward agreement; the state was never the bloc of total opposition it was labelled as at the time. At the local level, some politicians showed sympathy for the strikers, making the state neither a monolith nor unalterably an enemy. The federal government's only direct interest in the general strike, other than calls from the local authorities, was keeping the railroads and post office running.
A counter-strike committee, the "Citizens' Committee of One Thousand", was created by Winnipeg's elite, among whom were A. J. Andrews, James Bowes Coyne, Isaac Pitblado and Travers Sweatman, all four of whom would later co-prosecute the sedition trials. The Committee declared the strike to be a violent, revolutionary conspiracy by a small group of foreigners it branded "alien scum". On June 9, at the behest of the Committee, the City of Winnipeg Police Commission dismissed almost the entire city police force for refusing to sign a pledge promising to neither belong to a union nor participate in a sympathetic strike. The City replaced them with a large body of untrained but better paid special constables who sided with the employers. Within hours, one of the special constables, the much-bemedalled WWI veteran Frederick Coppins, charged his horse into a gathering of strikers and was dragged off his horse and severely pummelled.
As the situation spiraled out of control, the City of Winnipeg appealed for federal help and received extra reinforcements through the Royal Northwest Mounted Police. Despite these drastic measures, control of the streets was beyond the capacity of the city in the period between Tuesday June 9 and Bloody Saturday, June 21.
The Citizens’ Committee saw the strike as a breakdown of public authority and worried that the Strike Committee was attempting to overthrow the Canadian government. The Citizens' Committee met with federal Minister of Labour Gideon Decker Robertson and Minister of the Interior (and acting Minister of Justice) Arthur Meighen, warning them that the leaders of the general strike were revolutionists. Meighen issued a statement May 24 that he viewed the strike as “a cloak for something far deeper--an effort to ‘overturn’ the proper authority”. In response, he supplemented the army with local militia, the Royal Northwest Mounted Police and special constables. Legislation quickly passed to allow for the instant deportation of any foreign-born radicals who advocated revolution or belonged to any organization opposed to organized government. Robertson ordered federal government employees back to work, threatening them with dismissal if they refused. The two ministers refused to meet the Central Strike Committee to consider its grievances.
On June 10 the federal government ordered the arrest of eight strike leaders (including J.S. Woodsworth and Abraham Albert Heaps). On June 21, about 25,000 strikers assembled for a demonstration at Market Square, where Winnipeg Mayor Charles Frederick Gray read the Riot Act. Troubled by the growing number of protestors and fearing violence, Mayor Gray called in the Royal Northwest Mounted Police, who rode in on horseback, charging into the crowd of strikers, beating them with clubs and firing weapons. This violent action resulted in the deaths of two strikers (Mike Sokowolski, shot in the head, and Mike Schezerbanowicz, shot in the legs and later dying of a gangrene infection), 35 to 45 injuries (police and strikers) and numerous arrests. Four eastern European immigrants were rounded up at this time (two of them were deported, one voluntarily to the United States and the other to Eastern Europe). This day, which came to be known as “Bloody Saturday”, ended with Winnipeg virtually under military occupation. Police also shut down the strikers' paper, the Western Labour News, and arrested its editors, who joined the other prisoners, for commenting on the events.
At 11:00 a.m. on June 26, 1919, the Central Strike Committee officially called off the strike and the strikers returned to work.
The eight strike leaders arrested on June 18 were eventually brought to trial. Sam Blumenberg and M. Charitonoff were scheduled for deportation, although only Blumenberg was deported, having left for the United States. Charitonoff appealed to Parliament in Ottawa and was eventually released. Of the eight leaders, five were found guilty of the charges laid against them. Their jail sentences ranged from six months to two years.
A jury acquitted strike leader Fred Dixon, and the government dropped charges of seditious libel against J. S. Woodsworth, whose alleged crime was quoting the Bible in the strike bulletin. Woodsworth was elected MP in the next federal election as a Labour MP and went on to found and lead the Co-operative Commonwealth Federation, a forerunner of the New Democratic Party.
After the strike, many employees had mixed feelings about the settlement the mayor provided. The metal workers received a five-hour reduction in their working week but no pay increase. Many of these workers lost their pension rights, and the division between the working class and business deepened. Newly hired civic employees were obligated to sign an oath promising not to take part in any sympathetic strikes in the future. Among the Bloody Saturday strikers, many lost their jobs, and others resumed their previous jobs but were placed at the bottom of the seniority list.
The Royal Commission which investigated the strike concluded that the strike was not a criminal conspiracy by foreigners and suggested that "if Capital does not provide enough to assure Labour a contented existence ... then the Government might find it necessary to step in and let the state do these things at the expense of Capital."
This strike is now considered the largest general strike in Canadian history, and it is sometimes argued to have been the largest in North America.
Organized labour thereafter was hostile towards the Conservatives, particularly Meighen and Robertson, for their forceful role in putting down the strike. Combined with high tariffs in the federal budget passed in the same year (which farmers disliked), the state security forces' heavy-handed action against the strikers contributed to the Conservatives' heavy defeat in the 1921 election; they lost every one of their seats on the Prairies. The succeeding Liberal government, fearing the growing support for hard left elements, pledged to enact the labour reforms proposed by the Commission. The strike leaders who had faced charges, and in some cases served time in prison (such as Woodsworth, mentioned above), were applauded as labour's champions, and many were elected to provincial and federal governments.
Role of women
The role of women during this period was an influential part of the strike. As active citizens, many women joined the crowds of bystanders, sightseers and victims at major rallies and demonstrations. Women in the province were divided between the strikers and women called "scabs" who opposed the strike and tried every way to end it. Striking women would unplug the telephone switchboards and the scabs would plug them back in. The strike was especially hard on women at home, who had to get through each week on low incomes and scarce goods and services, often depending entirely on their own salary.
By 1919, women constituted roughly one-quarter of the labour force, mainly working in the service, clerical and retail parts of the economy. Around 500 women workers walked off at the first call of the strike, followed by hundreds more in the days that followed. The Young Women's Christian Association provided emergency accommodation to women who lived far away from their jobs, accepting both strikers and non-strikers to help them get through the strike with ease. A major figure was Helen Armstrong, head of the local branch of the women's labour league, whose husband, George Armstrong, was one of the strike leaders. Helen was responsible for the kitchen maintained by the women's league to feed the striking women. Male strikers were allowed to come to the kitchen to eat, but had to provide a good reason and sometimes even pay for their meal. Arrested and jailed, Helen was given names in the media like "the wild women of the west" and "business manager for the women's union".
Among the many women who were sent to jail, Helen was held on a substantial bail of $1,000. When newspapers and articles commented on the strike and the women involved, the Tribune referred to many of the rioting women as having accents, thereby labelling them as foreigners. On June 12, many women came out for "ladies' day" at Victoria Park, occupying seats of honour near the front and cheering along with J.S. Woodsworth as he promoted the emancipation of women and the equality of the sexes. The event became a catalyst in the broader push for women's equality.
Studies by David Bercuson, author of various works on radicalism in western Canadian society, argue that radical unionism was essentially a western phenomenon, attributable to the rapid development of a resource-based industrial economy that fostered intense class conflict.
Playwright Danny Schur wrote and produced the musical Strike in 2005. He revisited the Winnipeg general strike after stumbling across photographs never before made public. According to Schur, the streetcar was rocking but was not fully tipped over, although police said its overturning was the reason they acted violently, killing two people. Because anti-strikers had claimed that the tipped streetcar was the catalyst for the violence and the shootings, discovering these photographs amazed Schur.
Historian Donald Avery, an authority on the ethnic labour movement in Winnipeg, observed that no one had written about the important role of immigrant workers in the Winnipeg general strike of 1919. According to Avery, it is clear from the evidence that no non-Anglo-Saxon leader played a particularly significant role in the strike.
Another author, J. E. Rae, argued that the strike entrenched Winnipeg's class divisions so severely that the city showed a marked class polarization for decades afterward.
David Yeo, in his article "Rural Manitoba View the 1919 Winnipeg General Strike", sheds light on the view that the rural community was hostile to the strike, undermining the chances of farmer-labour cooperation for years after the strike.
Mary Horodyski showed that thousands of women were active as strikers and strike-breakers. She states that one in four workers in the city was female, and that even women who did not take part directly in the strike were affected on the sidelines, trying to feed protesters while feeding their own families daily. Viewpoints also differ between the West and the East of the country. Historians of the labour revolt in the eastern part of the country have tended to deem eastern workers innately conservative.
Many essays by historians show that the maritime labour revolt extended beyond the radical, violent battles in the Cape Breton coal fields to range elsewhere in the country. On the western side, by contrast, radicalism is played down, and strikes are depicted as strategies to settle certain demands rather than as ideological commitments. Historians hold many viewpoints that open new perspectives which may have been left out before, and with this varied feedback on Canadian history, people in the present and the future can better understand what really happened in Winnipeg during the famous general strike of 1919.
Commemorations in popular culture
Many of the famous photographs of the strike were taken by Winnipeg photographer L.B. Foote. Among the remembrances of this event in Canadian popular culture is the song "In Winnipeg" by musician Mike Ford, included in the album Canada Needs You Volume Two. In 2005, Danny Schur created a musical based on the event, called Strike!. There is a mural commemorating the General Strike in Winnipeg's Exchange District.
- Francis, Daniel (1984). "The Winnipeg General Strike". History Today 38: 4–8.
- Bumsted, J. M. (1994). The Winnipeg general strike reconsidered. Beaver, 74(3), 27.
- Bumsted, J.M (1994). The Winnipeg General Strike of 1919: An Illustrated Guide. Watson Dwyer Publishing Limited. pp. 2–3. ISBN 0-920486-40-1.
- Francis, D. (1984). "1919: The Winnipeg General Strike," History Today, 34(4), 4
- McCallum, T., & Palmer, B. (2012). "Working-Class History" in The Canadian Encyclopedia
- Fowler Amendment, as quoted in Bumsted, J. M. (1994). The Winnipeg General Strike of 1919: An Illustrated History. Watson Dwyer Publishing Ltd. p. 1. ISBN 0-920486-40-1.
- as quoted in Bercuson, David Jay (1990). Confrontation at Winnipeg: Labour, Industrial Relations, and the General Strike. Montreal & Kingston: McGill-Queen's University Press. p. 62. ISBN 0-7735-0794-9.
- Bumsted, J.M (1994). The Winnipeg General Strike of 1919: An Illustrated Guide. Canada. p. 28. ISBN 0-920486-40-1.
- Mitchell, Tom and James Naylor, "The Prairies: In the Eye of the Storm." In The Workers' Revolt in Canada, 1917-1925 (Toronto: University of Toronto Press, 1998) pp. 176-180.
- Labour / Le Travail, Vol. 13, [Papers from the 1919 Winnipeg General Strike Symposium] (Spring, 1984), pp. 6-10
- name="Bumsted, J. M. 1994"
- 'Labour / Le Travail' Journal of the Canadian Committee on Labour History: "Legal Gentlemen Appointed by the Federal Government: the Canadian State, the Citizens' Committee of 1000, and Winnipeg's Seditious Conspiracy Trials of 1919-1920"
- Kramer, Reinhold; T. Mitchell (2010). When The State Trembled: How A.J. Andrews and the Citizens' Committee Broke the Winnipeg General Strike. Canada: University of Toronto Press. p. 443. ISBN 978-1-4426-4219-5.
- Bumsted, J.M (1994). The Winnipeg General Strike of 1919: An Illustrated Guide. p. 37.
- Larry Gambone and D.J. Alperovitz, They Died For You. A Brief History of Canadian Labour Martyrs, 1903-2006, p. 13-14; Edmonton Bulletin, June 23, 1919, page 1
- Bloody Saturday CBC Television documentary
- Justice H. A. Robson's report, quoted in Fudge, Judy; Tucker, Eric (2004). Labour Before the Law: The Regulation of Workers' Collective Action in Canada, 1900-1948. Toronto: University of Toronto Press. p. 112. ISBN 0-8020-3793-3.
- Horodyski, M. (2009, April 25). Manitoba History: Women and the Winnipeg General Strike of 1919.
- Kevin Brushett. Review of Heron, Craig, ed., The Workers' Revolt in Canada, 1917-1925. H-Canada, H-Net
- Horodyski, M. (2009, April 25). Manitoba History: Women and the Winnipeg General Strike of 1919
- Bercuson, David Jay. "Confrontation at Winnipeg:." Google Books. McGill-Queens University Press 1974, n.d. Web.
- Prokosh, Kevin. "History Revisited." - Winnipeg Free Press. Winnipeg Free Press, 29 Sept. 2011. Web.
- Balawyder, Aloysius. The Winnipeg general strike (Copp-Clark Pub. Co, 1967)
- Bercuson, David. Confrontation at Winnipeg: labour, industrial relations, and the general strike (McGill-Queen's Press-MQUP, 1990)
- Bumsted, J.M. The Winnipeg General Strike of 1919: An Illustrated Guide (Watson Dwyer Publishing Limited, 1994). ISBN 0-920486-40-1.
- Friesen, Gerald. "'Yours In Revolt': The Socialist Party of Canada and the Western Canadian Labour Movement." Labour/Le Travail' (1976) 1#1 pp: 139-157. online
- Kramer, Reinhold, and Tom Mitchell. When the State Trembled: How AJ Andrews and the Citizens' Committee Broke the Winnipeg General Strike (U of Toronto Press, 2010)
- Masters, Donald Campbell. The Winnipeg general strike (University of Toronto Press, 1950), a scholarly history.
- Korneski, Kurt. "Prairie Fire: The Winnipeg General Strike," 'Labour / Le Travail, vol. 45 (Spring 2000), pp. 259–266. In JSTOR
- Stace, Trevor. "Remembering and Forgetting Winnipeg: Making History on the Strike of 1919." Constellations (2014) 5#1 on the historiography of 1919; online.
- Wiseman, Nelson. Social Democracy in Manitoba: a History of the CCF-NDP (U. of Manitoba Press, 1983) | https://en.wikipedia.org/wiki/Winnipeg_General_Strike |
European eels are becoming an increasingly endangered species as stocks steeply decline. Wild stocks are currently half of what they were a few years ago. The European research project Eeliad aims to resolve some of the mysteries by analysing the eel's biology and using this information to help conserve European eel stocks.
Eeliad seeks to learn more about European eels during their marine migration, since very little is known about the life and spawning success of silver eels once they escape to the sea. Researchers are undertaking a large-scale field study to determine the migration routes and behaviour of silver eels and, in addition, the ecological factors that influence the number and quality of silver eels. The researchers use scientific fishing to catch and tag specimens of the right size. The animals are measured and weighed, and their small bones are analysed to determine the age at which eels leave European rivers. The coordinator of the Eeliad project, David Righton, found that eels move from cooler water during the day to warmer water during the night. Furthermore, eels choose to swim in darker areas and make sharper vertical movements. These newly identified migration patterns may also point scientists towards new findings about eels' parasites and diseases.
In the meantime, biologists study tags from other research projects that provide further information on eel behaviour. Tags from Ireland, Spain and Sweden have already confirmed that eels can travel up to 45 km per day and swim as deep as 1,200 metres.
The knowledge gained from the Eeliad project will be of direct use to the conservation of eel stocks by improving and changing the way that eel fisheries and habitats are managed across Europe. It will additionally help ensure that enough silver eels migrate to their spawning grounds to reproduce and sustain the species. | http://ec.europa.eu/research/infocentre/article_en.cfm?artid=19573
The Earth's inner core is the Earth's innermost part and according to seismological studies, it has been believed to be primarily a solid ball with a radius of about 1220 kilometers, or 760 miles (about 70% of the Moon's radius). However, with some recent studies, some geophysicists prefer to interpret the inner core not as a solid, but as a plasma behaving as a solid. It is believed to consist primarily of an iron–nickel alloy and to be approximately the same temperature as the surface of the Sun: approximately 5700 K (5400 °C).
The Earth was discovered to have a solid inner core distinct from its liquid outer core in 1936, by the seismologist Inge Lehmann, who deduced its presence by studying seismographs of earthquakes in New Zealand; she observed that the seismic waves reflect off the boundary of the inner core and can be detected by sensitive seismographs on the Earth's surface. This boundary is known as the Bullen discontinuity, or sometimes as the Lehmann discontinuity. A few years later, in 1940, it was hypothesized that this inner core was made of solid iron; its rigidity was confirmed in 1971.
The outer core was determined to be liquid from observations showing that compressional waves pass through it, but elastic shear waves do not – or do so only very weakly. The solidity of the inner core had been difficult to establish because the elastic shear waves that are expected to pass through a solid mass are very weak and difficult for seismographs on the Earth's surface to detect, since they become so attenuated on their way from the inner core to the surface by their passage through the liquid outer core. Dziewonski and Gilbert established that measurements of normal modes of vibration of Earth caused by large earthquakes were consistent with a liquid outer core. Recent claims that shear waves have been detected passing through the inner core were initially controversial, but are now gaining acceptance.
Based on the relative prevalence of various chemical elements in the Solar System, the theory of planetary formation, and constraints imposed or implied by the chemistry of the rest of the Earth's volume, the inner core is believed to consist primarily of a nickel-iron alloy known as NiFe: 'Ni' for nickel, and 'Fe' for ferrum or iron. Because the inner core is denser than pure iron or nickel (~12.8–13.1 g/cm3) at Earth's inner core pressures, the inner core must contain a great amount of heavy elements with only a small amount of light elements, mainly Si with traces of O. Based on such density a study calculated that the core contains enough gold, platinum and other siderophile elements that if extracted and poured onto the Earth's surface it would cover the entire Earth with a coating 0.45 m (1.5 feet) deep. The fact that precious metals and other heavy elements are so much more abundant in the Earth's inner core than in its crust is explained by the theory of the so-called iron catastrophe, an event that occurred before the first eon during the accretion phase of the early Earth.
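As a rough plausibility check on that coating figure, here is a back-of-the-envelope sketch. The core mass, the combined precious-metal mass fraction, and the average metal density below are illustrative assumptions, not values taken from the study:

```python
import math

earth_radius = 6.371e6       # m
core_mass = 1.9e24           # kg, inner + outer core (assumed)
metal_fraction = 2.5e-6      # ~2.5 ppm precious metals by mass (assumed)
metal_density = 19000.0      # kg/m^3, roughly gold/platinum (assumed)

metal_volume = core_mass * metal_fraction / metal_density   # m^3
surface_area = 4 * math.pi * earth_radius**2                # m^2

print("coating depth: %.2f m" % (metal_volume / surface_area))  # ~0.5 m
```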
Temperature and pressure
The temperature of the inner core can be estimated by considering both the theoretical and the experimentally demonstrated constraints on the melting temperature of impure iron at the pressure which iron is under at the boundary of the inner core (about 330 GPa). These considerations suggest that its temperature is about 5,700 K (5,400 °C; 9,800 °F). The pressure in the Earth's inner core is slightly higher than it is at the boundary between the outer and inner cores: it ranges from about 330 to 360 gigapascals (3,300,000 to 3,600,000 atm). Iron can be solid at such high temperatures only because its melting temperature increases dramatically at pressures of that magnitude (see the Clausius–Clapeyron relation).
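For reference, a standard form of the Clausius–Clapeyron relation mentioned above is

$\frac{dT_m}{dP} = \frac{T_m\,\Delta v}{L},$

where $T_m$ is the melting temperature, $\Delta v$ the change in specific volume on melting, and $L$ the latent heat of fusion. Since iron expands on melting ($\Delta v > 0$), the melting temperature rises with pressure, which is why the inner core can be solid even while being hotter than the liquid outer core.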
A report published in the journal Science concludes that the melting temperature of iron at the inner core boundary is 6230 ± 500 kelvin, roughly 1000 kelvin higher than previous estimates.
The Earth's inner core is thought to be slowly growing as the liquid outer core at the boundary with the inner core cools and solidifies due to the gradual cooling of the Earth's interior (about 100 degrees Celsius per billion years). Many scientists had initially expected that, because the solid inner core was originally formed by a gradual cooling of molten material, and continues to grow as a result of that same process, the inner core would be found to be homogeneous. It was even suggested that Earth's inner core might be a single crystal of iron. However, this prediction was disproved by observations indicating that in fact there is a degree of disorder within the inner core. Seismologists have found that the inner core is not completely uniform, but instead contains large-scale structures such that seismic waves pass more rapidly through some parts of the inner core than through others. In addition, the properties of the inner core's surface vary from place to place across distances as small as 1 km. This variation is surprising, since lateral temperature variations along the inner-core boundary are known to be extremely small (this conclusion is confidently constrained by magnetic field observations). Recent discoveries suggest that the solid inner core itself is composed of layers, separated by a transition zone about 250 to 400 km thick. If the inner core grows by small frozen sediments falling onto its surface, then some liquid can also be trapped in the pore spaces and some of this residual fluid may still persist to some small degree in much of its interior.
Because the inner core is not rigidly connected to the Earth's solid mantle, the possibility that it rotates slightly faster or slower than the rest of Earth has long been entertained. In the 1990s, seismologists made various claims about detecting this kind of super-rotation by observing changes in the characteristics of seismic waves passing through the inner core over several decades, using the aforementioned property that it transmits waves faster in some directions. Estimates of this super-rotation are around one degree of extra rotation per year.
Growth of the inner core is thought to play an important role in the generation of Earth's magnetic field by dynamo action in the liquid outer core. This occurs mostly because it cannot dissolve the same amount of light elements as the outer core and therefore freezing at the inner core boundary produces a residual liquid that contains more light elements than the overlying liquid. This causes it to become buoyant and helps drive convection of the outer core. The existence of the inner core also changes the dynamic motions of liquid in the outer core as it grows and may help fix the magnetic field since it is expected to be a great deal more resistant to flow than the outer core liquid (which is expected to be turbulent).
Speculation also continues that the inner core might have exhibited a variety of internal deformation patterns. This may be necessary to explain why seismic waves pass more rapidly in some directions than in others. Because thermal convection alone appears to be improbable, any buoyant convection motions will have to be driven by variations in composition or abundance of liquid in its interior. S. Yoshida and colleagues proposed a novel mechanism whereby deformation of the inner core can be caused by a higher rate of freezing at the equator than at polar latitudes, and S. Karato proposed that changes in the magnetic field might also deform the inner core slowly over time.
There is an East–West asymmetry in the inner core seismological data. There is a model which explains this by differences at the surface of the inner core: melting in one hemisphere and crystallization in the other. The western hemisphere of the inner core may be crystallizing, whereas the eastern hemisphere may be melting. This may lead to enhanced magnetic field generation in the crystallizing hemisphere, creating the asymmetry in the Earth's magnetic field.
Extrapolating from observations of the cooling of the inner core, it is estimated that the current solid inner core formed approximately 2 to 4 billion years ago from what was originally an entirely molten core. If true, this would mean that the Earth's solid inner core is not a primordial feature that was present during the planet's formation, but a feature younger than the Earth (the Earth is about 4.5 billion years old).
- Monnereau, Marc; Calvet, Marie; Margerin, Ludovic; Souriau, Annie (May 21, 2010). "Lopsided Growth of Earth's Inner Core". Science 328 (5981): 1014–1017. Bibcode:2010Sci...328.1014M. doi:10.1126/science.1186212. PMID 20395477
- E. R. Engdahl; E. A. Flynn & R. P. Massé (1974). "Differential PkiKP travel times and the radius of the core". Geophys. J. R. Astron. Soc. 40 (3): 457–463. Bibcode:1974GeoJI..39..457E. doi:10.1111/j.1365-246X.1974.tb05467.x.
- Society, National Geographic (2015-08-17). "core". National Geographic Education. Retrieved 2016-01-30.
- D. Alfè; M. Gillan & G. D. Price (January 30, 2002). "Composition and temperature of the Earth's core constrained by combining ab initio calculations and seismic data" (PDF). Earth and Planetary Science Letters (Elsevier) 195 (1–2): 91–98. Bibcode:2002E&PSL.195...91A. doi:10.1016/S0012-821X(01)00568-4.
- Edmond A. Mathez, ed. (2000). EARTH: INSIDE AND OUT. American Museum of Natural History.
- John C. Butler (1995). "Class Notes - The Earth's Interior". Physical Geology Grade Book. University of Houston. Retrieved 30 August 2011.
- Although another discontinuity is named after Lehmann, this usage still can be found: see for example: Robert E Krebs (2003). The basics of earth science. Greenwood Publishing Company. ISBN 0-313-31930-8.,and From here to "hell ," or the D layer, About.com
- Hung Kan Lee (2002). International handbook of earthquake and engineering seismology; volume 1. Academic Press. p. 926. ISBN 0-12-440652-1.
- William J. Cromie (1996-08-15). "Putting a New Spin on Earth's Core". Harvard Gazette. Retrieved 2007-05-22.
- A. M. Dziewonski and F. Gilbert (1971-12-24). "Solidity of the Inner Core of the Earth inferred from Normal Mode Observations". Nature 234 (5330): 465–466. Bibcode:1971Natur.234..465D. doi:10.1038/234465a0.
- Robert Roy Britt (2005-04-14). "Finally, a Solid Look at Earth's Core". Retrieved 2007-05-22.
- Lars Stixrude; Evgeny Waserman & Ronald Cohen (November 1997). "Composition and temperature of Earth's inner core". Journal of Geophysical Research (American Geophysical Union) 102 (B11): 24729–24740. Bibcode:1997JGR...10224729S. doi:10.1029/97JB02125.
- Eugene C. Robertson (January 2011). "The Interior of the earth". United States Geological Survey.
- Badro, James; Fiquet, Guillaume; Guyot, François; Gregoryanz, Eugene; Occelli, Florent; Antonangeli, Daniele; Matteo (2007). "Effect of light elements on the sound velocities in solid iron: Implications for the composition of Earth's core". Earth and Planetary Science Letters 254: 233–238. Bibcode:2007E&PSL.254..233B. doi:10.1016/j.epsl.2006.11.025.
- Wootton, Anne (September 2006) "Earth's Inner Fort Knox" Discover 27(9): p.18;
- David. R. Lide, ed. (2006–2007). CRC Handbook of Chemistry and Physics (87th ed.). pp. j14–13.
- Anneli Aitta (2006-12-01). "Iron melting curve with a tricritical point". Journal of Statistical Mechanics: Theory and Experiment (iop) 2006 (12): 12015–12030. arXiv:cond-mat/0701283. Bibcode:2006JSMTE..12..015A. doi:10.1088/1742-5468/2006/12/P12015. or see preprints http://arxiv.org/pdf/cond-mat/0701283 , http://arxiv.org/pdf/0807.0187 .
- S. Anzellini; A. Dewaele; M. Mezouar; P. Loubeyre & G. Morard (2013). "Melting of Iron at Earth’s Inner Core Boundary Based on Fast X-ray Diffraction". Science (AAAS) 340 (6136): 464–466. doi:10.1126/science.1233514.
- van Hunen, J.; van den Berg, A.P. (2007). "Plate tectonics on the early Earth: Limitations imposed by strength and buoyancy of subducted lithosphere". Lithos 103 (1–2): 217–235. Bibcode:2008Litho.103..217V. doi:10.1016/j.lithos.2007.09.016.
- Broad, William J. (1995-04-04). "The Core of the Earth May Be a Gigantic Crystal Made of Iron". NY Times. ISSN 0362-4331. Retrieved 2010-12-21.
- Robert Sanders (1996-11-13). "Earth's inner core not a monolithic iron crystal, say UC Berkeley seismologist". Retrieved 2007-05-22.
- Andrew Jephcoat and Keith Refson (2001-09-06). "Earth science: Core beliefs". Nature 413 (6851): 27–30. doi:10.1038/35092650. PMID 11544508.
- Kazuro Hirahara; Toshiki Ohtaki & Yasuhiro Yoshida (1994). "Seismic structure near the inner core-outer core boundary". Geophys. Res. Lett. (American Geophysical Union) 51 (16): 157–160. Bibcode:1994GeoRL..21..157K. doi:10.1029/93GL03289.
- Aaurno, J. M.; Brito, D.; Olson, P. L. (1996). "Mechanics of inner core super-rotation". Geophysical Research Letters 23 (23): 3401–3404. Bibcode:1996GeoRL..23.3401A. doi:10.1029/96GL03258
- Xu, Xiaoxia; Song, Xiaodong (2003). "Evidence for inner core super-rotation from time-dependent differential PKP traveltimes observed at Beijing Seismic Network". Geophysical Journal International 152 (3): 509–514. Bibcode:2003GeoJI.152..509X. doi:10.1046/j.1365-246X.2003.01852.x
- G Poupinet, R Pillet, A Souriau (1983). "Possible heterogeneity of the Earth's core deduced from PKIKP travel times". Nature 305: 204–206. doi:10.1038/305204a0.
- T. Yukutake (1998). "Implausibility of thermal convection in the Earth's solid inner core". Phys. Earth Planet. Int. 108 (1): 1–13. Bibcode:1998PEPI..108....1Y. doi:10.1016/S0031-9201(98)00097-1.
- S.I. Yoshida; I. Sumita & M. Kumazawa (1996). "Growth model of the inner core coupled with the outer core dynamics and the resulting elastic anisotropy". Journal of Geophysical Research B 101: 28085–28103. Bibcode:1996JGR...10128085Y. doi:10.1029/96JB02700.
- S. I. Karato (1999). "Seismic anisotropy of the Earth's inner core resulting from flow induced by Maxwell stresses". Nature 402 (6764): 871–873. Bibcode:1999Natur.402..871K. doi:10.1038/47235.
- Alboussière, T.; Deguen, R.; Melzani, M. (2010). "Melting-induced stratification above the Earth's inner core due to convective translation". Nature 466 (7307): 744–747. arXiv:1201.1201. Bibcode:2010Natur.466..744A. doi:10.1038/nature09257. PMID 20686572.
- "Figure 1: East–west asymmetry in inner-core growth and magnetic field generation." from Finlay, Christopher C. (2012). "Core processes: Earth's eccentric magnetic field". Nature Geoscience 5: 523–524. doi:10.1038/ngeo1516.
- J.A. Jacobs (1953). "The Earth's inner core". Nature 172 (4372): 297–298. Bibcode:1953Natur.172..297J. doi:10.1038/172297a0. | https://en.wikipedia.org/wiki/Inner_core |
Simple Definition of solution
: something that is used or done to deal with and end a problem : something that solves a problem
: the act of solving something
: a correct answer to a problem, puzzle, etc.
Full Definition of solution
1 a : an action or process of solving a problem
b : an answer to a problem : explanation; specifically : a set of values of the variables that satisfies an equation
2 a : an act or the process by which a solid, liquid, or gaseous substance is homogeneously mixed with a liquid or sometimes a gas or solid
b : a homogeneous mixture formed by this process; especially : a single-phase liquid system
c : the condition of being dissolved
3 : a bringing or coming to an end or into a state of discontinuity
Examples of solution
Medication may not be the best solution for the patient's condition.
The solution is simple: you need to spend less money.
She made a solution of baking soda and water.
a 40 percent saline solution
He rinsed the contact lens with saline solution.
the solution of sucrose in water
Origin of solution
Middle English solucion explanation, dispersal of bodily humors, from Anglo-French, from Latin solution-, solutio, from solvere to loosen, solve
First Known Use: 14th century
SOLUTION Defined for Kids
Definition of solution
1 : the act or process of solving <His solution to the problem was to wait.>
2 : an answer to a problem : explanation <The solution of the math problem is on the board.>
3 : the act or process by which a solid, liquid, or gas is dissolved in a liquid
4 : a liquid in which something has been dissolved <a solution of sugar in water>
Medical Definition of solution
1 a : an act or the process by which a solid, liquid, or gaseous substance is homogeneously mixed with a liquid or sometimes a gas or solid — called also dissolution
b : a homogeneous mixture formed by this process
2 a : a liquid containing a dissolved substance <an aqueous solution>
b : a liquid and usually aqueous medicinal preparation with the solid ingredients soluble
c : the condition of being dissolved <a substance in solution> | http://www.merriam-webster.com/dictionary/solution
In mathematics, a recurrence relation is an equation that recursively defines a sequence or multidimensional array of values, once one or more initial terms are given: each further term of the sequence or array is defined as a function of the preceding terms.
The term difference equation sometimes (and for the purposes of this article) refers to a specific type of recurrence relation. However, "difference equation" is frequently used to refer to any recurrence relation.
An example of a recurrence relation is the logistic map:

$x_{n+1} = r x_n (1 - x_n),$
with a given constant r; given the initial term x0 each subsequent term is determined by this relation.
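A minimal Python sketch of iterating this recurrence; the values of r and x0 below are arbitrary example inputs, not values from the text:

```python
# Iterate the logistic map x_{n+1} = r * x_n * (1 - x_n).
def logistic_map(r, x0, steps):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

print(logistic_map(r=3.5, x0=0.2, steps=5))
```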
Solving a recurrence relation means obtaining a closed-form solution: a non-recursive function of n.
The Fibonacci numbers are the archetype of a linear, homogeneous recurrence relation with constant coefficients (see below). They are defined using the linear recurrence relation

$F_n = F_{n-1} + F_{n-2}$

with seed values:

$F_0 = 0, \qquad F_1 = 1.$
Explicitly, the recurrence yields the equations:

$F_2 = F_1 + F_0, \quad F_3 = F_2 + F_1, \quad F_4 = F_3 + F_2, \quad \ldots$
We obtain the sequence of Fibonacci numbers, which begins:
- 0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, ...
It can be solved by methods described below, yielding a closed-form expression that involves powers of the two roots of the characteristic polynomial $t^2 = t + 1$; the generating function of the sequence is the rational function

$\frac{t}{1 - t - t^2}.$
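As a quick check, the recurrence and the closed form can be compared in a few lines of Python; the closed form below is Binet's formula, built from the two roots of t^2 = t + 1:

```python
from math import sqrt

def fib_recurrence(n):
    """F_n from F_n = F_(n-1) + F_(n-2), with F_0 = 0 and F_1 = 1."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

def fib_closed_form(n):
    """Binet's formula: F_n = (phi**n - psi**n) / sqrt(5)."""
    phi = (1 + sqrt(5)) / 2   # the two roots of t**2 - t - 1 = 0
    psi = (1 - sqrt(5)) / 2
    return round((phi**n - psi**n) / sqrt(5))

assert all(fib_recurrence(n) == fib_closed_form(n) for n in range(30))
print([fib_recurrence(n) for n in range(10)])  # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
```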
A simple example of a multidimensional recurrence relation is given by the binomial coefficients $\binom{n}{k}$, which count the number of ways of selecting k out of a set of n elements. They can be computed by the recurrence relation

$\binom{n}{k} = \binom{n-1}{k-1} + \binom{n-1}{k},$
with the base cases $\binom{n}{0} = \binom{n}{n} = 1$. Using this formula to compute the values of all binomial coefficients generates an infinite array called Pascal's triangle. The same values can also be computed directly by a different formula that is not a recurrence, but that requires multiplication and not just addition to compute:

$\binom{n}{k} = \frac{n!}{k!\,(n-k)!}.$
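A short sketch contrasting the additive recurrence with the multiplicative formula (the recursive version is exponentially slow and is meant only to mirror the definition):

```python
from math import factorial

def binomial_by_recurrence(n, k):
    if k == 0 or k == n:
        return 1                      # base cases
    return (binomial_by_recurrence(n - 1, k - 1)
            + binomial_by_recurrence(n - 1, k))

def binomial_direct(n, k):
    return factorial(n) // (factorial(k) * factorial(n - k))

assert all(binomial_by_recurrence(n, k) == binomial_direct(n, k)
           for n in range(10) for k in range(n + 1))
```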
Linear homogeneous recurrence relations with constant coefficients
An order-d linear homogeneous recurrence relation with constant coefficients is an equation of the form

$a_n = c_1 a_{n-1} + c_2 a_{n-2} + \cdots + c_d a_{n-d},$
where the d coefficients ci (for all i) are constants.
More precisely, this is an infinite list of simultaneous linear equations, one for each n>d−1. A sequence that satisfies a relation of this form is called a linear recurrence sequence or LRS. There are d degrees of freedom for LRS, i.e., the initial values can be taken to be any values but then the linear recurrence determines the sequence uniquely.
The same coefficients yield the characteristic polynomial (also "auxiliary polynomial")

$p(t) = t^d - c_1 t^{d-1} - c_2 t^{d-2} - \cdots - c_d,$
whose d roots play a crucial role in finding and understanding the sequences satisfying the recurrence. If the roots r1, r2, ... are all distinct, then the solution to the recurrence takes the form

$a_n = k_1 r_1^n + k_2 r_2^n + \cdots + k_d r_d^n,$
where the coefficients ki are determined in order to fit the initial conditions of the recurrence. When the same roots occur multiple times, the terms in this formula corresponding to the second and later occurrences of the same root are multiplied by increasing powers of n. For instance, if the characteristic polynomial can be factored as $(x - r)^3$, with the same root r occurring three times, then the solution would take the form

$a_n = k_1 r^n + k_2 n r^n + k_3 n^2 r^n.$
As well as the Fibonacci numbers, other sequences generated by linear homogeneous recurrences include the Lucas numbers and Lucas sequences, the Jacobsthal numbers, the Pell numbers and more generally the solutions to Pell's equation.
Rational generating function
Theorem: Linear recursive sequences are precisely the sequences whose generating function is a rational function. The denominator is the polynomial obtained from the auxiliary polynomial by reversing the order of the coefficients, and the numerator is determined by the initial values of the sequence.
The simplest cases are periodic sequences, $a_n = a_{n-d}$ for $n \ge d$, which have sequence $a_0, a_1, \ldots, a_{d-1}, a_0, \ldots$ and generating function a sum of geometric series:

$\frac{a_0 + a_1 x + \cdots + a_{d-1} x^{d-1}}{1 - x^d}.$
More generally, given the recurrence relation:

$a_n = c_1 a_{n-1} + c_2 a_{n-2} + \cdots + c_d a_{n-d} \qquad (n \ge d)$

with generating function

$A(x) = a_0 + a_1 x + a_2 x^2 + \cdots,$

the series is annihilated at $a_d$ and above by the polynomial:

$1 - c_1 x - c_2 x^2 - \cdots - c_d x^d.$

That is, multiplying the generating function by the polynomial yields

$b_n = a_n - c_1 a_{n-1} - c_2 a_{n-2} - \cdots - c_d a_{n-d}$

as the coefficient on $x^n$, which vanishes (by the recurrence relation) for n ≥ d. Thus

$A(x)\left(1 - c_1 x - \cdots - c_d x^d\right) = b_0 + b_1 x + \cdots + b_{d-1} x^{d-1},$

so dividing yields

$A(x) = \frac{b_0 + b_1 x + \cdots + b_{d-1} x^{d-1}}{1 - c_1 x - \cdots - c_d x^d},$

expressing the generating function as a rational function.
The denominator is $x^d\,p(1/x)$, a transform of the auxiliary polynomial (equivalently, reversing the order of its coefficients); one could also use any multiple of this, but this normalization is chosen both because of the simple relation to the auxiliary polynomial, and so that $b_0 = a_0$.
Relationship to difference equations narrowly defined
The first difference of a sequence $a_n$ is defined as

$\Delta(a_n) = a_{n+1} - a_n.$

The second difference is defined as

$\Delta^2(a_n) = \Delta(a_{n+1}) - \Delta(a_n),$

which can be simplified to

$\Delta^2(a_n) = a_{n+2} - 2a_{n+1} + a_n.$

More generally: the kth difference of the sequence an, written as $\Delta^k(a_n)$, is defined recursively as

$\Delta^k(a_n) = \Delta^{k-1}(a_{n+1}) - \Delta^{k-1}(a_n).$
(The sequence and its differences are related by a binomial transform.) The more restrictive definition of difference equation is an equation composed of an and its kth differences. (A widely used broader definition treats "difference equation" as synonymous with "recurrence relation". See for example rational difference equation and matrix difference equation.)
Actually, it is easily seen that the shifted terms can be recovered from the differences, e.g. $a_{n+1} = a_n + \Delta(a_n)$ and $a_{n+2} = a_n + 2\Delta(a_n) + \Delta^2(a_n)$. Thus, a difference equation can be defined as an equation that involves an, an-1, an-2 etc. (or equivalently an, an+1, an+2 etc.)
Since difference equations are a very common form of recurrence, some authors use the two terms interchangeably. For example, the difference equation

$\Delta^2(a_n) + \Delta(a_n) + a_n = 0$

is equivalent to the recurrence relation

$a_{n+2} = a_{n+1} - a_n.$
Thus one can solve many recurrence relations by rephrasing them as difference equations, and then solving the difference equation, analogously to how one solves ordinary differential equations. However, the Ackermann numbers are an example of a recurrence relation that does not map to a difference equation, much less to points on the solution of a differential equation.
From sequences to grids
Single-variable or one-dimensional recurrence relations are about sequences (i.e. functions defined on one-dimensional grids). Multi-variable or n-dimensional recurrence relations are about n-dimensional grids. Functions defined on n-grids can also be studied with partial difference equations.
For order 1, the recurrence

$a_n = r a_{n-1}$

has the solution $a_n = r^n$ with $a_0 = 1$, and the most general solution is $a_n = k r^n$ with $a_0 = k$. The characteristic polynomial equated to zero (the characteristic equation) is simply t − r = 0.
Solutions to such recurrence relations of higher order are found by systematic means, often using the fact that $a_n = r^n$ is a solution for the recurrence exactly when $t = r$ is a root of the characteristic polynomial. This can be approached directly or using generating functions (formal power series) or matrices.
Consider, for example, a recurrence relation of the form

$a_n = A a_{n-1} + B a_{n-2}.$

When does it have a solution of the same general form as $a_n = r^n$? Substituting this guess (ansatz) in the recurrence relation, we find that

$r^n = A r^{n-1} + B r^{n-2}$
must be true for all n > 1.
Dividing through by $r^{n-2}$, we get that all these equations reduce to the same thing:

$r^2 = A r + B,$
$r^2 - A r - B = 0,$

which is the characteristic equation of the recurrence relation. Solve for r to obtain the two roots λ1, λ2: these roots are known as the characteristic roots or eigenvalues of the characteristic equation. Different solutions are obtained depending on the nature of the roots: If these roots are distinct, we have the general solution

$a_n = C \lambda_1^n + D \lambda_2^n,$

while if they are identical (when $A^2 + 4B = 0$), we have

$a_n = C \lambda^n + D n \lambda^n.$
This is the most general solution; the two constants C and D can be chosen based on two given initial conditions a0 and a1 to produce a specific solution.
In the case of complex eigenvalues (which also gives rise to complex values for the solution parameters C and D), the use of complex numbers can be eliminated by rewriting the solution in trigonometric form. In this case we can write the eigenvalues as $\lambda_1, \lambda_2 = \alpha \pm \beta i$. Then it can be shown that

$C \lambda_1^n + D \lambda_2^n$

can be rewritten as (pp. 576–585):

$M^n \left( E \cos(\theta n) + F \sin(\theta n) \right) = G M^n \cos(\theta n - \delta),$

where $M = \sqrt{\alpha^2 + \beta^2}$, $\cos\theta = \alpha / M$, $\sin\theta = \beta / M$, $G = \sqrt{E^2 + F^2}$ and $\tan\delta = F / E$.

Here E and F (or equivalently, G and δ) are real constants which depend on the initial conditions. Using

$\alpha = \frac{A}{2}, \qquad \beta = \frac{\sqrt{-(A^2 + 4B)}}{2}, \qquad M = \sqrt{-B}, \qquad \cos\theta = \frac{A}{2\sqrt{-B}},$

one may simplify the solution given above as

$a_n = \left(\sqrt{-B}\right)^{n-2} \frac{a_2 \sin\big((n-1)\theta\big) - \sqrt{-B}\, a_1 \sin\big((n-2)\theta\big)}{\sin\theta},$

where a1 and a2 are the initial conditions.
In all cases—real distinct eigenvalues, real duplicated eigenvalues, and complex conjugate eigenvalues—the equation is stable (that is, the variable a converges to a fixed value [specifically, zero]) if and only if both eigenvalues are smaller than one in absolute value. In this second-order case, this condition on the eigenvalues can be shown to be equivalent to |A| < 1 − B < 2, which is equivalent to |B| < 1 and |A| < 1 − B.
The equation in the above example was homogeneous, in that there was no constant term. If one starts with the non-homogeneous recurrence

$b_n = A b_{n-1} + B b_{n-2} + K$

with constant term K, this can be converted into homogeneous form as follows: The steady state is found by setting bn = bn−1 = bn−2 = b* to obtain

$b^* = \frac{K}{1 - A - B}.$

Then the non-homogeneous recurrence can be rewritten in homogeneous form as

$(b_n - b^*) = A (b_{n-1} - b^*) + B (b_{n-2} - b^*),$
which can be solved as above.
The stability condition stated above in terms of eigenvalues for the second-order case remains valid for the general nth-order case: the equation is stable if and only if all eigenvalues of the characteristic equation are less than one in absolute value.
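The two-root method described above is mechanical enough to automate. The sketch below handles only the case of distinct real roots and uses the Fibonacci recurrence (A = B = 1) as a test input:

```python
from math import sqrt

def solve_second_order(A, B, a0, a1):
    """Closed form for a_n = A*a_(n-1) + B*a_(n-2); distinct real roots only."""
    disc = A * A + 4 * B
    assert disc > 0, "this sketch handles distinct real roots only"
    l1 = (A + sqrt(disc)) / 2         # roots of r**2 - A*r - B = 0
    l2 = (A - sqrt(disc)) / 2
    # Fit a_n = C*l1**n + D*l2**n to the initial conditions:
    #   C + D = a0  and  C*l1 + D*l2 = a1
    C = (a1 - a0 * l2) / (l1 - l2)
    D = a0 - C
    return lambda n: C * l1**n + D * l2**n

closed = solve_second_order(A=1, B=1, a0=0, a1=1)   # Fibonacci
seq = [0, 1]
for _ in range(10):
    seq.append(seq[-1] + seq[-2])
assert all(abs(closed(n) - seq[n]) < 1e-6 for n in range(12))
```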
Solving via linear algebra
A linearly recursive sequence y of order n

$y_{k+n} = c_{n-1} y_{k+n-1} + c_{n-2} y_{k+n-2} + \cdots + c_0 y_k$

is identical to

$y_{k+n} - c_{n-1} y_{k+n-1} - \cdots - c_0 y_k = 0.$

Expanded with n−1 identities of the kind $y_{k+n-1} = y_{k+n-1}, \ldots, y_{k+1} = y_{k+1}$, this n-th order equation is translated into a system of n first order linear equations,

$\vec{y}_{k+1} = \begin{pmatrix} y_{k+n} \\ y_{k+n-1} \\ \vdots \\ y_{k+1} \end{pmatrix} = \begin{pmatrix} c_{n-1} & c_{n-2} & \cdots & c_0 \\ 1 & 0 & \cdots & 0 \\ \vdots & \ddots & & \vdots \\ 0 & \cdots & 1 & 0 \end{pmatrix} \begin{pmatrix} y_{k+n-1} \\ y_{k+n-2} \\ \vdots \\ y_k \end{pmatrix} = C\, \vec{y}_k.$

Observe that the vector $\vec{y}_k$ can be computed by k applications of the companion matrix, C, to the initial state vector, $\vec{y}_0$: $\vec{y}_k = C^k \vec{y}_0$. Thereby, the entries of the sought sequence y appear as components of $\vec{y}_k$; in particular, the top component of $\vec{y}_k$ is $y_{k+n-1}$.

Eigendecomposition, $\vec{y}_k = C^k \vec{y}_0 = a_1 \lambda_1^k \vec{e}_1 + a_2 \lambda_2^k \vec{e}_2 + \cdots + a_n \lambda_n^k \vec{e}_n$, into eigenvalues, $\lambda_1, \lambda_2, \ldots, \lambda_n$, and eigenvectors, $\vec{e}_1, \vec{e}_2, \ldots, \vec{e}_n$, is used to compute $\vec{y}_k$. Thanks to the crucial fact that system C time-shifts every eigenvector, e, by simply scaling its components λ times,

$C \vec{e}_i = \lambda_i \vec{e}_i,$

that is, the time-shifted version of eigenvector, e, has components λ times larger; the eigenvector components are powers of λ, $\vec{e}_i = (\lambda_i^{n-1}, \ldots, \lambda_i, 1)^T$, and, thus, the recurrent linear homogeneous equation solution is a combination of exponential functions, $\vec{y}_k = \sum_{i=1}^{n} a_i \lambda_i^k \vec{e}_i$. The components $a_i$ can be determined out of initial conditions:

$\vec{y}_0 = \begin{pmatrix} y_{n-1} \\ \vdots \\ y_1 \\ y_0 \end{pmatrix} = \sum_{i=1}^{n} a_i \vec{e}_i = \begin{pmatrix} \vec{e}_1 & \vec{e}_2 & \cdots & \vec{e}_n \end{pmatrix} \vec{a} = E\, \vec{a}.$

Solving for coefficients,

$\vec{a} = E^{-1} \vec{y}_0.$

This also works with arbitrary boundary conditions — known values $y_a$ and $y_b$ at two arbitrary indices, not necessarily the initial ones — since the corresponding components of $\sum_i a_i \lambda_i^k \vec{e}_i$ can be equated to the known values and the resulting linear system solved for $\vec{a}$.
This description is really no different from the general method above; however, it is more succinct. It also works nicely for situations like

$a_n = a_{n-1} - b_{n-1},$
$b_n = 2 a_{n-1} + b_{n-1},$
where there are several linked recurrences.
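A minimal sketch of this view for an order-2 recurrence: the state vector is advanced by repeated application of the companion matrix, with Fibonacci again as the test case:

```python
def companion_step(state, coeffs):
    """One application of the order-2 companion matrix C.

    state  = (y_(k+1), y_k)
    coeffs = (c1, c0) for the recurrence y_(k+2) = c1*y_(k+1) + c0*y_k
    """
    top, bottom = state
    c1, c0 = coeffs
    return (c1 * top + c0 * bottom, top)

state = (1, 0)                    # (y_1, y_0): Fibonacci seed values
for _ in range(9):
    state = companion_step(state, (1, 1))
print(state[0])                   # y_10 = 55
```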
Solving with z-transforms
Certain difference equations - in particular, linear constant coefficient difference equations - can be solved using z-transforms. The z-transforms are a class of integral transforms that lead to more convenient algebraic manipulations and more straightforward solutions. There are cases in which obtaining a direct solution would be all but impossible, yet solving the problem via a thoughtfully chosen integral transform is straightforward.
Given a linear homogeneous recurrence relation with constant coefficients of order d, let p(t) be the characteristic polynomial (also "auxiliary polynomial")

$p(t) = t^d - c_1 t^{d-1} - c_2 t^{d-2} - \cdots - c_d,$

such that each ci corresponds to each ci in the original recurrence relation (see the general form above). Suppose λ is a root of p(t) having multiplicity r. This is to say that $(t - \lambda)^r$ divides p(t). The following two properties hold:
- Each of the r sequences $\lambda^n, n\lambda^n, n^2\lambda^n, \ldots, n^{r-1}\lambda^n$ satisfies the recurrence relation.
- Any sequence satisfying the recurrence relation can be written uniquely as a linear combination of solutions constructed in part 1 as λ varies over all distinct roots of p(t).
As a result of this theorem a linear homogeneous recurrence relation with constant coefficients can be solved in the following manner:
- Find the characteristic polynomial p(t).
- Find the roots of p(t) counting multiplicity.
- Write $a_n$ as a linear combination of all the roots (counting multiplicity as shown in the theorem above) with unknown coefficients $b_i$:

$a_n = \left( b_1 \lambda_1^n + b_2 n \lambda_1^n + b_3 n^2 \lambda_1^n + \cdots + b_r n^{r-1} \lambda_1^n \right) + \cdots + \left( b_{d-q+1} \lambda_*^n + \cdots + b_d n^{q-1} \lambda_*^n \right)$

- This is the general solution to the original recurrence relation. (q is the multiplicity of λ*)
- Equate each $a_n$ from part 3 (plugging in n = 0, ..., d−1 into the general solution of the recurrence relation) with the known values from the original recurrence relation. However, the values an from the original recurrence relation used do not usually have to be contiguous: excluding exceptional cases, just d of them are needed (i.e., for an original linear homogeneous recurrence relation of order 3 one could use the values a0, a1, a4). This process will produce a linear system of d equations with d unknowns. Solving these equations for the unknown coefficients of the general solution and plugging these values back into the general solution will produce the particular solution to the original recurrence relation that fits the original recurrence relation's initial conditions (as well as all subsequent values of the original recurrence relation).
The method for solving linear differential equations is similar to the method above—the "intelligent guess" (ansatz) for linear differential equations with constant coefficients is eλx where λ is a complex number that is determined by substituting the guess into the differential equation.
This is not a coincidence. Considering the Taylor series of the solution to a linear differential equation:

$f(x) = \sum_{n=0}^{\infty} \frac{f^{(n)}(a)}{n!} (x - a)^n,$

it can be seen that the coefficients of the series are given by the nth derivative of f(x) evaluated at the point a. The differential equation provides a linear difference equation relating these coefficients.
This equivalence can be used to quickly solve for the recurrence relationship for the coefficients in the power series solution of a linear differential equation.
The rule of thumb (for equations in which the polynomial multiplying the first term is non-zero at zero) is that:

$y^{[k]} \to f[n+k]\,(n+1)(n+2)\cdots(n+k)$

and more generally

$x^m y^{[k]} \to f[n+k-m]\,(n+1-m)(n+2-m)\cdots(n+k-m),$

where $y = \sum_{n=0}^{\infty} f[n]\,x^n$ is the power-series solution and $y^{[k]}$ denotes the kth derivative.
Example: The recurrence relationship for the Taylor series coefficients of the equation:

$y'' - y = 0$

is given by

$(n+1)(n+2) f[n+2] - f[n] = 0,$

that is,

$f[n+2] = \frac{f[n]}{(n+1)(n+2)}.$
This example shows how problems generally solved using the power series solution method taught in normal differential equation classes can be solved in a much easier way.
Example: The differential equation

$y' = a y$

has solution

$y = C e^{a x}.$

The conversion of the differential equation to a difference equation on the Taylor coefficients is

$(n+1) f[n+1] = a f[n], \qquad \text{i.e.,} \qquad f[n+1] = \frac{a\, f[n]}{n+1}.$
It is easy to see that the nth derivative of $e^{ax}$ evaluated at 0 is $a^n$.
Solving non-homogeneous recurrence relations
If the recurrence is inhomogeneous, a particular solution can be found by the method of undetermined coefficients, and the solution is the sum of the solution of the homogeneous recurrence and the particular solution. Another method to solve an inhomogeneous recurrence is the method of symbolic differencing. For example, consider the following recurrence:

$a_{n+1} = a_n + 1.$

This is an inhomogeneous recurrence. If we substitute n ↦ n+1, we obtain the recurrence

$a_{n+2} = a_{n+1} + 1.$

Subtracting the original recurrence from this equation yields

$a_{n+2} - a_{n+1} = a_{n+1} - a_n,$

or equivalently

$a_{n+2} = 2 a_{n+1} - a_n.$
This is a homogeneous recurrence, which can be solved by the methods explained above. In general, if a linear recurrence has the form

$a_{n+k} = \lambda_{k-1} a_{n+k-1} + \lambda_{k-2} a_{n+k-2} + \cdots + \lambda_1 a_{n+1} + \lambda_0 a_n + p(n),$

where $\lambda_0, \lambda_1, \ldots, \lambda_{k-1}$ are constant coefficients and p(n) is the inhomogeneity, then if p(n) is a polynomial of degree r, this inhomogeneous recurrence can be reduced to a homogeneous recurrence by applying the method of symbolic differencing r + 1 times.
If

$P(x) = \sum_{n=0}^{\infty} p_n x^n$

is the generating function of the inhomogeneity, the generating function

$A(x) = \sum_{n=0}^{\infty} a_n x^n$

of the inhomogeneous recurrence

$a_n = c_1 a_{n-1} + \cdots + c_d a_{n-d} + p_n \qquad (n \ge d)$

with constant coefficients ci is derived from

$A(x)\left(1 - c_1 x - \cdots - c_d x^d\right) = P(x) + q(x),$

where q(x) is a polynomial of degree less than d accounting for the initial terms. If P(x) is a rational generating function, A(x) is also one. The case discussed above, where $p_n = K$ is a constant, emerges as one example of this formula, with P(x) = K/(1−x). Another example, the recurrence with linear inhomogeneity, arises in the definition of the schizophrenic numbers. The solution of homogeneous recurrences is incorporated as p = P = 0.
Solving first-order non-homogeneous recurrence relations with variable coefficients
Moreover, for the general first-order linear inhomogeneous recurrence relation with variable coefficients:

$a_{n+1} = f_n a_n + g_n, \qquad f_n \neq 0,$

there is also a nice method to solve it:

$a_{n+1} = \left( \prod_{k=0}^{n} f_k \right) \left( a_0 + \sum_{m=0}^{n} \frac{g_m}{\prod_{k=0}^{m} f_k} \right).$
General linear homogeneous recurrence relations
Many linear homogeneous recurrence relations may be solved by means of the generalized hypergeometric series. Special cases of these lead to recurrence relations for the orthogonal polynomials, and many special functions. For example, the solution to

$J_{n+1} = \frac{2n}{x} J_n - J_{n-1}$

is given by

$J_n = J_n(x),$

the Bessel function; recurrences satisfied by other special functions, such as the confluent hypergeometric series, can be solved in the same way.
Solving a first order rational difference equation
A first order rational difference equation has the form

$w_{t+1} = \frac{a w_t + b}{c w_t + d}.$

Such an equation can be solved by writing $w_t$ as a nonlinear transformation of another variable $x_t$ which itself evolves linearly. Then standard methods can be used to solve the linear difference equation in $x_t$.
Stability of linear higher-order recurrences
The linear recurrence of order d,

$a_n = c_1 a_{n-1} + c_2 a_{n-2} + \cdots + c_d a_{n-d},$

has the characteristic equation

$\lambda^d - c_1 \lambda^{d-1} - c_2 \lambda^{d-2} - \cdots - c_d = 0.$
The recurrence is stable, meaning that the iterates converge asymptotically to a fixed value, if and only if the eigenvalues (i.e., the roots of the characteristic equation), whether real or complex, are all less than unity in absolute value.
Stability of linear first-order matrix recurrences
In the first-order matrix difference equation

$\left[ x_t - x^* \right] = A \left[ x_{t-1} - x^* \right]$
with state vector x and transition matrix A, x converges asymptotically to the steady state vector x* if and only if all eigenvalues of the transition matrix A (whether real or complex) have an absolute value which is less than 1.
Stability of nonlinear first-order recurrences
Consider the nonlinear first-order recurrence

$x_n = f(x_{n-1}).$

This recurrence is locally stable, meaning that it converges to a fixed point x* from points sufficiently close to x*, if the slope of f in the neighborhood of x* is smaller than unity in absolute value: that is,

$| f'(x^*) | < 1.$
A nonlinear recurrence could have multiple fixed points, in which case some fixed points may be locally stable and others locally unstable; for continuous f two adjacent fixed points cannot both be locally stable.
A nonlinear recurrence relation could also have a cycle of period k for k > 1. Such a cycle is stable, meaning that it attracts a set of initial conditions of positive measure, if the composite function

$g(x) := f(f(\cdots f(x) \cdots)),$

with f appearing k times, is locally stable according to the same criterion:

$| g'(x^*) | < 1,$
where x* is any point on the cycle.
In a chaotic recurrence relation, the variable x stays in a bounded region but never converges to a fixed point or an attracting cycle; any fixed points or cycles of the equation are unstable. See also logistic map, dyadic transformation, and tent map.
Relationship to differential equations
When numerically solving an ordinary differential equation such as $y'(t) = f(t, y(t))$, $y(t_0) = y_0$, with Euler's method and a step size h, one calculates the values

$y_0 = y(t_0),\; y_1 = y(t_0 + h),\; y_2 = y(t_0 + 2h),\; \ldots$

by the recurrence

$y_{n+1} = y_n + h f(t_n, y_n), \qquad t_n = t_0 + n h.$
Systems of linear first order differential equations can be discretized exactly analytically using the methods shown in the discretization article.
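A minimal sketch of the Euler recurrence, tested on y' = y with y(0) = 1, whose exact solution is e^t:

```python
import math

def euler(f, t0, y0, h, steps):
    t, y = t0, y0
    for _ in range(steps):
        y = y + h * f(t, y)       # the recurrence y_(n+1) = y_n + h*f(t_n, y_n)
        t = t + h
    return y

approx = euler(lambda t, y: y, t0=0.0, y0=1.0, h=0.001, steps=1000)
print(approx, math.e)             # ~2.7169 versus e ~ 2.7183
```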
Some of the best-known difference equations have their origins in the attempt to model population dynamics. For example, the Fibonacci numbers were once used as a model for the growth of a rabbit population.
The logistic map is used either directly to model population growth, or as a starting point for more detailed models. In this context, coupled difference equations are often used to model the interaction of two or more populations. For example, the Nicholson-Bailey model for a host-parasite interaction is given by

$N_{t+1} = \lambda N_t e^{-a P_t},$
$P_{t+1} = c N_t \left( 1 - e^{-a P_t} \right),$
with Nt representing the hosts, and Pt the parasites, at time t.
Recurrence relations are also of fundamental importance in analysis of algorithms. If an algorithm is designed so that it will break a problem into smaller subproblems (divide and conquer), its running time is described by a recurrence relation.
A simple example is the time an algorithm takes to find an element in an ordered vector with $n$ elements, in the worst case.

A naive algorithm will search from left to right, one element at a time. The worst possible scenario is when the required element is the last, so the number of comparisons is $n$.
A better algorithm is called binary search. However, it requires a sorted vector. It will first check if the element is at the middle of the vector. If not, then it will check if the middle element is greater or lesser than the sought element. At this point, half of the vector can be discarded, and the algorithm can be run again on the other half. The number of comparisons will be given by

$c_1 = 1, \qquad c_n = 1 + c_{n/2},$

which will be close to $\log_2(n)$.
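The worst-case count can also be measured directly. The sketch below always discards the half that does not contain the sought element, forcing the worst case, and compares the count with log2(n):

```python
import math

def worst_case_comparisons(n):
    count = 0
    lo, hi = 0, n - 1
    while lo <= hi:
        count += 1                # one comparison per halving step
        mid = (lo + hi) // 2
        lo = mid + 1              # always recurse into the right half
    return count

for n in [8, 1024, 10**6]:
    print(n, worst_case_comparisons(n), math.log2(n))
```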
Digital signal processing
In digital signal processing, recurrence relations can model feedback in a system, where outputs at one time become inputs for future time. They thus arise in infinite impulse response (IIR) digital filters.
For example, the equation for a feedback comb filter of delay T is:

$y_t = x_t + \alpha\, y_{t-T},$

where $x_t$ is the input at time t, $y_t$ is the output at time t, and α controls how much of the delayed signal is fed back into the output. From this we can see that

$y_t = x_t + \alpha x_{t-T} + \alpha^2 x_{t-2T} + \cdots,$

i.e., the output contains progressively attenuated echoes of the input.
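A direct implementation of this recurrence; the impulse input makes the echo structure visible (the signal and parameter values are arbitrary examples):

```python
def comb_filter(x, alpha, T):
    y = []
    for t in range(len(x)):
        delayed = y[t - T] if t >= T else 0.0   # y_(t-T), zero before the start
        y.append(x[t] + alpha * delayed)
    return y

impulse = [1.0] + [0.0] * 9
print(comb_filter(impulse, alpha=0.5, T=3))
# echoes at t = 0, 3, 6, 9 with amplitudes 1, 0.5, 0.25, 0.125
```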
Recurrence relations, especially linear recurrence relations, are used extensively in both theoretical and empirical economics. In particular, in macroeconomics one might develop a model of various broad sectors of the economy (the financial sector, the goods sector, the labor market, etc.) in which some agents' actions depend on lagged variables. The model would then be solved for current values of key variables (interest rate, real GDP, etc.) in terms of exogenous variables and lagged endogenous variables. See also time series analysis.
- Holonomic sequences
- Iterated function
- Matrix difference equation
- Orthogonal polynomials
- Recursion (computer science)
- Lagged Fibonacci generator
- Master theorem
- Circle points segments proof
- Continued fraction
- Time scale calculus
- Integrodifference equation
- Combinatorial principles
- Infinite impulse response
- Greene, Daniel H.; Knuth, Donald E. (1982), "2.1.1 Constant coefficients – A) Homogeneous equations", Mathematics for the Analysis of Algorithms (2nd ed.), Birkhäuser, p. 17.
- Martino, Ivan; Martino, Luca (2013-11-14). "On the variety of linear recurrences and numerical semigroups". Semigroup Forum 88 (3): 569–574. doi:10.1007/s00233-013-9551-2. ISSN 0037-1912.
- Partial difference equations, Sui Sun Cheng, CRC Press, 2003, ISBN 978-0-415-29884-1
- Chiang, Alpha C., Fundamental Methods of Mathematical Economics, third edition, McGraw-Hill, 1984.
- Papanicolaou, Vassilis, "On the asymptotic stability of a class of linear difference equations," Mathematics Magazine 69(1), February 1996, 34–43.
- Maurer, Stephen B.; Ralston, Anthony (1998), Discrete Algorithmic Mathematics (2nd ed.), A K Peters, p. 609, ISBN 9781568810911.
- Cormen, T. et al, Introduction to Algorithms, MIT Press, 2009
- R. Sedgewick, F. Flajolet, An Introduction to the Analysis of Algorithms, Addison-Wesley, 2013
- Sargent, Thomas J., Dynamic Macroeconomic Theory, Harvard University Press, 1987.
- Batchelder, Paul M. (1967). An introduction to linear difference equations. Dover Publications.
- Miller, Kenneth S. (1968). Linear difference equations. W. A. Benjamin.
- Fillmore, Jay P.; Marx, Morris L. (1968). "Linear recursive sequences". SIAM Rev. 10 (3). pp. 324–353. JSTOR 2027658.
- Brousseau, Alfred (1971). Linear Recursion and Fibonacci Sequences. Fibonacci Association.
- Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, and Clifford Stein. Introduction to Algorithms, Second Edition. MIT Press and McGraw-Hill, 1990. ISBN 0-262-03293-7. Chapter 4: Recurrences, pp. 62–90.
- Graham, Ronald L.; Knuth, Donald E.; Patashnik, Oren (1994). Concrete Mathematics: A Foundation for Computer Science (2 ed.). Addison-Welsey. ISBN 0-201-55802-5.
- Enders, Walter (2010). Applied Econometric Times Series (3 ed.).
- Cull, Paul; Flahive, Mary; Robson, Robbie (2005). Difference Equations: From Rabbits to Chaos. Springer. ISBN 0-387-23234-6. chapter 7.
- Jacques, Ian (2006). Mathematics for Economics and Business (Fifth ed.). Prentice Hall. pp. 551–568. ISBN 0-273-70195-9. Chapter 9.1: Difference Equations.
- Minh, Tang; Van To, Tan (2006). "Using generating functions to solve linear inhomogeneous recurrence equations" (PDF). Proc. Int. Conf. Simulation, Modelling and Optimization, SMO'06. pp. 399–404.
- Polyanin, Andrei D. "Difference and Functional Equations: Exact Solutions". at EqWorld - The World of Mathematical Equations.
- Polyanin, Andrei D. "Difference and Functional Equations: Methods". at EqWorld - The World of Mathematical Equations.
- Wang, Xiang-Sheng; Wong, Roderick (2012). "Asymptotics of orthogonal polynomials via recurrence relations". Anal. Appl. 10 (2): 215–235. doi:10.1142/S0219530512500108.
- Hazewinkel, Michiel, ed. (2001), "Recurrence relation", Encyclopedia of Mathematics, Springer, ISBN 978-1-55608-010-4
- Weisstein, Eric W., "Recurrence Equation", MathWorld.
- Mathews, John H. "Homogeneous Difference Equations".
- "OEIS Index Rec". OEIS index to a few thousand examples of linear recurrences, sorted by order (number of terms) and signature (vector of values of the constant coefficients) | https://en.wikipedia.org/wiki/Recurrence_relation |
In the 1980s, scientists' view of the solar system's asteroids was essentially static: Asteroids that formed near the sun remained near the sun; those that formed farther out stayed on the outskirts.
But in the last decade, astronomers have detected asteroids with compositions unexpected for their locations in space: Those that looked like they formed in warmer environments were found further out in the solar system, and vice versa. Scientists considered these objects to be anomalous "rogue" asteroids.
But now, a new map developed by researchers from MIT and the Paris Observatory charts the size, composition, and location of more than 100,000 asteroids throughout the solar system, and shows that rogue asteroids are actually more common than previously thought. Particularly in the solar system's main asteroid belt — between Mars and Jupiter — the researchers found a compositionally diverse mix of asteroids.
The new asteroid map suggests that the early solar system may have undergone dramatic changes before the planets assumed their current alignment. For instance, Jupiter may have drifted closer to the sun, dragging with it a host of asteroids that originally formed in the colder edges of the solar system, before moving back out to its current position. Jupiter's migration may have simultaneously knocked around more close-in asteroids, scattering them outward.
"It's like Jupiter bowled a strike through the asteroid belt," says Francesca DeMeo, who did much of the mapping as a postdoc in MIT's Department of Earth, Atmospheric and Planetary Sciences. "Everything that was there moves, so you have this melting pot of material coming from all over the solar system."
DeMeo says the new map will help theorists flesh out such theories of how the solar system evolved early in its history. She and Benoit Carry of the Paris Observatory have published details of the map in Nature.
From a trickle to a river
To create a comprehensive asteroid map, the researchers first analyzed data from the Sloan Digital Sky Survey, which uses a large telescope in New Mexico to take in spectral images of hundreds of thousands of galaxies. Included in the survey is data from more than 100,000 asteroids in the solar system. DeMeo grouped these asteroids by size, location, and composition. She defined this last category by asteroids' origins — whether in a warmer or colder environment — a characteristic that can be determined by whether an asteroid's surface is more reflective at redder or bluer wavelengths.
The team then had to account for any observational biases. While the survey includes more than 100,000 asteroids, these are the brightest such objects in the sky. Asteroids that are smaller and less reflective are much harder to pick out, meaning that an asteroid map based on observations may unintentionally leave out an entire population of asteroids.
To avoid any bias in their mapping, the researchers determined that the survey most likely includes every asteroid down to a diameter of five kilometers. At this size limit, they were able to produce an accurate picture of the asteroid belt. The researchers grouped the asteroids by size and composition, and mapped them into distinct regions of the solar system where the asteroids were observed.
From their map, they observed that for larger asteroids, the traditional pattern holds true: The further one gets from the sun, the colder the asteroids appear. But for smaller asteroids, this trend seems to break down. Those that look to have formed in warmer environments can be found not just close to the sun, but throughout the solar system — and asteroids that resemble colder bodies beyond Jupiter can also be found in the inner asteroid belt, closer to Mars.
As the team writes in its paper, "the trickle of asteroids discovered in unexpected locations has turned into a river. We now see that all asteroid types exist in every region of the main belt."
A shifting solar system
The compositional diversity seen in this new asteroid map may add weight to a theory of planetary migration called the Grand Tack model. This model lays out a scenario in which Jupiter, within the first few million years of the solar system's creation, migrated as close to the sun as Mars is today. During its migration, Jupiter may have moved right through the asteroid belt, scattering its contents and repopulating it with asteroids from both the inner and outer solar system before moving back out to its current position — a picture that is very different from the traditional, static view of a solar system that formed and stayed essentially in place for the past 4.5 billion years.
"That [theory] has been completely turned on its head," DeMeo says. "Today we think the absolute opposite: Everything's been moved around a lot and the solar system has been very dynamic."
DeMeo adds that the early pinballing of asteroids around the solar system may have had big impacts — literally — on Earth. For instance, colder asteroids that formed further out likely contained ice. When they were brought closer in by planetary migrations, they may have collided with Earth, leaving remnants of ice that eventually melted into water.
"The story of what the asteroid belt is telling us also relates to how Earth developed water, and how it stayed in this Goldilocks region of habitability today," DeMeo says.
Jennifer Chu | EurekAlert! | http://www.innovations-report.com/html/reports/physics-astronomy/039-rogue-039-asteroids-norm-225571.html
4.40625 | Python Reference cs101: Unit 3 (includes Units 1-2)

Arithmetic Expressions

addition: <Number> + <Number>
outputs the sum of the two input numbers

multiplication: <Number> * <Number>
outputs the product of the two input numbers

subtraction: <Number> - <Number>
outputs the difference between the two input numbers

division: <Number> / <Number>
outputs the result of dividing the first number by the second
Note: if both numbers are whole numbers, the result is truncated to just the whole number part.

modulo: <Number> % <Number>
outputs the remainder of dividing the first number by the second

exponentiation: <Base> ** <Power>
outputs the result of raising <Base> to the <Power> power (multiplying <Base> by itself <Power> number of times)

Comparisons

equality: <Value> == <Value>
outputs True if the two input values are equal, False otherwise

inequality: <Value> != <Value>
outputs True if the two input values are not equal, False otherwise

greater than: <Number1> > <Number2>
outputs True if Number1 is greater than Number2

less than: <Number1> < <Number2>
outputs True if Number1 is less than Number2

greater than or equal to: <Number1> >= <Number2>
outputs True if Number1 is not less than Number2

less than or equal to: <Number1> <= <Number2>
outputs True if Number1 is not greater than Number2
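For example, a minimal sketch of these operators in use (expected output shown in comments; the truncation note above describes Python 2 behavior, which this reference appears to assume — in Python 3, // truncates and plain / does not):

    print(3 + 4)     # 7   addition
    print(3 * 4)     # 12  multiplication
    print(7 - 2)     # 5   subtraction
    print(7 // 2)    # 3   whole-number division, truncated
    print(7 % 2)     # 1   modulo: the remainder
    print(2 ** 5)    # 32  exponentiation
    print(3 == 3)    # True
    print(3 != 4)    # True
    print(3 >= 3)    # True ("not less than")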
Variables and Assignment

Names
A <Name> in Python can be any sequence of letters, numbers, and underscores (_) that does not start with a number. We usually use all lowercase letters for variable names, but capitalization must match exactly. Here are some valid examples of names in Python (but most of these would not be good choices to actually use in your programs):
my_name
one2one
Dorina
this_is_a_very_long_variable_name

Assignment Statement
An assignment statement assigns a value to a variable:
<Name> = <Expression>
After the assignment statement, the variable <Name> refers to the value of the <Expression> on the right side of the assignment. An <Expression> is any Python construct that has a value.

Multiple Assignment
We can put more than one name on the left side of an assignment statement, and a corresponding number of expressions on the right side:
<Name1>, <Name2>, ... = <Expression1>, <Expression2>, ...
All of the expressions on the right side are evaluated first. Then, each name on the left side is assigned to reference the value of the corresponding expression on the right side. This is handy for swapping variable values. For example, s, t = t, s would swap the values of s and t, so after the assignment statement s now refers to the previous value of t, and t refers to the previous value of s.
Note: what is really going on here is a bit different. The multiple values are packed in a tuple (which is similar to the list data type introduced in Unit 3, but an immutable version of a list), and then unpacked into its components when there are multiple names on the left side. This distinction is not important for what we do in cs101, but does become important in some contexts.
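A small sketch of assignment and the swap idiom described above (the names s, t, and temp are illustrative):

    s, t = "first", "second"
    s, t = t, s          # right side evaluated first, then unpacked
    print(s, t)          # second first

    # The same swap written with an explicit temporary variable:
    temp = s
    s = t
    t = temp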
Strings

A string is a sequence of characters surrounded by quotes. The quotes can be either single or double quotes, but the quotes at both ends of the string must be the same type. Here are some examples of strings in Python:
"silly"
'string'
"I'm a valid string, even with a single quote in the middle!"

String Concatenation
We can use the + operator on strings, but it has a different meaning than when it is used on numbers.
string concatenation: <String> + <String>
outputs the concatenation of the two input strings (pasting the strings together with no space between them)
We can also use the multiplication operator on strings:
string multiplication: <String> * <Number>
outputs a string that is <Number> copies of the input <String> pasted together

Indexing Strings
The indexing operator provides a way to extract subsequences of characters from a string.
string indexing: <String>[<Number>]
outputs a single-character string containing the character at position <Number> of the input <String>. Positions in the string are counted starting from 0, so s[1] would output the second character in s. If the <Number> is negative, positions are counted from the end of the string: s[-1] is the last character in s.
string extraction: <String>[<Start Number>:<Stop Number>]
outputs a string that is the subsequence of the input string starting from position <Start Number> and ending just before position <Stop Number>. If <Start Number> is missing, starts from the beginning of the input string; if <Stop Number> is missing, goes to the end of the input string.

Length
length: len(<String>)
outputs the number of characters in <String>
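A quick sketch of these string operations (the variable s is just an example):

    s = "Udacity"
    print(s + "!")         # Udacity!  (concatenation)
    print("ab" * 3)        # ababab    (string multiplication)
    print(s[0])            # U   (positions count from 0)
    print(s[1])            # d   (the second character)
    print(s[-1])           # y   (negative positions count from the end)
    print(s[1:4])          # dac (position 1 up to, not including, 4)
    print(s[:3], s[3:])    # Uda city (missing bounds default to the ends)
    print(len(s))          # 7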
find

The find method provides a way to find subsequences of characters in strings.
find: <Search String>.find(<Target String>)
outputs a number giving the position in <Search String> where <Target String> first appears. If there is no occurrence of <Target String> in <Search String>, outputs -1.
To find later occurrences, we can also pass in a number to find:
find after: <Search String>.find(<Target String>, <Start Number>)
outputs a number giving the position in <Search String> where <Target String> first appears that is at or after the position given by <Start Number>. If there is no occurrence of <Target String> in <Search String> at or after <Start Number>, outputs -1.

str
(Introduced in the last homework question, not in the lecture.)
str: str(<Number>)
outputs a string that represents the input number. For example, str(23) outputs the string '23'.

Procedures

A procedure takes inputs and produces outputs. It is an abstraction that provides a way to use the same code to operate on different data by passing in that data as its inputs.
Defining a procedure:
def <Name>(<Parameters>):
    <Block>
The <Parameters> are the inputs to the procedure. There is one <Name> for each input, in order, separated by commas. There can be any number of parameters (including none).
To produce outputs:
return <Expression>, <Expression>, ...
There can be any number of expressions following the return (including none, in which case the output of the procedure is the special value None).
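A short sketch of find and str (the quote string is only an example):

    quote = "to be or not to be"
    print(quote.find("be"))        # 3   first occurrence
    print(quote.find("be", 4))     # 16  first occurrence at or after position 4
    print(quote.find("question"))  # -1  not found
    print(str(23) + " skidoo")     # 23 skidoo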
Using a procedure:
<Procedure>(<Input>, <Input>, ...)
The number of inputs must match the number of parameters. The value of each input is assigned to the corresponding parameter name in order, and then the block is evaluated.

If Statements

The if statement provides a way to control what code executes based on the result of a test expression.
if <TestExpression>:
    <Block>
The code in <Block> only executes if the <TestExpression> has a True value.
Alternate clauses: we can use an else clause in an if statement to provide code that will run when the <TestExpression> has a False value.
if <TestExpression>:
    <BlockTrue>
else:
    <BlockFalse>

Logical Operators

The and and or operators behave similarly to logical conjunction (and) and disjunction (or). The important property they have which is different from other operators is that the second operand expression is evaluated only when necessary.
<Expression1> and <Expression2>
If Expression1 has a False value, the result is False and Expression2 is not evaluated (so even if it would produce an error, it does not matter). If Expression1 has a True value, the result of the and is the value of Expression2.
<Expression1> or <Expression2>
If Expression1 has a True value, the result is True and Expression2 is not evaluated (so even if it would produce an error, it does not matter). If Expression1 has a False value, the result of the or is the value of Expression2.
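A sketch tying these together (biggest and numbers are hypothetical names); note how the and never evaluates its second operand when the list is empty, so no error occurs:

    def biggest(a, b):
        if a > b:
            return a
        else:
            return b

    print(biggest(3, 7))            # 7

    numbers = []
    if len(numbers) > 0 and numbers[0] > 10:
        print("big first element")
    else:
        print("empty or small")     # this branch runs; numbers[0] is never evaluated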
While Loops

A while loop provides a way to keep executing a block of code as long as a test expression is True.
while <TestExpression>:
    <Block>
If the <TestExpression> evaluates to False, the while loop is done and execution continues with the following statement. If the <TestExpression> evaluates to True, the <Block> is executed. Then the loop repeats, returning to the <TestExpression> and continuing to evaluate the <Block> as long as the <TestExpression> is True.

Break Statement
A break statement in the <Block> of a while loop jumps out of the containing while loop, continuing execution at the following statement.
break

Lists

A list is a mutable collection of objects. The elements in a list can be of any type, including other lists.
Constructing a list: a list is a sequence of zero or more elements, surrounded by square brackets:
[<Element>, <Element>, ...]
Selecting elements: <List>[<Number>]
Outputs the value of the element in <List> at position <Number>. Elements are indexed starting from 0.
Selecting sub-sequences: <List>[<Start> : <Stop>]
Outputs a sub-sequence of <List> starting from position <Start>, up to (but not including) position <Stop>.
Update: <List>[<Number>] = <Value>
Modifies the value of the element in <List> at position <Number> to be <Value>.
Length: len(<List>)
Outputs the number of (top-level) elements in <List>.
Append: <List>.append(<Element>)
Mutates <List> by adding <Element> to the end of the list.
Concatenation: <List1> + <List2>
Outputs a new list that is the elements of <List1> followed by the elements of <List2>.
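A short sketch of loops and list operations (the todo list is illustrative); it also uses the pop, index, membership, and for-loop operations defined just below:

    todo = ["eggs", "milk", "bread", "jam"]
    todo[0] = "tofu"              # update in place: lists are mutable
    todo.append("tea")            # append mutates the list

    while len(todo) > 3:
        print(todo.pop())         # removes and outputs the last element: tea, jam

    if "milk" in todo:            # membership test
        print(todo.index("milk")) # 1: position of the first occurrence

    for item in todo:             # evaluates the block once per element
        print(item)               # tofu, milk, bread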
Popping: <List>.pop()
Mutates <List> by removing its last element. Outputs the value of that element. If there are no elements in <List>, .pop() produces an error.
Finding: <List>.index(<Value>)
Outputs the position of the first occurrence of an element matching <Value> in <List>. If <Value> is not found in <List>, produces an error.
Membership: <Value> in <List>
Outputs True if <Value> occurs in <List>. Otherwise, outputs False.
Non-membership: <Value> not in <List>
Outputs False if <Value> occurs in <List>. Otherwise, outputs True.

Loops on Lists
A for loop provides a way to execute a block once for each element of a list:
for <Name> in <List>:
    <Block>
The loop goes through each element of the list in turn, assigning that element to the <Name> and evaluating the <Block>. | http://www.slideshare.net/ZEEHSANANJUM/python-reference
4.3125 | The names of shapes, or las formas in Spanish, are basic vocabulary that children learn in preschool and kindergarten. Kids can learn shapes in Spanish online with games designed to teach native speakers; these games also work well for introducing or reviewing the vocabulary with children learning the language. These online games are not just for the early grades. They are excellent for any elementary school student learning Spanish.
Although the games are intended to teach shapes in Spanish online, there are also repeated verb forms and other language that kids will understand and begin to learn. All of these games have native speaker audio.
Shapes in Spanish Online from Ciudad 17
Ciudad 17 has two online activities to teach shapes. The first, Aprendemos las formas, presents the shapes in Spanish. Pieces come together to form, for example, a square, and then the child hears Esto es un cuadrado. For this activity, children watch and listen.
The second activity is a game. From this menu, click on Contar, formas y colores. Children see a scene and listen to instructions. For example, they will hear Haz clic sobre los cuadrados rojos. When the player clicks on the red squares, she will hear Muy bien. Ahora vamos a contarlos. 1, 2 – hay dos cuadrados.
Ciudad 17 has a page of printable activities based on their online Spanish games. You can print a page, the scene from the game, to practice shapes in Spanish with your child. You can also reinforce other language that she heard in the game: Ahora vamos a contarlos. Hay cuatro círculos. Muy bien.
Spanish Shapes Online from Tu Discovery Kids
Tu Discovery Kids has a game called Jugando con las formas. There are four levels. Although the instructions may seem fast, the language of the game itself is appropriate for children learning Spanish.
In Nivel 1 and Nivel 2, players are shown a shape and told to click on it. In Level 1, they click on an outline of the shape, and in Level 2 they find the shape in a picture. The instructions say Haz clic sobre la figura que crees correcta. Ahora, busca un triángulo. Then, the second sentence is repeated with all the shapes: Ahora, busca un círculo.
In Level 3, players connect dots to make a shape. They see the shape and hear the same sentence each time: Ahora, dibuja un rectángulo. Ahora, dibuja un pentágono.
In Level 4, there is less audio. Players click on different images to make a picture – the pieces just float into place. It is a good activity for reviewing the Spanish shapes if you talk about how the shape forms the basis for a new picture.
Shapes in Spanish Online from RENa
RENa is the Red Escolar Nacional of the Venezuelan government. It has lots of activities at many different levels. For all the activities, there is audio and the text also appears on screen. The activities also use a variety of natural sentence structures, an excellent feature for children learning Spanish.
This activity to learn the shapes in Spanish is called El mundo de las formas. On each screen there is a picture of a child that serves as a guide and gives directions. Advance the screens with the arrows in the lower left corner. After the welcome screen, kids will hear and see the text: Descubre los objetos con forma de círculo haciendo clic sobre el mío. Each time they click on the circle beside the guide, a circle-shaped object will appear. All of the objects will be named, so this is a good way to learn and practice common vocabulary: moneda, pelota, plato, anillo, rueda, pizza, sol.
On the screens that follow, players identify objects that are circle-shaped and then match objects. Again, for these screens there is audio to reinforce the words for the Spanish shapes. After several activities with the circle, the game moves on to a new shape. For all of the shapes, the activities have clear audio, on-screen text and use simple, natural sentences.
Children learn shapes in their first language when they are very young; however, these online games are excellent for elementary school students who are learning Spanish. They will learn the shapes in Spanish and also hear useful verbs, natural sentence structures and correct pronunciation.
You may also be interested in this post: Spanish Online Games for Preschoolers from Chile Crece Contigo | http://www.spanishplayground.net/shapes-spanish-online-games-kids/ |
4.125 | Probability is defined in mathematics in the context of discrete elements in sets. Second, it can be carried over into an analogous definition in continuous mathematics. Third, it can be represented as a meld of discrete and continuous concepts in a continuous probability function.
The Discrete Context
In discrete mathematics, probability is the fractional concentration of an element in a logical set. It is the ratio of the quantity of elements of the same ID to the total number of elements in the set. If the numerator is zero, the probability is zero. The probability is never negative because the numerator is never negative and the denominator is a minimum of one. Probability reaches a maximum of one, where the set of elements is homogeneous in ID. Probability can have any value from one to zero, because the denominator can be increased to any positive integer. Thus, probability in its discrete definition is itself a continuous variable, having a range of zero to one.
A probability and its improbability are complements of one.
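As a minimal Python sketch of this definition (the element IDs echo the elephants-and-saxophones set discussed later in this piece):

    # Probability as fractional concentration of an ID in a set of six elements.
    elements = ["elephant", "elephant", "elephant",
                "saxophone", "saxophone", "electron"]
    total = len(elements)
    for eid in sorted(set(elements)):
        p = elements.count(eid) / total          # fraction of the set
        print(eid, "probability:", p, "improbability:", 1 - p)

Each probability lies between zero and one, and each probability and its improbability sum to one, as described above.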
The Continuous Context
Probability is a fraction of a whole set of discrete elements. If, however, we define a whole as continuous, we can then define probability in a continuous context analogous to its definition in a discrete context. One example is identifying the area of a circle as a continuous whole and identifying segments of area by different IDs, as in a pie chart. Another example is that in statistics, where the whole is defined as the area under the normal curve. Probability is then a fraction of the area under the curve.
Melding the Discrete and Continuous as a Probability Function
The simplest set of discrete elements is that in which all the elements have the same ID. The next simplest is that in which the elements are either of two IDs, where the quantities of the elements of each ID are equal. Thus, the probability of each element is one-half and the improbability of each is one-half.
If we choose a continuous function which oscillates between two extremes, we could associate the one extreme with an ID and the other extreme with a second ID. We could view the first ID as having a probability of one at the first extreme and having a probability of zero at the other extreme. We would thus be viewing the function as a probability, which transitions continuously through the intermediate values as it cycles between a probability of one and a probability of zero, i.e. between the two IDs.
At this second extreme, the second ID has a probability of one, which is also the improbability of the first ID.
A visual example would be a rotating line segment oscillating at a constant angular velocity between a horizontal orientation as one ID and a vertical orientation as the second ID. The continuous equation for this would be cos^2(α). The function oscillates from the horizontal, or from a probability of one, at α = 0 degrees to the vertical, or to a probability of zero, at α = 90 degrees. At α = 180 degrees, it is horizontal again with a probability of one. The probability goes to zero at α = 270 degrees and back to one at α = 360 degrees. The intermediate values of the function are transient values of probability forming the cycle from one to zero to one and back again from one to zero to one.
The improbability of horizontal, namely the probability of vertical, would be sin^2(α). The probability of horizontal plus its improbability equals one. Thus, cos^2(α) + sin^2(α) = 1.
The Flip of a Coin
The score was tied at the end of regular time in the NFC divisional playoff game in January 2016 between the Green Bay Packers and the Arizona Cardinals. This required a coin toss between heads and tails to determine which team would receive the ball to start the overtime. However, in the first toss, the coin didn't rotate about its diameter. The coin didn't flip. Therefore the coin was tossed a second time.
If, rather than visualizing a line segment, we envision a coin rotating at a constant angular velocity, we wouldn’t choose horizontal vs. vertical as the probability and improbability, because we wish to distinguish one horizontal orientation, heads, from its flipped horizontal orientation, tails.
A suitable continuous function of probability, P, oscillating between a value of one or heads at the horizontal α = 0 degrees and a value of zero, or tails at the horizontal α = 180 degrees, would be
P = [(1/2) × cos (α)] + (1/2), where the angular velocity is constant.
The probability of tails is 1 – P = (1/2) – [(1/2) × cos (α)]
The probability of heads and the probability of tails are both one-half at α = 90 degrees and α = 270 degrees.
The probability of heads plus its improbability, which is the probability of tails, is one.
Whether we visualize these functions as oscillating between horizontal and vertical or as oscillating between heads and tails, the functions are waves.
We are thus visualizing a probability function as a wave oscillating continuously between a probability of one and zero. The probability is the fraction of the maximum magnitude of the wave as a function of α.
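A minimal numerical sketch of this probability wave for the rotating coin (in Python, assuming the constant angular velocity stated above; the function name p_heads is illustrative):

    import math

    def p_heads(alpha_degrees):
        # P = (1/2) * cos(alpha) + (1/2): one at 0 degrees (heads),
        # zero at 180 degrees (tails).
        return 0.5 * math.cos(math.radians(alpha_degrees)) + 0.5

    for alpha in (0, 90, 180, 270, 360):
        heads = p_heads(alpha)
        tails = 1 - heads                 # the improbability of heads
        print(alpha, round(heads, 3), round(tails, 3))
    # At every angle the two probabilities sum to one.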
An Unrelated Meaning of Probability
We use the word probability to designate our lack of certitude of the truth or falsity of a proposition. This meaning of probability reflects the quality of our human judgment, designating that judgment as personal opinion rather than a judgment of certitude of the truth. This meaning of probability has nothing to do with mathematical probability, which is the fraction of an element in a logical set or, by extension, the fraction of a continuous whole.
Driven by our love of quantification, we frequently characterize our personal opinion as a fraction of certitude. This, however, itself is a personal or subjective judgment. A common error is to mistake this quantitative description of personal opinion to be the fractional concentration of an element in a mathematical set.
Errors Arising within Material Analogies of Probability
A common error is to identify material analogies or simulations of the mathematics of probability as characteristic of the material entities employed in the analogies. In the mathematics of probability and randomness the IDs of the elements are purely nominal, i.e. they are purely arbitrary. The probability relationships of a set of six elements consisting of three elephants, two saxophones and one electron are identical to those of a set of three watermelons, two paperclips and one marble. This is so because the IDs are purely nominal with respect to the relationships of probability.
In analogy, the purely logical concepts of random mutation and probability are not properties inherent in material entities such as watermelons and snowflakes. This is in contrast to measureable properties, which, as the subject of science, are inherent in material entities.
The jargon employed in analogies of the mathematical concepts also leads us to confuse logical relationships among mathematical concepts with the properties of material entities. In the roll of dice we say the probability of the outcome of boxcars is 1/36. We think of the result of the roll as a material event, which becomes a probability of one or zero after the roll, while it was a probability of 1/36 prior to the roll of the dice. In fact, the outcome of the roll had nothing to do with probability and everything to do with the forces to which the dice were subjected in being rolled. The analogy to mathematical probability is just that, a visual simulation of purely logical relationships.
We are also tempted to think of the probability 1/36 as the potential of boxcars to come into existence, which after the roll is now in existence at an actuality of one, or non-existent as a probability of zero. In this, we confuse thought with reality. Probability relationships are solely logical relationships among purely logical elements designated by nominal IDs. Material relationships are those among real entities, whose natures determine their properties as potential and as in act.
In quantum mechanics, it is useful to treat energy as continuous waves in some instances and as discrete quanta in others. It is useful to view the wave as a probability function and the detection or lack of detection of a quantum of energy as the probability function’s collapse into a probability of one or of zero, respectively.
As an aid to illustrate the relationship of a probability function as a wave and its outcome as one or zero in quantum mechanics, the physicist Stephen Barr proposed the following analogy:
“This is where the problem begins. It is a paradoxical (but entirely logical) fact that a probability only makes sense if it is the probability of something definite. For example, to say that Jane has a 70% chance of passing the French exam only means something if at some point she takes the exam and gets a definite grade. At that point, the probability of her passing no longer remains 70%, but suddenly jumps to 100% (if she passes) or 0% (if she fails). In other words, probabilities of events that lie in between 0 and 100% must at some point jump to 0 or 100% or else they meant nothing in the first place.”
Problems with the Illustration
The illustration fails to distinguish the purely logical relationships of mathematical probability from the existential relationships among the measurable properties of material entities. The illustration identifies probabilities as being of events rather than identifying probabilities as logical relationships among purely logical entities designated by nominal IDs. It claims that probability must transition from potency to act or it is undefined. In contrast, probability is the fractional concentration of an element in a logical set. The definition has nothing to do with real entities, whose natures have potency and are expressed in act.
Another fault of the illustration is that it is not an illustration of mathematical probability, but an illustration of probability in the sense of personal opinion. Some unidentified individual is of the opinion that Jane will probably pass the French exam. The unidentified individual lacks human certitude of the truth of the proposition that Jane will pass and uses a tag of 70% to express his personal opinion in a more colorful manner.
It is a serious error to pick an example of personal opinion to illustrate a wave function, viewed as a probability function. A wave function, such as that associated with the flip of a coin oscillating between heads as a probability of one and tails as a probability of zero, would have served the purpose well.
Of course, a wave, viewed as a probability function, is not the probability of an event. It is the continuous variable, probability, whose value oscillates between one and zero, and as such assumes these and the intermediate values of probability transiently. The additional condition is that when the oscillation is arrested, the wave collapses to either of the discrete values, one and zero, the presence or absence of a quantum. The collapse is the transition of logical state from one of continuity to one of discreteness. | https://theyhavenowine.wordpress.com/ |
4.34375 | The phonology of the Ojibwe language (also Ojibwa, Ojibway, or Chippewa, and most commonly referred to in the language as Anishinaabemowin) varies from dialect to dialect, but all varieties share common features. Ojibwe is an indigenous language of the Algonquian language family spoken in Canada and the United States in the areas surrounding the Great Lakes, and westward onto the northern plains in both countries, as well as in northeastern Ontario and northwestern Quebec. The article on Ojibwe dialects discusses linguistic variation in more detail, and contains links to separate articles on each dialect. There is no standard language and no dialect that is accepted as representing a standard. Ojibwe words in this article are written in the practical orthography commonly known as the Double vowel system.
Ojibwe dialects have the same phonological inventory of vowels and consonants with minor variations, but some dialects differ considerably along a number of phonological parameters. For example, the Ottawa and Eastern Ojibwe dialects have changed relative to other dialects by adding a process of vowel syncope that deletes short vowels in specified positions within a word.
This article primarily uses examples from the Southwestern Ojibwe dialect spoken in Minnesota and Wisconsin, sometimes also known as Ojibwemowin.
All dialects of Ojibwe have seven oral vowels. Vowel length is phonologically contrastive, hence phonemic. Although the long and short vowels are phonetically distinguished by vowel quality, recognition of vowel length in phonological representations is required, as the distinction between long and short vowels is essential for the operation of the metrical rule of vowel syncope that characterizes the Ottawa and Eastern Ojibwe dialects, as well as for the rules that determine word stress. There are three short vowels, /i a o/; and three corresponding long vowels, /iː aː oː/, in addition to a fourth long vowel /eː/, which lacks a corresponding short vowel. The short vowel /i/ typically has phonetic values centering on [ɪ]; /a/ typically has values centering on [ə]~[ʌ]; and /o/ typically has values centering on [o]~[ʊ]. Long /oː/ is pronounced [uː] for many speakers, and /eː/ is for many [ɛː].
|Phoneme|Attested allophones|More general realization|
|/a/|[ɑ], [a], [ɐ], [ɔ], [ʌ], [ɨ]|[ə]~[ʌ], [a], [ɨ], [ɔ]|
|/i/|[i], [ɪ], [ɨ]|[ɨ], [ə], [ɪ], [ɛ]|
|/o/|[o]~[ʊ], [ɔ]|[o], [ɨ], [ə], [ʊ]|
Ojibwe has a series of three short oral vowels and four long ones. The two series are characterized by both length and quality differences. The short vowels are /ɪ o ə/ (roughly the vowels in American English bit, bot, and but, respectively) and the long vowels are /iː oː aː eː/ (roughly as in American English beet, boat, ball, and bay, respectively). In the Minnesota variety of Southwestern Ojibwe, /o/ varies between [o] and [ʊ], and /oo/ varies between [oː] and [uː]. /eː/ may also be pronounced [ɛː], and /ə/ as [ʌ].
Ojibwe has nasal vowels; some arise predictably by rule in all analyses, and other long nasal vowels are of uncertain phonological status. The latter have been analysed both as underlying phonemes, and also as predictable, that is derived by the operation of phonological rules from sequences of a long vowel followed by /n/ and another segment, typically /j/.
The long nasal vowels are iinh ([ĩː]), enh ([ẽː]), aanh ([ãː]), and oonh ([õː]). They most commonly occur in the final syllable of nouns with diminutive suffixes or words with a diminutive connotation. In the Ottawa dialect, long nasal aanh ([ãː]) occurs as well in the suffix (y)aanh ([-(j)ãː]) marking the first person (conjunct) animate intransitive. Typical examples from Southwestern Ojibwe include: -iijikiwenh- ('brother'), -noshenh- ('cross-aunt'), -oozhishenh- ('grandchild'), bineshiinh ('bird'), asabikeshiinh ('spider'), and awesiinh ('wild animal').
Orthographically the long vowel is followed by word-final ⟨nh⟩ to indicate that the vowel is nasal; while ⟨n⟩ is a common indicator of nasality in many languages such as French, the use of ⟨h⟩ is an orthographic convention and does not correspond to an independent sound.
One analysis of the Ottawa dialect treats the long nasal vowels as phonemic, while another treats them as derived from sequences of long vowel followed by /n/ and underlying /h/; the latter sound is converted to [ʔ] or deleted. Other discussions of the issue in Ottawa are silent on the issue.
A study of the Southwestern Ojibwe (Chippewa) dialect spoken in Minnesota describes the status of the analogous vowels as unclear, noting that while the distribution of the long nasal vowels is restricted, there is a minimal pair distinguished only by the nasality of the vowel: giiwe [ɡiːweː] ('he goes home') and giiwenh [ɡiːwẽː] ('so the story goes').
Nasalized allophones of the short vowels also exist. The nasal allophones of oral vowels are derived from a short vowel followed by a nasal+fricative cluster (for example, imbanz, 'I'm singed') is [ɪmbə̃z]). For many speakers, the nasal allophones appear not only before nasal+fricative clusters, but also before all fricatives, particularly if the vowel is preceded by another nasal. E.g., for some speakers, waabooz, ('rabbit') is pronounced [waːbõːz], and for many, mooz, ('moose') is pronounced [mõːz].
The consonant phonemes, with each cell giving the orthographic symbol followed by its phonetic value; the fortis and lenis members of each pair (discussed below) are listed side by side:
|Plosive and affricate||p [pʰ]||b [p~b]||t [tʰ]||d [t~d]||ch [tʃʰ]||j [tʃ~dʒ]||k [kʰ]||g [k~ɡ]||’ [ʔ]|
|Fricative||s [sʰ]||z [s~z]||sh [ʃʰ]||zh [ʃ~ʒ]||(h [h])|
|Nasal||m [m]||n [n]|
|Approximant||y [j]||w [w]|
The "voiced/voiceless" obstruent pairs of Ojibwe vary in their realization depending on the dialect. In many dialects, they are described as having a "lenis/fortis" contrast. In this analysis, all obstruents are considered voiceless. The fortis consonants are characterised by being pronounced more strongly and are longer in duration. They often are aspirated or preaspirated. The lenis consonants are often voiced, especially between vowels, although they often tend to be voiceless at the end of words. They are pronounced less strongly and are shorter in duration, compared to the fortis ones. In some communities, the lenis/fortis distinction has been replaced with a pure voiced/voiceless one.
In some dialects of Saulteaux (Plains Ojibwe), the sounds of ⟨sh⟩ and ⟨zh⟩ have merged with ⟨s⟩ and ⟨z⟩ respectively. This means that, for example, Southwestern Ojibwe wazhashk, ('muskrat') is pronounced the same as wazask in some dialects of Saulteaux. This merging creates additional consonant clusters of /sp/ and /st/ in addition to /sk/ common in all Anishinaabe dialects.
Ojibwe in general permits relatively few consonant clusters, and most are only found word-medially. The permissible ones are -sk-, -shp-, -sht-, -shk- (which can also appear word-finally), -mb-, -nd- (which can also appear word-finally),-ng- (also word-finally), -nj- (also word-finally), -nz-, -nzh- (also word-finally) and -ns- (also word-finally). Furthermore, any consonant (except w, h, or y) and some clusters can be followed by w (although not word-finally). Many dialects, however, permit far more clusters as a result of vowel syncope.
Ojibwe divides words into metrical "feet." Counting from the beginning of the word, each group of two syllables constitutes a foot; the first syllable in a foot is weak, the second strong. However, long vowels and vowels in the last syllable of a word are always strong, so if they occur in the weak slot of a foot, then they form a separate one-syllable foot, and counting resumes starting with the following vowel. The final syllable of a word is always strong as well. For example, the word bebezhigooganzhii ('horse') is divided into feet as (be)(be)(zhi-goo)(gan-zhii). The strong syllables all receive at least secondary stress. The rules that determine which syllable receives the primary stress are quite complex and many words are irregular. In general, though, the strong syllable in the third foot from the end of a word receives the primary stress.
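The foot-assignment rule described above can be read as a small algorithm. Here is a hedged toy sketch in Python (metrical_feet and the syllable encoding are illustrative, not from any Ojibwe software; note that e counts as long, since it has no short counterpart):

    def metrical_feet(syllables):
        # syllables: list of (text, has_long_vowel) pairs.
        # Returns feet as lists of (syllable, "weak"/"strong") pairs.
        feet = []
        pending = None                        # syllable waiting in a weak slot
        for i, (syl, long_vowel) in enumerate(syllables):
            final = (i == len(syllables) - 1)
            if pending is None:
                if long_vowel or final:
                    feet.append([(syl, "strong")])   # one-syllable foot
                else:
                    pending = syl                    # weak slot of a new foot
            else:
                feet.append([(pending, "weak"), (syl, "strong")])
                pending = None
        return feet

    # bebezhigooganzhii 'horse' -> (be)(be)(zhi-goo)(gan-zhii)
    word = [("be", True), ("be", True), ("zhi", False),
            ("goo", True), ("gan", False), ("zhii", True)]
    print(metrical_feet(word))

Run on the example word, the sketch reproduces the footing given above: the two e syllables each form a one-syllable foot, while zhi-goo and gan-zhii form weak-strong pairs.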
A defining characteristic of several of the more eastern dialects is that they exhibit a great deal of vowel syncope, the deletion of vowels in certain positions within a word. In some dialects (primarily Odawa and Eastern Ojibwe), all unstressed vowels are lost (see above for a discussion of Ojibwe stress). In other dialects (such as some dialects of Central Ojibwe), short vowels in initial syllables are lost, but not in other unstressed syllables. For example, the word oshkinawe ('young man') of Algonquin and Southwestern Ojibwe (stress: oshkinawe) is shkinawe in some dialects of Central Ojibwe and shkinwe in Eastern Ojibwe and Odawa. Regular, pervasive syncope is a comparatively recent development, arising in the past eighty years or so.
A common morphophonemic variation occurs in some verbs whose roots end in -n. When the root is followed by certain suffixes beginning with i or when it is word-final, the root-final -n changes to -zh (e.g., -miin-, 'to give something to someone' but gimiizhim, 'you guys give it to me'). In Ojibwe linguistics, this is indicated when writing the root with the symbol ⟨N⟩ (so the root 'to give something to someone' would be written ⟨miiN⟩). There are also some morphophonemic alternations where root-final -s changes to -sh (indicated with ⟨S⟩) and where root-final -n changes to -nzh (indicated with ⟨nN⟩).
In some dialects, obstruents become voiceless/fortis after the tense preverbs gii- (marking the past) and wii- (marking the future/desiderative). In such dialects, for example, gii-baapi ([ɡiː baːpːɪ]) ('s/he laughed') becomes [ɡiː pːaːpːɪ] (often spelled gii-paapi).
In the evolution from Proto-Algonquian to Ojibwe, the most sweeping change was the voicing of all Proto-Algonquian voiceless obstruents except when they were in clusters with *h, *ʔ, *θ, or *s (which were subsequently lost). Proto-Algonquian *r and *θ became Ojibwe /n/.
The relatively symmetrical Proto-Algonquian vowel system, *i, *i·, *e, *e·, *a, *a·, *o, *o· remained fairly intact in Ojibwe, although *e and *i merged as /ɪ/, and the short vowels, as described above, underwent a quality change as well.
Some examples of the changes at work are presented in the table below:
|Proto-Algonquian|Ojibwe (transcription)|Ojibwe (double vowel spelling)|Gloss|
|*mekiθe·wa|mikiš|migizh|'to bark at'|
For illustrative purposes, the chart of phonological variation in the reflexes of Proto-Algonquian *r across different Cree dialects has been reproduced here, but for the Anishinaabe languages, with Swampy Cree and Atikamekw included for comparison only, and with the corresponding Cree orthography in parentheses:
|Word for "Native person(s)"
|Word for "You"
|Swampy Cree||ON, MB, SK||n||ininiw/ininiwak ᐃᓂᓂᐤ/ᐃᓂᓂᐗᒃ||kīna ᑮᓇ|
|Oji-Cree||ON, MB||n||inini/ininiwak ᐃᓂᓂ/ᐃᓂᓂᐗᒃ||kīn ᑮᓐ|
|Ojibwe||ON, MB, SK, AB, BC, MI, WI, MN, ND, SD, MT||n||inini/ininiwag ᐃᓂᓂ/ᐃᓂᓂᐗᒃ
|Ottawa||ON, MI, OK||n||nini/ninwag
|Potawatomi||ON, WI, MI, IN, KS, OK||n||neni/nenwek
- Valentine (2001:?)
- See e.g. Rhodes (1985) for the Ottawa dialect and Nichols & Nyholm (1995) for the Southwestern Ojibwe dialect.
- Nichols (1980:6–7)
- e.g. Bloomfield (1958)
- e.g. Piggott (1980)
- Valentine (2001:185–188)
- Valentine (2001:19)
- Valentine (2001:40)
- Bloomfield (1958:7)
- Piggott (1980:110–111). Piggott's transcription of words containing long nasal vowels differs from those of Rhodes, Bloomfield, and Valentine by allowing for an optional [ʔ] after the long nasal vowel in phonetic forms.
- Rhodes (1985:xxiv)
- Nichols (1980:6)
- Nichols & Nyholm (1995:xxv)
- Redish, Laura and Orrin Lewis. "Ojibwe Pronunciation and Spelling Guide". Native-Languages.org. Retrieved 2007-08-07.
- Valentine, J. Randolph. "Consonants: Strong and Weak". Anishinaabemowin. Retrieved 2007-08-08.
- Valentine (2001:48–49)
- Nichols & Nyholm (1995:xxvii)
- Nichols & Nyholm (1995:xxvii-xxviii)
- Valentine (2001:51–55)
- Weshki-ayaad. "My own notes about stress in Ojibwe". Anishinaabemowin: Ojibwe Language. Retrieved 2007-08-07.
- Valentine (2001:55–57)
- Rhodes & Todd (1981:58)
- Valentine (2001:3)
- Nichols & Nyholm (1995:xix)
- Bloomfield, Leonard (1958), Eastern Ojibwa: Grammatical sketch, texts and word list, Ann Arbor: University of Michigan Press
- Nichols, John D. (1980), Ojibwe morphology (thesis), Harvard University
- Nichols, John D.; Nyholm, Earl (1995), A Concise Dictionary of Minnesota Ojibwe, Minneapolis: University of Minnesota Press, ISBN 0-8166-2427-5
- Piggott, Glyne L. (1980), Aspects of Odawa morphophonemics (Published version of PhD dissertation, University of Toronto, 1974), New York: Garland, ISBN 0-8240-4557-2
- Rhodes, Richard A. (1985), Eastern Ojibwa-Chippewa-Ottawa Dictionary, Berlin: Mouton de Gruyter, ISBN 3-11-013749-6
- Rhodes, Richard; Todd, Evelyn (1981), "Subarctic Algonquian languages", in Helm, June, The Handbook of North American Indians, 6: Subarctic, Washington, D.C.: The Smithsonian Institution, pp. 52–66, ISBN 0-16-004578-9
- Valentine, J. Randolph (2001), Nishnaabemwin Reference Grammar, Toronto: University of Toronto Press, ISBN 0-8020-4870-6
- Artuso, Christian. 1998. Noogom gaa-izhi-anishinaabemonaaniwag: Generational Difference in Algonquin. MA thesis, Department of Linguistics. University of Manitoba.
| https://en.wikipedia.org/wiki/Ojibwe_phonology
4.15625 | Latin literature, the body of writings in Latin, primarily produced during the Roman Republic and the Roman Empire, when Latin was a spoken language. When Rome fell, Latin remained the literary language of the Western medieval world until it was superseded by the Romance languages it had generated and by other modern languages. After the Renaissance the writing of Latin was increasingly confined to the narrow limits of certain ecclesiastical and academic publications. This article focuses primarily on ancient Latin literature. It does, however, provide a broad overview of the literary works produced in Latin by European writers during the Middle Ages and Renaissance.
Literature in Latin began as translation from the Greek, a fact that conditioned its development. Latin authors used earlier writers as sources of stock themes and motifs, at their best using their relationship to tradition to produce a new species of originality. They were more distinguished as verbal artists than as thinkers; the finest of them have a superb command of concrete detail and vivid illustration. Their noblest ideal was humanitas, a blend of culture and kindliness, approximating the quality of being “civilized.”
Little need be said of the preliterary period. Hellenistic influence came from the south, Etrusco-Hellenic from the north. Improvised farce, with stock characters in masks, may have been a native invention from the Campania region (the countryside of modern Naples). The historian Livy traced quasi-dramatic satura (medley) to the Etruscans. The statesman-writer Cato and the scholar Varro said that in former times the praises of heroes were sung after feasts, sometimes to the accompaniment of the flute, which was perhaps an Etruscan custom. If they existed, these carmina convivalia, or festal songs, would be behind some of the legends that came down to Livy. There were also the rude verses improvised at harvest festivals and weddings and liturgical formulas, whose scanty remains show alliteration and assonance. The nearest approach to literature must have been in public and private records and in recorded speeches.
The ground for Roman literature was prepared by an influx from the early 3rd century bc onward of Greek slaves, some of whom were put to tutoring young Roman nobles. Among them was Livius Andronicus, who was later freed and who is considered to be the first Latin writer. In 240 bc, to celebrate Rome’s victory over Carthage, he composed a genuine drama adapted from the Greek. His success established a tradition of performing such plays alongside the cruder native entertainments. He also made a translation of the Odyssey. For his plays Livius adapted the Greek metres to suit the Latin tongue; but for his Odyssey he retained a traditional Italian measure, as did Gnaeus Naevius for his epic on the First Punic War against Carthage. Scholars are uncertain as to how much this metre depended on quantity or stress. A half-Greek Calabrian called Ennius adopted and Latinized the Greek hexameter for his epic Annales, thus further acquainting Rome with the Hellenistic world. Unfortunately his work survives only in fragments.
The Greek character thus imposed on literature made it more a preserve of the educated elite. In Rome, coteries emerged such as that formed around the Roman consul and general Scipio Aemilianus. This circle included the statesman-orator Gaius Laelius, the Greek Stoic philosopher Panaetius, the Greek historian Polybius, the satirist Lucilius, and an African-born slave of genius, the comic playwright Terence. Soon after Rome absorbed Greece as a Roman province, Greek became a second language to educated Romans. Early in the 1st century bc, however, Latin declamation established itself, and, borrowing from Greek, it attained polish and artistry.
Plautus, the leading poet of comedy, is one of the chief sources for colloquial Latin. Ennius sought to heighten epic and tragic diction, and from his time onward, with a few exceptions, literary language became ever more divorced from that of the people, until the 2nd century ad.
Golden Age, 70 bc–ad 18
The Golden Age of Latin literature spanned the last years of the republic and the virtual establishment of the Roman Empire under the reign of Augustus (27 bc–ad 14). The first part of this period, from 70 to 42 bc, is justly called the Ciceronian. It produced writers of distinction, most of them also men of action, among whom Julius Caesar stands out. The most prolific was Varro, “most learned of the Romans,” but it was Cicero, a statesman, orator, poet, critic, and philosopher, who developed the Latin language to express abstract and complicated thought with clarity. Subsequently, prose style was either a reaction against, or a return to, Cicero’s. As a poet, although uninspired, he was technically skillful. He edited the De rerum natura of the philosophical poet Lucretius. Like Lucretius, he admired Ennius and the old Roman poetry and, though apparently interested in Hellenistic work, spoke ironically of its extreme champions, the neōteroi (“newer poets”).
After the destruction of Carthage and Corinth in 146 bc, prosperity and external security had allowed the cultivation of a literature of self-expression and entertainment. In this climate flourished the neōteroi, largely non-Roman Italians from the north, who introduced the mentality of “art for art’s sake.” None is known at first hand except Catullus, who was from Verona. These poets reacted against the grandiose—the Ennian tradition of “gravity”—and their complicated allusive poetry consciously emulated the Callimacheans of 3rd-century Alexandria. The Neoteric influence persisted into the next generation through Cornelius Gallus to Virgil.
Virgil, born near Mantua and schooled at Cremona and Milan, chose Theocritus as his first model. The self-consciously beautiful cadences of the Eclogues depict shepherds living in a landscape half real, half fantastic; these allusive poems hover between the actual and the artificial. They are shot through with topical allusions, and in the fourth he already appears as a national prophet. Virgil was drawn into the circle being formed by Maecenas, Augustus’ chief minister. In 38 bc he and Varius introduced the young poet Horace to Maecenas; and by the final victory of Augustus in 30 bc, the circle was consolidated.
With the reign of Augustus began the second phase of the Golden Age, known as the Augustan Age. It gave encouragement to the classical notion that a writer should not so much try to say new things as to say old things better. The rhetorical figures of thought and speech were mastered until they became instinctive. Alliteration and onomatopoeia (accommodation of sound and rhythm to sense), previously overdone by the Ennians and therefore eschewed by the neōteroi, were now used effectively with due discretion. Perfection of form characterizes the odes of Horace; elegy, too, became more polished.
The decade of the first impetus of Augustanism, 29–19 bc, saw the publication of Virgil’s Georgics and the composition of the whole Aeneid by his death in 19 bc; Horace’s Odes, books I–III, and Epistles, book I; in elegy, books I–III of Propertius (also of Maecenas’ circle) and books I–II of Tibullus, with others from the circle of Marcus Valerius Messalla Corvinus, and doubtless the first recitations by a still younger member of his circle, Ovid. About 28 or 27 bc Livy began his monumental history.
Maecenas’ circle was not a propaganda bureau; his talent for tactful pressure guided his poets toward praise of Augustus and the regime without excessively cramping their freedom. Propertius, when admitted to the circle, was simply a youth with an anti-Caesarian background who had gained favour with passionate love elegies. He and Horace quarreled, and after Virgil’s death the group broke up. Would-be poets now abounded, such as Horace’s protégés, who occur in the Epistles; Ovid’s friends, whom he remembers wistfully in exile; and Manilius, whom no one mentions at all. Poems were recited in literary circles and in public, hence the importance attached to euphony, smoothness, and artistic structure. They thus became known piecemeal and might be improved by friendly suggestions. When finally they were assembled in books, great care was taken over arrangement, which was artistic or significant (but not chronological).
Meanwhile, in prose the Ciceronian climax had been followed by a reaction led by Sallust. In 43 bc he began to publish a series of historical works in a terse, epigrammatic style studded with archaisms and avoiding the copiousness of Cicero. Later, eloquence, deprived of political influence, migrated from the forum to the schools, where cleverness and point counted rather than rolling periods. Thus developed the epigrammatic style of the younger Seneca and, ultimately, of Tacitus. Spreading to verse, it conditioned the witty couplets of Ovid, the tragedies of Seneca, and the satire of Juvenal. Though Livy stood out, Ciceronianism only found a real champion again in the rhetorician Quintilian.
Silver Age, ad 18–133
After the first flush of enthusiasm for Augustan ideals of national regeneration, literature paid the price of political patronage. It became subtly sterilized; and Ovid was but the first of many writers actually suppressed or inhibited by fear. Only Tacitus and Juvenal, writing under comparatively tolerant emperors, turned emotions pent up under Domitian’s reign of terror into the driving force of great literature. Late Augustans such as Livy already sensed that Rome had passed its summit. Yet the title of Silver Age is not undeserved by a period that produced, in addition to Tacitus and Juvenal, the two Senecas, Lucan, Persius, the two Plinys, Quintilian, Petronius, Statius, Martial, and, of lesser stature, Manilius, Valerius Flaccus, Silius Italicus, and Suetonius.
The decentralization of the empire under Hadrian and the Antonines weakened the Roman pride and passion for liberty. Romans began again to write in Greek as well as Latin. The “new sophistic” movement in Greece affected the “novel poets” such as Florus. An effete culture devoted itself to philology, archaism, and preciosity. After Juvenal, 250 years elapsed before Ausonius of Bordeaux (4th century ad) and the last of the true classics, Claudian (flourished about 400), appeared. The anonymous Pervigilium Veneris (“Vigil of Venus”), of uncertain date, presages the Middle Ages in its vitality and touch of stressed metre. Ausonius, though in the pagan literary tradition, was a Christian and contemporary with a truly original Christian poet, the Spaniard Prudentius. Henceforward, Christian literature overlaps pagan and generally surpasses it.
In prose these centuries have somewhat more to boast, though the greatest work by a Roman was written in Greek, the Meditations of the emperor Marcus Aurelius. Elocutio novella, a blend of archaisms and colloquial speech, is seen to best advantage in Apuleius (born about 125). Other writers of note were Aulus Gellius and Macrobius. The 4th century ad was the age of the grammarians and commentators, but in prose some of the most interesting work is again Christian.
Roman comedy was based on the New Comedy fashionable in Greece, whose classic representative was Menander. But whereas this was imitation of life to the Greeks, to the Romans it was escape to fantasy and literary convention. Livius’ successor, Naevius, who developed this “drama in Greek cloak” (fabula palliata), may have been the first to introduce recitative and song, thereby increasing its unreality. But he slipped in details of Roman life and outspoken criticisms of powerful men. His imprisonment warned comedy off topical references, but the Roman audience became alert in applying ancient lines to modern situations and in demonstrating their feelings by appropriate clamour.
Unlike his predecessors, Plautus specialized, writing only comedy involving high spirits, oaths, linguistic play, slapstick humour, music, and skillful adaptation of rhythm to subject matter. Some of his plays can be thought of almost as comic opera. Part of the humour consisted in the sudden intrusion of Roman things into this conventional Greek world. “The Plautine in Plautus” consists in pervasive qualities rather than supposed innovations of plot or technique.
As Greek influence on Roman culture increased, Roman drama became more dependent on Greek models. Terence’s comedy was very different from Plautus’. Singing almost disappeared from his plays, and recitative was less prominent. From Menander he learned to exhibit refinements of psychology and to construct ingenious plots; but he lacked comic force. His pride was refined language—the avoidance of vulgarity, obscurity, or slang. His characters were less differentiated in speech than those of Plautus, but they talk with an elegant charm. The society Terence portrayed was more sensitive than that of Plautine comedy; lovers tended to be loyal and sons obedient. His historical significance has been enhanced by the loss of nearly all of Menander’s work.
Though often revived, plays modeled on Greek drama were rarely written after Terence. The Ciceronian was the great age of acting, and in 55 bc Pompey gave Rome a permanent theatre. Plays having an Italian setting came into vogue, their framework being Greek New Comedy but their subject Roman society. A native form of farce was also revived. Under Julius Caesar, this yielded in popularity to verse mime of Greek origin that was realistic, often obscene, and full of quotable apothegms. Finally, when mime gave rise to the dumb show of the pantomimus with choral accompaniment and when exotic spectacles had become the rage, Roman comedy faded out.
Livius introduced both Greek tragedy (fabula crepidata, “buskined”) and comedy to Latin. He was followed by Naevius and Ennius, who loved Euripides. Pacuvius, probably a greater tragedian, liked Sophocles and heightened tragic diction even more than Ennius. His successor, Accius, was more rhetorical and impetuous. The fragments of these poets betoken grandeur in “the high Roman fashion,” but they also have a certain ruggedness. They did not always deal in Greek mythology: occasionally they exploited Roman legend or even recent history. The Roman chorus, unlike the Greek, performed on stage and was inextricably involved in the action.
Classical tragedy was seldom composed after Accius, though its plays were constantly revived. Writing plays, once a function of slaves and freedmen, became a pastime of aristocratic dilettantes. Such writers had commonly no thought of production: post-Augustan drama was for reading. The extant tragedies of the younger Seneca probably were not written for public performance. They are melodramas of horror and violence, marked by sensational pseudo-realism and rhetorical cleverness. Characterization is crude, and philosophical moralizing obtrusive. Yet Seneca was a model for 16th- and early 17th-century tragedy, especially in France, and influenced English revenge tragedy.
Livius’ pioneering Odyssey was, to judge from the fragments, primitive, as was the Bellum Punicum of Naevius, important for Virgil because it began with the legendary origins of Carthage in Phoenicia and Rome in Troy. But Ennius’ Annales soon followed. This compound of legendary origins and history was in Latin, in a transplanted metre, and by a poet who had imagination and a realization of the emergent greatness of Rome. In form his work must have been ill-balanced; he almost ignored the First Punic War in consideration of Naevius and became more detailed as he added books about his own times. But his great merit shines out from the fragments—nobility of ethos matched with nobility of language. On receptive spirits, such as Cicero, Lucretius, and Virgil, his influence was profound.
Little is known of the “strong epic” for which Virgil’s friend Varius is renowned, but Virgil’s Aeneid was certainly something new. Recent history would have been too particularized a theme. Instead, Virgil developed Naevius’ version of Aeneas’ pilgrimage from Troy to found Rome. The poem is in part an Odyssey of travel (with an interlude of love) followed by an Iliad of conquest, and in part a symbolic epic of contemporary Roman relevance. Aeneas has Homeric traits but also qualities that look forward to the character of the Roman hero of the future. His fault was to have lingered at Carthage. The command to leave the Carthaginian queen Dido shakes him ruthlessly out of the last great temptation to seek individual happiness. But it is only the vision of Rome’s future greatness, seen when he visits Elysium, that kindles obedient acceptance into imaginative enthusiasm. It was just such a sacrifice of the individual that the Augustan ideal demanded. The second half of the poem represents the fusing in the crucible of war of the civilized graces of Troy with the manly virtues of Italy. The tempering of Roman culture by Italian hardiness was another part of the Augustan ideal. So was a revival of interest in ancient customs and religious observances, which Virgil could appropriately indulge. The verse throughout is superbly varied, musical, and rhetorical in the best sense.
With his Hecale, Callimachus had inaugurated the short, carefully composed hexameter narrative (called epyllion by modern scholars) to replace grand epic. The Hecale had started a convention of insetting an independent story. Catullus inset the story of Ariadne on Naxos into that of the marriage of Peleus and Thetis, and the poem has a mannered, lyrical beauty. But the story of Aristaeus at the end of Virgil’s Georgics, with that of Orpheus and Eurydice inset, shows what heights epyllion could attain.
Ovid’s Metamorphoses is a nexus of some 50 epyllia with shorter episodes. He created a convincing imaginative world with a magical logic of its own. His continuous poem, meandering from the creation of the world to the apotheosis of Julius Caesar, is a great Baroque conception, executed in swift, clear hexameters. Its frequent irony and humour are striking. Thereafter epics proliferated. Statius’ Thebaid and inchoate Achilleid and Valerius’ Argonautica are justly less read now than they were. Lucan’s unfinished Pharsalia has a more interesting subject, namely the struggle between Caesar and Pompey, whom he favours. He left out the gods. His brilliant rhetoric comes close to making the poem a success, but it is too strained and monochromatic.
Ennius essayed didactic poetry in his Epicharmus, a work on the nature of the physical universe. Lucretius’ De rerum natura is an account of Epicurus’ atomic theory of matter, its aim being to free men from superstition and the fear of death. Its combination of moral urgency, intellectual force, and precise observation of the physical world makes it one of the summits of classical literature.
This poem profoundly affected Virgil, but his poetic reaction was delayed for some 17 years; and the Georgics, though deeply influenced by Lucretius, were not truly didactic. Country-bred though he was, Virgil wrote for literary readers like himself, selecting whatever would contribute picturesque detail to his impressionistic picture of rural life. The Georgics portrayed the recently united land of Italy and taught that the idle Golden Age of the fourth Eclogue was a mirage: relentless work, introduced by a paternal Jupiter to sharpen men’s wits, creates “the glory of the divine countryside.” The compensation is the infinite variety of civilized life. Insofar as it had a political intention, it encouraged revival of an agriculture devastated in wars, of the old Italian virtues, and of the idea of Rome’s extending its works over Italy and civilizing the world.
Ovid’s Ars amatoria was comedy or satire in the burlesque guise of didactic, an amusing commentary on the psychology of love. The Fasti was didactic in popularizing the new calendar; but its object was clearly to entertain.
Satura meant a medley. The word was applied to variety performances introduced, according to Livy, by the Etruscans. Literary satire begins with Ennius, but it was Lucilius who established the genre. After experimenting, he settled on hexameters, thus making them its recognized vehicle. A tendency to break into dialogue may be a vestige of a dramatic element in nonliterary satura. Lucilius used this medium for self-expression, fearlessly criticizing public as well as private conduct. He owed much to the Cynic-Stoic “diatribes” (racy sermons in prose or verse) of Greeks such as Bion; but in extant Hellenistic literature he is most clearly presaged by the fragments of Callimachus’ Iambi. “Menippean” satire, which descended from the Greek prototype of Menippus of Gadara and mingled prose and verse, was introduced to Rome by Varro.
Horace saw that satire was still awaiting improvement: Lucilius had been an uncouth versifier. Satires I, 1–3 are essays in the Lucilian manner. But Horace’s nature was to laugh, not to flay, and his incidental butts were either insignificant or dead. He came to appreciate that the real point about Lucilius was not his denunciations but his self-revelation. This encouraged him to talk about himself. In Satires II he developed in parts the satire of moral diatribe presaging Juvenal. His successor Persius blended Lucilius, Horace, diatribe, and mime into pungent sermons in verse. The great declaimer was Juvenal, who fixed the idea of satire for posterity. Gone was the personal approach of Lucilius and Horace. His anger may at times have been cultivated for effect, but his epigrammatic power and brilliant eye for detail make him a great poet.
The younger Seneca’s Apocolocyntosis was a medley of prose and verse, but its pitiless skit on the deification of the emperor Claudius was Lucilian satire. The Satyricon of Petronius is also Menippean inasmuch as it contains varied digressions and occasional verse; essentially, however, it comes under fiction.
With Lucilian satire may be classed the fables of Augustus’ freedman Phaedrus, the Roman Aesop, whose beast fables include contemporary allusions.
The short poems of Catullus were called by himself nugae (“trifles”). They vary remarkably in mood and intention, and he uses the iambic metre, normally associated with invective, not only for his abuse of Caesar and Pompey but also for his tender homecoming to Sirmio. Catullus alone used the hendecasyllable, the metre of skits and lampoons, as a medium for love poetry.
Horace was a pioneer. In his Epodes he used iambic verse to express devotion to Maecenas and for brutal invective in the manner of the Greek poet Archilochus. But his primary aim was to create literature, whereas his models had been venting their feelings. In the Odes he adapted other Greek metres and claimed immortality for introducing early Greek lyric to Latin. The Odes rarely show the passion now associated with lyric but are marked by elegance, dignity, and studied perfectionism.
The elegiac couplet of hexameter and pentameter (verse line of five feet) was taken over by Catullus, who broke with tradition by filling elegy with personal emotion. One of his most intense poems in this metre, about Lesbia, extends to 26 lines; another is a long poem of involved design in which the fabled love of Laodameia for Protesilaus is incidentally used as a paradigm. These two poems make him the inventor of the “subjective” love elegy dealing with the poet’s own passion. Gallus, whose work is lost, established the genre; Tibullus and Propertius smoothed out the metre.
Propertius’ first book is still Catullan in that it seems genuinely inspired by his passion for Cynthia: the involvement of Tibullus is less certain. Later, Propertius grew more interested in manipulating literary conventions. Tibullus’ elegy is constructed of sections of placid couplets with subtle transitions. These two poets established the convention of the “soft poet,” valiant only in the campaigns of love, immortalized through them and the Muses. Propertius was at first impervious to Augustan ideals, glorying in his abject slavery to love and his naughtiness (nequitia), though later he became acclimatized to Maecenas’ circle.
Tibullus, a lover of peace, country life, and old religious customs, had grace and quiet humour. Propertius, too, could be charming, but he was far more. He often wrote impetuously, straining language and associative sequence with passion or irony or sombre imagination.
Ovid’s aim was not to unburden his soul but to entertain. In the Amores he is outrageous and amusing in the role adopted from Propertius, his Corinna being probably a fiction. Elegy became his characteristic medium. He carried the couplet of his predecessors to its logical extreme, characterized by parallelism, regular flow and ebb, and a neat wit.
Other language and literary art forms
Speaking in the forum and law courts was the essence of a public career at Rome and hence of educational practice. From the 2nd century bc onward, Greek rhetorical art shaped Latin oratory. The dominant style in Cicero’s time was the “Asiatic”—emotional, rhythmical, and ornate. Cicero, Asiatic at first, early learned to tone down his style. Criticized later by the revivers of plain style, he insisted that style should vary with subject. But in public speaking he held that crowds were swayed less by argument than by emotion. He was the acknowledged master speaker from 70 bc until his death (43 bc). He expounded the history of Roman oratory in the Brutus and his own methods in the De oratore.
The establishment of monarchy robbed eloquence of its public importance, but rhetoric remained the crown of education. Insofar as this taught boys to marshal material clearly and to express themselves cogently, it performed the function of the modern essay; but insofar as the temptations of applause made it strained and affected, it did harm.
In the De oratore, Cicero had pleaded that an orator’s training should be in all liberal arts. Education without rhetoric was inconceivable; but what Cicero was proposing was to graft onto it a complete system of higher education. Quintilian, in his Institutio oratoria, went back to Cicero for inspiration as well as style. Much of that work is conventional, but the first and last books in particular show admirable common sense and humanity; and his work greatly influenced Renaissance education.
Quintus Fabius Pictor wrote his pioneering history of Rome during the Second Punic War, using public and private records and writing in Greek. His immediate successors followed suit. Latin historical writing began with Cato’s Origines. After him there were as many historiasters, or worthless historians, as the poetasters disdained by Cicero. The first great exception is Caesar’s Commentaries, a political apologia in the guise of unvarnished narrative. The style is dignified, terse, clear, and unrhetorical.
Sallust took Thucydides as his model: like him, he interpreted events, used speeches, and ascribed motives. In his extant monographs Bellum Catilinae and Bellum Jugurthinum, he displays a sardonic moralism, using history to emphasize the decadence of the dominant caste. The revolution in style he inaugurated gives him importance.
Livy began his 40 years’ task as Augustus came to power. His work consummated the annalistic tradition. If in historical method he fell short of modern standards, he had the literary virtues of a historian. He could vividly describe past events and interpret the participants’ views in eloquent speeches. He inherited from Cicero his literary conception of history, his copiousness, and his principle of accommodating style to subject. Indeed, he was perhaps the greatest of Latin stylists. His earlier books, where his imagination has freer play, are the most readable. In the later books, the more historical the times become, the more disturbing are his uncritical methods and his patriotic bias. Livy’s work now is judged mainly as literature.
Tacitus, on the other hand, stands higher now than in antiquity. Though his anti-imperial bias in attributing motives is plain, his facts can rarely be impugned; and his evocation of the terrors of tyranny is unforgettable. He is read for his penetrating characterizations, his drama, his ironical epigrams, and his unpredictability. His is an extreme development of the Sallustian style, coloured with archaic and poetic words, with a careful avoidance of the commonplace.
Suetonian biography apart, historiography thereafter degenerated into handbooks and epitomes until Ammianus Marcellinus appeared. He was refreshingly detached, rather ornate in style, but capable of vivid narrative and description. He continued Tacitus’ account from Domitian’s death to ad 378, more than half his work dealing with his own times.
The idea of comparing Romans with foreigners was taken up by Cornelius Nepos, a friend of Cicero and Catullus. Of his De viris illustribus all that survive are 24 hack pieces about worthies long dead and one of real merit about his friend Atticus. The very fact that Atticus and Tiro decided to publish nearly 1,000 of Cicero’s letters is evidence of public interest in people. Admiration of these fascinating letters gave rise to letter writing as a literary genre. The younger Pliny’s letters, anticipating publication, convey a possibly rose-tinted picture of civilized life. They are nothing to his spontaneous correspondence with Trajan, where one learns of routine problems, for instance with Christians confronting a provincial governor in Bithynia. The letter as a verse form, beginning with striking examples by Catullus, was established by Horace, whose Epistles carry still further the humane refinement of his gentler satires.
Suetonius’ lives of the Caesars and of poets contain much valuable information, especially since he had access to the imperial archives. His method was to cite in categories whatever he found, favourable or hostile, and to leave this raw material to the judgment of the reader. The Historia Augusta, covering the emperors from 117 to 284, is a collection of lives in the Suetonian tradition. Tacitus’ Agricola was an admiring, but not necessarily overcoloured, biographical study.
Some of the most valuable autobiography was incidental, such as Cicero’s account of his oratorical career in the Brutus. Horace’s largely autobiographical Epistles I was sealed with a miniature self-portrait. Ovid, in exile and afraid of fading from Rome’s memory, gave an invaluable account of his life in Tristia IV.
Philosophical and learned writings
The practical Roman mind produced no original philosopher. Apart from Lucretius the only name that demands consideration is Cicero’s. He was trained at Athens in the eclectic New Academy, and eclectic he apparently remained, seeking a philosophy to fit his own constitution rather than a logical system valid for all. He used the dialogue form, avowedly in order to make people think for themselves instead of following authority. Essentially, he was a philosophical journalist, composing works that became one of the means by which Greek thought was absorbed into early Christian thinking. The De officiis is a treatise on ethics. The dialogues do not follow the Platonic, or dialectic, pattern but the Aristotelian, in which speakers expounded already formed opinions at greater length.
Nor were the Romans any more original in science. Instead, they produced encyclopaedists such as Varro and Celsus. Pliny’s Natural History is a fascinating ragbag, especially valuable for art history, though it shows to what extent Hellenistic achievement in science had become confused or lost.
Cicero’s Brutus and the 10th book of Quintilian’s Institutio oratoria provide examples of general criticism. Cicero stressed the importance of a well-stocked mind and native wit against mere handbook technique. By Horace’s day, however, it had become more timely to insist on the equal importance of art. Some of Horace’s best criticism is in the Satires (I, 4 and 10; II, 1), in the epistle to Florus (II, 2), and in the epistle to Augustus (II, 1), a vindication of the Augustans against archaists. But it was his epistle to Piso and his sons (later called Ars poetica) that was so influential throughout Europe in the 18th century. It supported, among acceptable if trite theses, the dubious one that poetry is necessarily best when it mingles the useful (particularly moral) with the pleasing. Much of the work concerned itself with drama. The Romans were better at discussing literary trends than fundamental principles—there is much good sense about this in Quintilian, and Tacitus’ Dialogus is an acute discussion of the decline of oratory.
Republican and early imperial Rome knew no Latin fiction beyond such things as Sisenna’s translation of Aristides’ Milesian Tales. But two considerable works have survived from imperial times. Of Petronius’ Satyricon, a rambling picaresque novel, one long extract and some fragments remain. The disreputable characters have varied adventures and talk lively colloquial Latin. The description of the vulgar parvenu Trimalchio’s banquet is justly famous. Apuleius’ Metamorphoses (The Golden Ass) has a hero who has accidentally been changed into an ass. After strange adventures he is restored to human shape by the goddess Isis. Many passages, notably the story of Cupid and Psyche, have a beauty that culminates in the apparition of Isis and the initiation of the hero into her mysteries.
In structural geology, an anticline is an arch-shaped fold whose oldest beds are at its core. A typical anticline is convex up; the hinge, or crest, is the location where the curvature is greatest, and the limbs are the sides of the fold that dip away from the hinge. Anticlines can be recognized and differentiated from antiforms by a sequence of rock layers that become progressively older toward the center of the fold. Therefore, if the age relationships between the various rock strata are unknown, the neutral term antiform should be used.
Rock strata that grow progressively older toward the uplifted core are the trademark evidence of an anticline on a geologic map. These patterns occur because anticlinal ridges typically develop above thrust faults during crustal deformation, and the uplifted core of the fold preferentially erodes to a deeper stratigraphic level relative to the topographically lower flanks. Motion along the fault, including both shortening and extension, usually also deforms strata near the fault, which can result in an asymmetrical or overturned fold.
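This age rule is effectively a small decision procedure, and writing it out can make the map-reading logic concrete. The following Python sketch is illustrative only: the function name, the input convention (stratum ages sampled from a limb inward toward the core), and the simple monotonic-trend test are my own simplifications, not a standard tool.

    def is_anticline(ages_toward_core_ma):
        """Return True if stratum ages (in millions of years), sampled from
        a limb inward toward the fold's core, grow consistently older --
        the map-pattern evidence for an anticline. If the trend is not
        consistent, the neutral term 'antiform' should be used instead."""
        pairs = zip(ages_toward_core_ma, ages_toward_core_ma[1:])
        return all(younger < older for younger, older in pairs)

    print(is_anticline([66, 145, 201, 252]))   # True: ages increase toward the core
    print(is_anticline([66, 201, 145, 252]))   # False: no consistent age trend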
Terminology of anticlines and related folds
An antiform can be used to describe any fold that is convex up. It is the relative ages of the rock strata that separate anticlines from antiforms. The hinge of an anticline refers to the location where the curvature is greatest, also called the crest. The hinge is also the highest point on a stratum along the top of the fold. The culmination also refers to the highest point along any geologic structure. The limbs are the sides of the fold that display less curvature. The inflection point is the area on the limbs where the curvature changes direction. The axial surface is an imaginary plane connecting the hinge of each layer of rock stratum through the cross section of an anticline. If the axial surface is vertical and the angles on each side of the fold are equivalent, then the anticline is symmetrical. If the axial plane is tilted or offset then the anticline is asymmetrical. An anticline that is cylindrical has a well-defined axial surface, whereas non-cylindrical anticlines are too complex to have a single axial plane.
An overturned anticline is an asymmetrical anticline with a limb that has been tilted beyond perpendicular, so that the beds in that limb have essentially flipped over and may dip in the same direction on both sides of the axial plane. If the angle between the limbs is large (70-120 degrees), the fold is an open fold; if the angle between the limbs is small (30 degrees or less), the fold is a tight fold. If an anticline plunges (i.e., its crest is inclined to the Earth's surface), it will form Vs on a geologic map that point in the direction of plunge. A plunging anticline has a hinge that is not parallel to the Earth's surface, and all anticlines and synclines have some degree of plunge. Periclinal folds are a type of anticline with a well-defined but curved hinge line; they are doubly plunging and thus form elongate domes.
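The interlimb angle likewise lends itself to a small classifier. In the sketch below, the "open" and "tight" thresholds are the ones quoted above; the "gentle" and "close" labels for the remaining ranges are standard fold-tightness terms but are assumptions here, since the text names only open and tight folds.

    def classify_fold_tightness(interlimb_angle_deg):
        """Classify fold tightness from the interlimb angle in degrees."""
        if not 0 <= interlimb_angle_deg <= 180:
            raise ValueError("interlimb angle must be between 0 and 180 degrees")
        if interlimb_angle_deg <= 30:
            return "tight"
        if interlimb_angle_deg < 70:
            return "close"    # assumed label for the 30-70 degree gap
        if interlimb_angle_deg <= 120:
            return "open"
        return "gentle"       # assumed label for angles above 120 degrees

    print(classify_fold_tightness(25))    # tight
    print(classify_fold_tightness(100))   # open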
Folds in which the limbs dip toward the hinge and display a more U-like shape are called synclines. They usually flank the sides of anticlines and display opposite characteristics. A syncline's oldest rock strata are in its outer limbs; the rocks become progressively younger toward its hinge. A monocline is a bend in the strata resulting in a local steepening in only one direction of dip. Monoclines have the shape of a carpet draped over a stairstep.
An anticline that has been more deeply eroded in the center is called a breached or scalped anticline. Breached anticlines can become incised by stream erosion forming an anticlinal valley.
A structure that plunges in all directions to form a circular or elongate structure is a dome. Domes may be created via diapirism from underlying magmatic intrusions or upwardly mobile, mechanically ductile material such as rock salt (salt dome) and shale (shale diapir) that cause deformations and uplift in the surface rock. The Richat Structure of the Sahara is considered a dome that has been laid bare by erosion.
An anticline which plunges at both ends is termed a doubly plunging anticline, and may be formed from multiple deformations, or superposition of two sets of folds. It may also be related to the geometry of the underlying detachment fault and the varying amount of displacement along the surface of that detachment fault.
An anticlinorium is a large anticline in which a series of minor anticlinal folds are superimposed. Examples include the Late Jurassic to Early Cretaceous Purcell Anticlinorium in British Columbia and the Blue Ridge anticlinorium of northern Virginia and Maryland in the Appalachians, or the Nittany Valley in central Pennsylvania.
Anticlines usually develop above thrust faults, so even small amounts of compression and motion within the crust can have large effects on the overlying rock strata. Stresses developed during mountain building or during other tectonic processes can similarly warp or bend bedding and foliation (or other planar features). The more the underlying fault is tectonically uplifted, the more the strata are deformed and must adapt to new shapes. The shape formed will also depend strongly on the properties and cohesion of the different types of rock within each layer.
During the formation of flexural-slip folds, the different rock layers slip parallel to one another to accommodate buckling. A good way to visualize how the multiple layers are manipulated is to bend a deck of cards and imagine each card as a layer of rock stratum. The amount of slip on each side of the anticline increases from the hinge to the inflection point.
Passive-flow folds form when the rock is so soft that it behaves like a weak plastic and slowly flows. In this process, different parts of the rock body move at different rates, causing shear stress to gradually shift from layer to layer. There is no mechanical contrast between layers in this type of fold. Passive-flow folds are extremely dependent on the rock composition of the stratum and typically occur in areas with high temperatures.
Anticlines, structural domes, fault zones, and stratigraphic traps are very favorable locations for oil and natural gas drilling; about 80 percent of the world's petroleum has been found in anticlinal traps. The low density of petroleum causes oil to migrate buoyantly out of its source rock and upward toward the surface until it is trapped and stored in a reservoir rock such as sandstone or porous limestone. The oil becomes trapped, along with water and natural gas, by a caprock: an impermeable barrier such as an impermeable stratum or fault zone. Examples of low-permeability seals that contain hydrocarbons (oil and gas) in the ground include shale, limestone, sandstone, and even salt domes. The actual type of stratum does not matter as long as it has low permeability.
Water, minerals and specific rock strata such as limestone found inside anticlines are also extracted and commercialized. Lastly, ancient fossils are often found in anticlines and are used for paleontological research or harvested into products to be sold.
Anticlines can have a large effect on local geomorphology and economies. One example is the El Dorado anticline in Kansas, which was first tapped for petroleum in 1918. Following World War I and the rapid popularization of motor vehicles, the site became very prosperous for entrepreneurs. By 1995 the El Dorado oil fields had produced 300 million barrels of oil. The central Kansas uplift is an antiform composed of several small anticlines that have collectively produced more than 2.5 million barrels of oil.
Another notable anticline is the Tierra Amarilla anticline in San Ysidro, New Mexico. It is a popular hiking and biking site because of its great biodiversity, geologic beauty, and paleontological resources. This plunging anticline is made up of Petrified Forest mudstones and sandstone, and its caprock is made of Pleistocene and Holocene travertine. The anticline contains springs that deposit travertine from carbon dioxide-rich water, which helps support a rich diversity of microorganisms. The area also contains fossil remains and ancient plants from the Jurassic period that are sometimes exposed by erosion.
The Ventura Anticline is a geologic structure that is part of the Ventura oil field, the seventh-largest oil field in California, discovered in the 1860s. The anticline runs east to west for 16 miles, dipping steeply at 30-60 degrees at both ends. Ventura County has a high rate of compression and seismic activity due to the converging San Andreas Fault. As a result, the Ventura anticline rises at a rate of 5 mm per year, while the adjacent Ventura Basin converges at a rate of about 7–10 mm per year. The anticline is composed of a series of sandstone rock beds and an impermeable rock cap under which vast reserves of oil and gas are trapped. Eight oil-bearing zones along the anticline range in depth from 3,500 to 12,000 feet. The oil and gas formed these pools as they migrated upward during the Pliocene Epoch and became contained beneath the caprock. This oil field is still active and has a cumulative production of one billion barrels of oil, making it one of the most vital historical and economic features of Ventura County.
Gallery of anticlines
Tight anticline in the Wills Creek Formation, Pennsylvania
Elk Basin is a breached anticline
Anticline in the Cambrian Conococheague Formation, in the wall of Holcim Quarry, Hagerstown, Maryland
The Cave Mountain Anticline, exposed on Cave Mountain, West Virginia (see Smoke Hole Canyon).
The term night sky refers to the sky as seen at night. The term is usually associated with astronomy, with reference to views of celestial bodies such as stars, the Moon, and planets that become visible on a clear night after the Sun has set. Natural light sources in a night sky include moonlight, starlight, and airglow, depending on location and timing. The aurora borealis and aurora australis light up the skies of the Arctic and Antarctic Circles, respectively. Occasionally, a large coronal mass ejection from the sun, or simply high levels of solar wind, extends the phenomenon toward the equator.
The night sky and studies of it have a historical place in both ancient and modern cultures. In the past, for instance, farmers have used the state of the night sky as a calendar to determine when to plant crops. Many cultures have drawn constellations between stars in the sky, using them in association with legends and mythology about their deities.
The ancient practice of astrology is generally based on the belief that relationships between heavenly bodies influence or convey information about events on Earth. The scientific study of the night sky and bodies observed within it, meanwhile, takes place in the science of astronomy.
The visibility of celestial objects in the night sky is affected by light pollution. The presence of the Moon in the night sky has historically hindered astronomical observation by increasing the amount of ambient lighting. With the advent of artificial light sources, however, light pollution has been a growing problem for viewing the night sky. Special filters and modifications to light fixtures can help to alleviate this problem, but for the best seeing, both professional and amateur optical astronomers seek viewing sites located far from major urban areas.
The fact that the sky is not completely dark at night, even in the absence of moonlight and city lights, can be easily observed, since if the sky were absolutely dark, one would not be able to see the silhouette of an object against the sky.
The intensity of the sky varies greatly over the day, and the primary cause differs as well. During daytime, when the sun is above the horizon, direct scattering of sunlight (Rayleigh scattering) is the overwhelmingly dominant source of light. In twilight, the periods just after sunset and just before sunrise, the situation is more complicated and a further differentiation is required. Twilight is divided into three segments according to how far the sun is below the horizon, in steps of 6°.
After sunset, civil twilight sets in; it ends when the sun drops more than 6° below the horizon. This is followed by nautical twilight, when the sun is between 6° and 12° below the horizon, after which comes astronomical twilight, defined as the period from 12° to 18° below the horizon. When the sun drops more than 18° below the horizon, the sky generally attains its minimum brightness.
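Because these bands are fixed 6° slices, the twilight phase is a simple function of the sun's altitude. A minimal Python sketch of that mapping follows; the function name and the convention that altitude is negative below the horizon are my own choices, not a standard definition.

    def twilight_phase(sun_altitude_deg):
        """Map solar altitude (degrees; negative below the horizon)
        to the phase of day using the 6-degree twilight bands."""
        if sun_altitude_deg >= 0:
            return "day"
        if sun_altitude_deg >= -6:
            return "civil twilight"
        if sun_altitude_deg >= -12:
            return "nautical twilight"
        if sun_altitude_deg >= -18:
            return "astronomical twilight"
        return "night"  # sky near its minimum brightness

    for alt in (3, -4, -10, -15, -25):
        print(alt, "->", twilight_phase(alt))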
Depending on how dark the sky is, stars appear as hundreds or thousands of white pinpoints of light in an otherwise black sky. To the naked eye, they all appear to be equidistant on a dome above the earth because stars are much too far away for stereopsis to offer any depth cues. Visible stars range in color from blue (hot) to red (cool), but with such small points of faint light, most look white because they stimulate the rod cells without triggering the cone cells. If it is particularly dark and a particularly faint celestial object is of interest, averted vision may be helpful.
The stars of the night sky cannot be counted unaided because they are so numerous and there is no way to track which have been counted and which have not. Further complicating the count, fainter stars may appear and disappear depending on exactly where the observer is looking. The result is an impression of an extraordinarily vast star field.
Because stargazing is best done from a dark place away from city lights, dark adaptation is important to achieve and maintain. It takes several minutes for eyes to adjust to the darkness necessary for seeing the most stars, and surroundings on the ground are hard to discern. A red flashlight (torch) can be used to illuminate star charts, telescope parts, and the like without undoing the dark adaptation. (See Purkinje effect).
There are no markings on the night sky, though there exist many sky maps to aid stargazers in identifying constellations and other celestial objects. Constellations are prominent because their stars tend to be brighter than other nearby stars in the sky. Different cultures have created different groupings of constellations based on differing interpretations of the more-or-less random patterns of dots in the sky. Constellations were identified without regard to distance to each star, but instead as if they were all dots on a dome.
Orion is among the most prominent and recognizable constellations. The Big Dipper (which has a wide variety of other names) is helpful for navigation in the northern hemisphere because it points to Polaris, the north star.
The pole stars are special because they are approximately in line with the Earth's axis of rotation so they appear to stay in one place while the other stars rotate around them through the course of a night (or a year).
Planets, named for the Greek word for "wanderer," move through the star field a little each day. To the naked eye, planets look like additional points of light in the sky; their discs are not apparent without binoculars or a telescope. Venus is the most prominent planet, often called the "morning star" or "evening star" because it is brighter than the stars and visible near sunrise or sunset depending on its location in its orbit. Mercury, Mars, Jupiter, and Saturn are also visible to the naked eye.
Earth's Moon is a gray disc in the sky with cratering visible to the naked eye. It spans, depending on its exact distance, 29-33 arcminutes, which is about the size of a thumbnail at arm's length, and is readily identified. Over about 29.5 days, the moon goes through a full cycle of lunar phases. People can generally identify phases within a few days by looking at the moon. Unlike stars and most planets, the light reflected from the moon is bright enough to be seen during the day. (Venus can sometimes be seen even after sunrise.)
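That 29-33 arcminute range follows from the small-angle relation between the Moon's diameter and its distance. Here is a quick back-of-the-envelope check in Python, using round figures for the Moon's diameter and its perigee and apogee distances; these values are supplied here for illustration, not taken from the text.

    import math

    MOON_DIAMETER_KM = 3474.0                                  # approximate
    DISTANCES_KM = {"perigee": 363300.0, "apogee": 405500.0}   # approximate

    for name, distance in DISTANCES_KM.items():
        # Small-angle approximation: angle (radians) ~ diameter / distance
        arcminutes = math.degrees(MOON_DIAMETER_KM / distance) * 60
        print(f"{name}: {arcminutes:.1f} arcminutes")
    # perigee: ~32.9 arcminutes, apogee: ~29.5 arcminutes -- matching 29-33 above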
Some of the most spectacular views of the moon come during the full moon phase near sunset or sunrise. The moon on the horizon benefits from the moon illusion, which makes it appear larger. Moonlight passing through more of the atmosphere near the horizon is also reddened, coloring the moon orange or red.
Comets come to the night sky only rarely. Comets are illuminated by the sun, and their tails extend away from the sun. A comet with a visible tail is quite unusual - a great comet appears about once a decade. They tend to be visible only shortly before sunrise or after sunset because those are the times they are close enough to the sun to show a tail.
Clouds obscure the view of other objects in the sky, though varying thicknesses of cloud cover have differing effects. A very thin cirrus cloud in front of the moon might produce a rainbow-colored ring around the moon. Stars and planets are too small or dim to take on this effect, and are instead only dimmed (often to the point of invisibility). Thicker cloud cover obscures celestial objects entirely, making the sky black or reflecting city lights back down. Clouds are often close enough to afford some depth perception, though they are hard to see without moonlight or light pollution.
On clear dark nights in unpolluted areas, when the moon is thin or below the horizon, a band of what looks like white dust, the Milky Way, can be seen.
Shortly after sunset and before sunrise, artificial satellites often look like stars—similar in brightness and size—but move relatively quickly. Those that fly in low Earth orbit cross the sky in a couple of minutes. Some satellites, including space debris, appear to blink or have a periodic fluctuation in brightness because they are rotating.
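The speed of those passes is consistent with orbital mechanics: a satellite in low Earth orbit circles the planet in roughly an hour and a half, so the visible arc of sky takes only a few minutes to cross. Below is a rough check using Kepler's third law for a circular orbit, with standard values for Earth's gravitational parameter and radius and an assumed 400 km altitude; all of these figures are mine, not the article's.

    import math

    MU_EARTH = 3.986e14   # Earth's gravitational parameter, m^3/s^2
    R_EARTH = 6371e3      # mean Earth radius, m
    altitude_m = 400e3    # typical low-Earth-orbit altitude (assumed)

    semi_major_axis = R_EARTH + altitude_m
    # Kepler's third law for a circular orbit: T = 2*pi*sqrt(a^3 / mu)
    period_s = 2 * math.pi * math.sqrt(semi_major_axis**3 / MU_EARTH)
    print(f"Orbital period: {period_s / 60:.1f} minutes")  # about 92 minutes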
Meteors (commonly known as shooting stars) streak across the sky very infrequently. During a meteor shower, they may average one a minute at irregular intervals, but otherwise their appearance is a random surprise. The occasional meteor makes a fleeting streak that can be very bright in comparison to the night sky.
Aircraft are also visible at night, distinguishable at a distance from other objects because their lights blink.
- Amateur astronomy
- Asterism (astronomy)
- Astronomical object
- Earth's shadow
Tuberculosis (TB)
Tuberculosis (TB) is an infection caused by slow-growing bacteria that grow best in areas of the body that have lots of blood and oxygen. That's why it is most often found in the lungs. This is called pulmonary TB. But TB can also spread to other parts of the body, which is called extrapulmonary TB. Treatment is often a success, but it is a long process. It usually takes about 6 to 9 months to treat TB. But some TB infections need to be treated for up to 2 years.
Tuberculosis is either latent or active.
Pulmonary TB (in the lungs) is contagious. It spreads when a person who has active TB breathes out air that has the TB bacteria in it and then another person breathes in the bacteria from the air. An infected person releases even more bacteria when he or she does things like cough or laugh.
If TB is only in other parts of the body (extrapulmonary TB), it does not spread easily to others.

Some people are more likely than others to get TB; this includes people with weakened immune systems and people who spend time around someone with active TB. It is important for people who are at high risk for getting TB to get tested once or twice every year.
Most of the time when people are first infected with TB, the disease is so mild that they don't even know they have it. People with latent TB don't have symptoms unless the disease becomes active. Symptoms of active TB may include an ongoing cough, fatigue, fever, night sweats, and weight loss.
Doctors usually find latent TB by doing a tuberculin skin test. During the skin test, a doctor or nurse will inject TB antigens under your skin. If you have TB bacteria in your body, within 2 days you will get a red bump where the needle went into your skin. The test can't tell when you became infected with TB or if it can be spread to others. A blood test also can be done to look for TB.
To find pulmonary TB, doctors test a sample of mucus from the lungs (sputum) to see if there are TB bacteria in it. Doctors sometimes do other tests on sputum and blood or take a chest X-ray to help find pulmonary TB.

To find extrapulmonary TB, doctors can take a sample of tissue (biopsy) to test. Or you might get a CT scan or an MRI so the doctor can see pictures of the inside of your body.
Most of the time, doctors use antibiotics to treat active TB. It's important to take the medicine for active TB for at least 6 months. Almost all people are cured if they take their medicine just like their doctors say to take it. If tests still show an active TB infection after 6 months, then treatment continues for another 2 or 3 months. If the TB bacteria are resistant to several antibiotics (multidrug-resistant TB), then treatment may be needed for a year or longer.

People with latent TB may be treated with one antibiotic that they take daily for 9 months, or with a combination of antibiotics that they take once a week for 12 weeks while being watched by a health professional. Making sure every dose is taken reduces their risk for getting active TB.
If you miss doses of your medicine, or if you stop taking your medicine too soon, your treatment may fail or have to go on longer. You may have to start your treatment over again. This can also cause the infection to get worse, or it may lead to an infection that is resistant to antibiotics. That kind of infection is much harder to treat.

TB can only be cured if you take all the doses of your medicine. A doctor or nurse may have to watch you take it to make sure that you never miss a dose and that you take it the proper way. You may have to go to the doctor's office every day, or a nurse may come to your home or work. This is called direct observational treatment. It helps people follow all of the instructions and keep up with their treatment, which can be complex and take a long time. Cure rates for TB have greatly improved because of this type of treatment.

If active TB is not treated, it can damage your lungs or other organs and can be deadly. You can also spread TB to others by not treating an active TB infection.
Tuberculosis (TB) is caused by Mycobacterium tuberculosis, slow-growing bacteria that thrive in areas of the body that are rich in blood and oxygen, such as the lungs.
If you have latent tuberculosis (TB), you do not have symptoms and cannot spread the disease to others. If you have active TB, you do have symptoms and can spread the disease to others. Which specific symptoms you have will depend on whether your TB infection is in your lungs (the most common site) or in another part of your body (extrapulmonary TB).

There are other conditions with symptoms similar to TB, such as pneumonia and lung cancer.
Symptoms of active TB in the lungs begin gradually and develop over a period of weeks or months. You may have one or two mild symptoms and not even know that you have the disease. Common symptoms include an ongoing cough, fatigue, fever, night sweats, and weight loss.

Symptoms of TB outside the lungs (extrapulmonary TB) vary widely depending on which area of the body is infected. For example, back pain can be a symptom of TB in the spine, or your neck may become swollen if lymph nodes in the neck are infected.
Tuberculosis (TB) develops when Mycobacterium tuberculosis bacteria are inhaled into the lungs. The infection usually stays in the lungs, but the bacteria can travel through the bloodstream to other parts of the body (extrapulmonary TB).

An initial (primary) infection can be so mild that you don't even know you have an infection. In a person who has a healthy immune system, the body usually fights the infection by walling off (encapsulating) the bacteria into tiny capsules called tubercles. The bacteria remain alive but cannot spread to surrounding tissues or other people. This stage is called latent TB, and most people never go beyond it.
A reaction to a tuberculin skin test is how most people find out they have latent TB. It takes about 48 hours after the test for a reaction to develop, which is usually a red bump where the needle went into the skin. Or you could have a rapid blood test that provides results in about 24 hours.

If a person's immune system becomes unable to prevent the bacteria from growing, the TB becomes active. Of people who have latent TB, 5% to 10% (1 to 2 people out of 20) will develop active TB at some point in their lives (footnote 1).
Active TB in the lungs (pulmonary TB) is contagious. TB spreads when a person who has active disease exhales air that contains TB-causing bacteria and another person inhales the bacteria from the air. These bacteria can remain floating in the air for several hours, and coughing, sneezing, laughing, or singing releases more bacteria into the air.

In general, after 2 weeks of treatment with antibiotics, you cannot spread an active pulmonary TB infection to other people.
Skipping doses of medicine can delay a cure and cause a relapse. In these cases, you may need to start treatment over. Relapses usually occur within 6 to 12 months after treatment. Not taking the full course of treatment also allows antibiotic-resistant strains of the bacteria to develop, making treatment more difficult.

Untreated active TB can cause serious complications and can be fatal.
Active TB in parts of the body other than the lungs (extrapulmonary TB) is not spread easily to other people. You take the same medicines that are used to treat pulmonary TB. You may need other treatments depending on where in your body the infection is growing and how severe it is.

Infants and children and people with HIV or AIDS who have active TB need special care.
People are at increased risk of infection with tuberculosis (TB) when they spend time around someone who has active TB or when their immune systems are weakened. People who have an infection that cannot spread to others (latent TB infection) are at risk of developing active TB if their immune systems become impaired.
Call your doctor if you have symptoms of active TB, such as an ongoing cough, fatigue, fever, or night sweats.
Health professionals and public health agencies can help you discover whether you have tuberculosis (TB), and they can also help you with treatment. If you have multidrug-resistant TB (MDR-TB), you may need to go to a special treatment center that treats this type of TB.

To prepare for your appointment, see the topic Making the Most of Your Appointment.
Doctors diagnose active tuberculosis (TB) in the lungs (pulmonary TB) by using a medical history and physical exam, by checking your symptoms (such as an ongoing cough, fatigue, fever, or night sweats), and by looking at the results of tests such as a sputum culture and a chest X-ray. Diagnosing TB in other parts of the body (extrapulmonary TB) requires more testing, such as a tissue biopsy, a CT scan, or an MRI.

Testing for HIV infection is often done at the time of TB diagnosis, and you may also have other blood tests. During treatment, a sputum culture is done once a month—or more often—to make sure that the antibiotics are working. You may have a chest X-ray at the end of treatment to use as a comparison in the future.
You may also have tests to see if TB medicines are harming other parts of your body.

Public health officials encourage screening for people who are at risk for getting TB.
Doctors treat tuberculosis (TB) with antibiotics to kill the TB bacteria. These medicines are given to everyone who has TB, including infants, children, pregnant women, and people who have a weakened immune system (footnote 4).

Treatment of latent TB is recommended for anyone with a skin test that shows a TB infection, and it is especially important for people who are at high risk of developing active TB.
Treatment for tuberculosis in parts of the body other than the lungs (extrapulmonary TB) usually is the same as for pulmonary TB. You may need other medicines or forms of treatment depending on where the infection is in the body and whether complications develop. In some cases, you may need treatment in a hospital.

If treatment is not successful, the TB infection can flare up again (relapse). People who have relapses usually have them within 6 to 12 months after treatment. Treatment for relapse is based on the severity of the disease and which medicines were used during the first treatment.
Active tuberculosis (TB) is very contagious. The World Health Organization (WHO) estimates that one-third of the world's population is infected with the bacteria that cause TB. Avoiding close contact with people who have untreated active TB, and treating latent TB before it becomes active, reduce the chance of getting an active TB infection.

A TB vaccine (bacille Calmette-Guerin, or BCG) is used in many countries to prevent TB, but this vaccination is almost never used in the United States.
Home treatment for tuberculosis (TB) focuses on taking the medicines correctly to reduce the risk of developing antibiotic-resistant TB. During treatment for TB, eat healthy foods and get enough sleep and some exercise to help your body fight the infection.
If you are losing too much weight, eat balanced meals with enough protein and calories to help you keep weight on. If you need help, ask to talk with a registered dietitian.
Because TB treatment takes so long, it is normal to feel discouraged at times. Your doctor or health department can help you find a counselor or social worker to help you cope with your feelings. If you cannot afford counseling or treatment, there may be places that offer free or less costly services.
Several antibiotics are used at the same time to treat active tuberculosis (TB) disease. For people who have multidrug-resistant TB, treatment may continue for as long as 24 months. These antibiotics are given as pills or injections. For active TB, there are different treatment recommendations for children, pregnant women, people who have HIV and TB, and people who have drug-resistant TB.

TB disease that occurs in parts of your body other than the lungs (extrapulmonary TB) usually is treated with the same medicines and for the same length of time as active TB in the lungs (pulmonary TB). But TB throughout the body (miliary TB) or TB that affects the brain or the bones and joints in children may be treated for at least 12 months.
Corticosteroid medicines also may be given in some severe cases to reduce inflammation. They may be helpful for children at risk of central nervous system problems caused by TB and for people who have conditions such as high fever or TB throughout the body (miliary TB).
One antibiotic usually is used to treat latent TB infection, which cannot be spread to others but can develop into active TB disease. The antibiotic usually is taken for 4 to 9 months. Or more than one antibiotic may be taken once a week for 12 weeks; for this treatment, a health professional watches you take each dose of antibiotics. Taking every dose of antibiotic helps prevent the TB bacteria from becoming resistant to the antibiotics.

Multiple-drug therapy to treat TB usually involves taking four antibiotics at the same time; this is the standard treatment for active TB. Other anti-tuberculosis medicines may be used for people with multidrug-resistant TB.
If you miss doses of medicine or you stop treatment too soon, your treatment may go on longer or you may have to start over. This can also cause the infection to get worse, or it may lead to antibiotic-resistant infections that are much harder to treat.

Taking all of the medicines is especially important for people who have an impaired immune system. They may be at an increased risk for a relapse because the original TB infection was never fully cured.
Surgery is rarely used to treat tuberculosis (TB). But it may be used to treat extensively drug-resistant TB (XDR-TB) or to treat complications of an infection in the lungs or another part of the body. Surgery has a high success rate, but it also has a risk of complications, which may include infections other than TB and shortness of breath after surgery.

Surgery also may be needed to remove or repair organs damaged by TB in parts of the body other than the lungs (extrapulmonary TB) or to prevent other rare complications.
Citations

1. Pasipanodya J, et al. (2015). Tuberculosis and other mycobacterial diseases. In ET Bope et al., eds., Conn's Current Therapy 2015, pp. 411–417. Philadelphia: Saunders.

2. Ludvigsson JF, et al. (2007). Coeliac disease and risk of tuberculosis: A population based cohort study. Thorax, 62(1): 23–28.

3. Centers for Disease Control and Prevention (2005). Guidelines for using the QuantiFERON®-TB test for diagnosing latent Mycobacterium tuberculosis infection. MMWR, 54(RR-15): 49–55.

4. American Thoracic Society, Centers for Disease Control and Prevention, Infectious Diseases Society of America (2003). Treatment of tuberculosis. American Journal of Respiratory and Critical Care Medicine, 167(4): 603–662.

5. Centers for Disease Control and Prevention (2011). Recommendations for use of an isoniazid-rifapentine regimen with direct observation to treat latent Mycobacterium tuberculosis infection. MMWR, 60(48): 1650–1653. Available online: http://www.cdc.gov/mmwr/preview/mmwrhtml/mm6048a3.htm?s_cid=mm6048a3_w.
Other Works Consulted
Akolo C, et al. (2010). Treatment of latent tuberculosis infection in HIV infected persons. Cochrane Database of Systematic Reviews (1).
Centers for Disease Control and Prevention (2012). Reported Tuberculosis in the United States, 2011. Atlanta: U.S. Department of Health and Human Services. Also available online: http://www.cdc.gov/tb/statistics/reports/2011/default.htm.
U.S. Centers for Disease Control and Prevention (2010). Updated guidelines for using interferon gamma release assays to detect Mycobacterium tuberculosis infection—United States, 2010. MMWR, 59(RR-05): 1–25. Available online: http://www.cdc.gov/mmwr/preview/mmwrhtml/rr5905a1.htm?s_cid=rr5905a1_e.
World Health Organization (2011). Guidelines for intensified tuberculosis case-finding and isoniazid preventive therapy for people living with HIV in resource-constrained settings. Available online: http://www.who.int/hiv/pub/tb/9789241500708/en.
World Health Organization (2011). Guidelines for the programmatic management of drug-resistant tuberculosis: 2011 update. European Respiratory Journal, 38(3): 516–528.
Ziganshina LA, Eisenhut M (2011). Tuberculosis (HIV-negative people), search date July 2010. BMJ Clinical Evidence. Available online: http://www.clinicalevidence.com.
By Healthwise Staff. Primary Medical Reviewer: E. Gregory Thompson, MD - Internal Medicine. Specialist Medical Reviewer: R. Steven Tharratt, MD, MPVM, FACP, FCCP - Pulmonology, Critical Care Medicine, Medical Toxicology.
Current as of July 6, 2015.
To learn more about Healthwise, visit Healthwise.org.
© 1995-2015 Healthwise, Incorporated. Healthwise, Healthwise for every health decision, and the Healthwise logo are trademarks of Healthwise, Incorporated.
In this lesson, our professor Rebekah Hendershot introduces complex rhetorical modes. She starts by explaining what a rhetorical mode is, then discusses process analysis, cause and effect, definition, description, narration, and induction and deduction.
A rhetorical mode is a common pattern of argument.
Studying rhetorical modes will give you ready-made approaches to writing your essays on the exam.
Some of the multiple-choice questions on the test will also use terminology associated with rhetorical modes.
1. Process Analysis
In this rhetorical mode, the writer uses a step-by-step process to explain either how to do something or how something was done. Usually, there are examples to spice things up.
You’ll usually want to describe a process in chronological order; think about recipes.
Use transition words (first, next, finally, etc.) to make the stages of the process clear.
Use appropriate terminology, and avoid jargon that a reader unfamiliar with your process will not recognize (for example, a conjuring term like "French drop").
Make sure that every step is clear and nothing is left out.
2. Cause and Effect
In this rhetorical mode, the writer explains why things should be or should have been done—why things work.
This mode is all about finding underlying causes.
Don’t confuse a connection in time or space with true cause and effect. The rooster’s crowing doesn’t make the sun come up!
Use carefully chosen examples to turn causal relationships into cause-and-effect explanations.
Make sure to address each step in a series of causal relationships.
3. Definition
In this rhetorical mode, the writer uses a variety of rhetorical techniques to define a term. These techniques may include analogy, negation, classification, and examples.
Keep your reason for defining something in mind as you’re writing.
Define key terms according to what you know of your audience—don’t bore your reader by over-defining or confuse him or her by leaving obscure terms undefined.
Explain the background when it’s relevant.
Define by negation when appropriate.
Combine definition with any number of other rhetorical modes when applicable.
4. Description
In this rhetorical mode, the writer uses sensory details and the techniques of subjective and objective description to hold the reader's interest and convey vital information.
When possible, use all five senses!
Place the most striking examples at the beginnings and ends of your paragraphs for maximum effect.
Show, don’t tell.
Use concrete nouns and adjectives—preferably nouns.
Concentrate on details that will convey your impression most effectively.
Employ figures of speech, especially similes, metaphors, and personification, when appropriate.
When describing people, try to focus on distinctive mannerisms; if possible, go beyond physical appearance.
Dialogue and quotations are your friends!
A brief anecdote is worth a thousand abstract words.
Whenever possible, use action verbs.
5. Narration
In this rhetorical mode, the writer arranges information in chronological order to tell a story.
When possible, structure the events in chronological order.
Make your story complete: use a beginning, middle, and end.
Provide a realistic setting, especially at the beginning.
Whenever possible, use action verbs.
Provide concrete and specific details.
Show, don’t tell. (Use anecdotes and examples.)
Establish a clear point of view.
Include appropriate amounts of dialogue (or quotations).
6. Induction and Deduction
In induction, the writer uses specific examples to reach a general conclusion.
In deduction, the writer uses generalizations to draw conclusions about a specific case.
When using inductive reasoning, proceed from the specific to the general.
Make sure you have enough specific information to make a generalization!
When using deductive reasoning, proceed from the general to the specific.
Make sure your generalization is credible and relevant to your specific situation before you apply it!
| http://www.educator.com/language/english/ap-english-language-composition/hendershot/complex-rhetorical-modes.php |
4.28125 | Increased atmospheric carbon dioxide is bad for the planet and for human health, leading scientists and engineers alike to search for ways to capture this pollutant in a process known as carbon sequestration.
But new research has shown that putting CO2 from cars and coal use out of sight might not really solve the problem.
According to Stanford geophysicist Mark Zoback, storing carbon dioxide underground could trigger small earthquakes that might breach the storage system, allowing the gas back into the atmosphere (Stanford).
Carbon sequestration is the placement of CO2 into a repository in such a way that it will remain permanently sequestered. Efforts are focused on two categories of repositories: geologic formations and terrestrial ecosystems (NETL).
Even though the saline aquifers that make the best storage areas for carbon emissions are deep within the earth, they are made of dense, well-cemented sedimentary rock with low permeability. This means that injecting CO2 into them at high pressure may trigger earthquake activity.
“It is not the shaking an earthquake causes at the surface that creates the hazard in this instance, it is what it does at depth,” Zoback told Stanford University News. “It may not take a very big earthquake to damage the seal of an underground reservoir that has been pumped full of carbon dioxide.”
Although it’s unlikely the earthquakes would be big enough to be a safety concern, allowing the carbon to seep back into the atmosphere would render the very costly and complex process of sequestration futile.
Stanford University News reports that “there are two sequestration projects already under way, in Norway and Algeria, and so far they appear to be working as planned. But Zoback said 3,400 such projects would be needed worldwide by midcentury to deal with the volume of carbon dioxide that we will be generating.”
Image Credit: Flickr - madiko83 | http://www.care2.com/causes/carbon-sequestration-lead-to-more-earthquakes.html |
4.25 | A landslide is the mass movement of rock, debris, or earth down a slope; the term covers a broad range of motions in which earth material is dislodged by falling, sliding, or flowing under the influence of gravity. Landslides often take place in conjunction with earthquakes, floods, and volcanoes. The Himalayan Mountains, the north-east hill ranges, the Western Ghats, and the Nilgiris experience considerable landslide activity of varying intensities. Also called a landslip, a landslide is a downward mass movement of earth or rock on an unstable slope. It takes many forms, depending on differences in rock structure, the coherence of the material involved, the degree of slope, the amount of included water, the extent of natural or artificial undercutting at the base of the slope, the relative rate of movement, and the relative quantity of material involved. Many terms cover these variations: creep, earth flow, mudflow, solifluction, and debris avalanche are related forms in which mass movement is by flowage. If shearing movement occurs on a surface of consolidated rock, the dislocated mass is a debris slide. Cliffs may become so steep through undercutting by rivers, glaciers, or waves that masses of rock fall freely, constituting the rock-fall type of landslide.
Causes of Landslides:
There are several factors which lead to the occurrence of landslides. Seismic activity, intensity of rainfall, steep slopes, rigidity of slopes, highly weathered rock layers, soil layers formed under gravity, and poor drainage are all natural factors that cause landslides. Beyond these, many man-made factors also contribute to the occurrence of landslides: land use patterns, non-engineered construction, mining and quarrying, non-engineered excavation, and deforestation leading to soil erosion.
Protection Measures:
Generally, landslides happen where they have already occurred in the past, or in identifiable hazard locations. The following areas are typically considered safe from landslides:
I. Areas that have not moved in the past
II. Relatively flat areas away from sudden changes in slope
III. Areas at the top of or along ridges but set back from the edge of slopes.
However, homes built at the toe of steep slopes are frequently vulnerable to slides and debris flows that originate on property controlled by others. Adoption of slope-stabilizing methods and professional site investigation by an engineering geologist and a technical engineer have been shown to reduce landslide damage by over 95%. But in many situations preventing landslides may be impractical.
Snow Avalanche:
A snow avalanche is a large mass of snow or rock debris that moves rapidly down a mountain slope, sweeping and grinding everything in its path. An avalanche begins when a mass of material overcomes the frictional resistance of the sloping surface, often after its foundation is loosened by rains or is melted by a warm, dry wind. Vibrations caused by loud noises, such as artillery fire, thunder, or blasting, can start the mass in motion. Some snow avalanches develop during heavy snowstorms and slide while snow is still falling; more often, they occur after the snow has accumulated at a given site. The wet avalanche is perhaps the most dangerous because of its large weight, heavy texture, and tendency to solidify as soon as it stops moving. The dry type is also very dangerous because its entraining of great amounts of air makes it act like a fluid; this kind of avalanche may flow up the opposite side of a narrow valley. Avalanches carry a considerable amount of rock debris along with snow and are therefore significant geological agents; in addition to transporting unsorted materials to the bottoms of slopes, they may, if repeated, cause an important amount of erosion. From the above definitions and descriptions, it will be seen that landslides and snow avalanches are phenomena of mountain regions and both involve the swift and sudden movement of large masses of material falling or slipping down a...
4.03125 | Extreme Cold Weather FAQ
What is hypothermia?
When exposed to cold temperatures, your body begins to lose heat faster than it can be produced. The result is hypothermia, or abnormally low body temperature.
Body temperature that is too low affects the brain, making the victim unable to think clearly or move well. This makes hypothermia particularly dangerous because a person may not know it is happening and won't be able to do anything about it.
Hypothermia occurs most commonly at very cold environmental temperatures, but can occur even at cool temperatures (above 40°F) if a person becomes chilled from rain, sweat, or submersion in cold water.
Who is most at risk for hypothermia?
Victims of hypothermia are most often:
- Elderly people with inadequate food, clothing, or heating
- Babies sleeping in cold bedrooms
- Children left unattended
- Adults under the influence of alcohol
- Disabled individuals
- People who remain outdoors for long periods (the homeless, hikers, hunters, etc.)
What are the warning signs for hypothermia?
- Confusion/fumbling hands
- Memory loss/slurred speech
- Drowsiness
Infants:
- Bright red, cold skin
- Very low energy
What should I do if I see someone with warning signs of hypothermia?
If you notice signs of hypothermia, take the person's temperature. If it is below 95°F (35°C), the situation is an emergency: get medical attention immediately. If medical care is not available, begin warming the person, as follows:
- Get the victim into a warm room or shelter.
- If the victim has on any wet clothing, remove it.
- Warm the center of the body first (chest, neck, head, and groin) using an electric blanket, if available. Or use skin-to-skin contact under loose, dry layers of blankets, clothing, towels, or sheets.
- Warm beverages can help increase the body temperature, but do NOT give alcoholic beverages. Do not try to give beverages to an unconscious person.
- After body temperature has increased, keep the person dry and wrapped in a warm blanket, including the head and neck.
- Get medical attention as soon as possible.
A person with severe hypothermia may be unconscious and may not seem to have a pulse or to be breathing. In this case, handle the victim gently, and get emergency assistance immediately.
Even if the victim appears dead, CPR should be provided. CPR should continue while the victim is being warmed, until the victim responds or medical aid becomes available. In some cases, hypothermia victims who appear to be dead can be successfully resuscitated.
What is frostbite?
Frostbite is an injury to the body that is caused by freezing. Frostbite causes a loss of feeling and color in affected areas. It most often affects the nose, ears, cheeks, chin, fingers, or toes. Frostbite can permanently damage the body, and severe cases can lead to amputation.
What are the warning signs of frostbite?
At the first signs of redness or pain in any skin area, get out of the cold or protect any exposed skin—frostbite may be beginning. Any of the following signs may indicate frostbite:
- A white or grayish-yellow skin area
- Skin that feels unusually firm or waxy
Note: A victim is often unaware of frostbite until someone else points it out because the frozen tissues are numb.
What should I do if I see someone with warning signs of frostbite?
If you detect symptoms of frostbite, seek medical care. Because frostbite and hypothermia both result from exposure, first determine whether the victim also shows signs of hypothermia, as described previously. Hypothermia is a more serious medical condition and requires emergency medical assistance.
If (1) there is frostbite but no sign of hypothermia and (2) immediate medical care is not available, proceed as follows:
- Get into a warm room as soon as possible.
- Unless absolutely necessary, do not walk on frostbitten feet or toes; this increases the damage.
- Immerse the affected area in warm (not hot) water; the temperature should be comfortable to the touch for unaffected parts of the body.
- Or, warm the affected area using body heat. For example, the heat of an armpit can be used to warm frostbitten fingers.
- Do not rub the frostbitten area with snow or massage it at all. This can cause more damage.
- Don't use a heating pad, heat lamp, or the heat of a stove, fireplace, or radiator for warming. Affected areas are numb and can be easily burned.
Protect yourself when it is extremely cold
- The World Health Organization recommends keeping indoor temperatures between 64 and 75 degrees Fahrenheit for healthy people. The minimum temperature should be kept above 68 degrees Fahrenheit to protect the very young, the elderly, or people with health problems.
- Watch out for signs of hypothermia. Early signs of hypothermia in adults include shivering, confusion, memory loss, drowsiness, exhaustion and slurred speech. Infants who are suffering from hypothermia may appear to have very low energy and bright red, cold skin.
- When outside, take extra precautions to reduce the risk of hypothermia and frostbite. Dress appropriately; ensure the outer layer of clothing is tightly woven to guard against loss of body heat. When outdoors, don’t ignore the warning signs. Shivering is an important first sign that the body is losing heat and a signal to quickly return indoors.
- For those with cardiac problems or high blood pressure, follow your doctor's orders about shoveling or performing any strenuous exercise outside. Healthy adults should always dress appropriately and work slowly when doing heavy outdoor chores.
Stay safe while heating your home
Take precautions to avoid exposure to dangerous levels of carbon monoxide.
- Carbon monoxide (CO) is a potentially deadly gas. It is colorless, odorless, tasteless and non-irritating. It is produced by burning fuels such as wood, oil, natural gas, kerosene, coal and gasoline.
- Symptoms of carbon monoxide poisoning are similar to the flu but do not include a fever. At lower levels of exposure, a person may experience a headache, fatigue, nausea, vomiting, dizziness, and shortness of breath. Exposure to very high levels of carbon monoxide can result in loss of consciousness and even death.
- If you use a fireplace, wood stove, or portable kerosene heater to stay warm, be sure there is adequate ventilation to the outside. Without enough fresh air, carbon monoxide fumes can build up in your home. Never use a natural gas or propane stove/oven to heat your home. If you are using a kerosene heater, use 1-K grade kerosene only. Never substitute with fuel oil, diesel, gasoline or yellow (regular) kerosene.
- Open a window to provide ventilation when a portable kerosene heater is in use to reduce carbon monoxide fumes inside the home. If you plan to cook on a barbeque grill or camp stove, remember these also produce carbon monoxide and are for outdoor use only.
- Wood stoves, space heaters, electric heaters, kerosene heaters and pellet stoves can be dangerous unless proper safety precautions are followed. Learn more at http://www.health.ny.gov/environmental/indoors/heaters/
- Never try to thaw a pipe with an open flame or torch, and be aware of the potential for electric shock in and around standing water. To keep water pipes from freezing in the home, let faucets drip a little to avoid freezing, and open cabinet doors to allow more heat to reach un-insulated pipes under a sink or appliance near an outer wall. Keep the heat on and set the thermostat no lower than 55 degrees.
Never run a generator in your home or indoor spaces, such as garages, basements, porches, crawlspaces or sheds, or in partly enclosed spaces such as carports or breezeways. Generators should only be operated outside, far away from (25 feet or more if possible) and downwind of buildings. Carbon monoxide in the generator's fumes can build up and cause carbon monoxide poisoning, which can lead to death. Do not exceed the rated capacity of your generator. Overloading your generator can damage it and any appliances connected to it. Fire may result. Be sure to follow the manufacturer's instructions. Fuel spilled on a hot generator can cause an explosion. If your generator has a detachable fuel tank, remove it before refilling. If this is not possible, shut off the generator and let it cool before refilling.
When adding fuel to a space heater, or wood to a wood stove or fireplace, wear non-flammable gloves. Never add fuel to a space heater when it is hot. The fuel can ignite, burning you and your home. Keep the heater away from objects that can burn, such as furniture, rugs or curtains. If you have a fire extinguisher, keep it nearby. Be careful with candles--never leave them burning if you leave the room. Keep children away from space heaters, fireplaces and wood stoves to avoid accidental burns.
Cleaning up from a snowstorm is hard work. Before you pick up a snow shovel, consider your physical condition. If you have cardiac problems or high blood pressure, follow your doctor's orders about shoveling or performing any strenuous exercise outside.
Even otherwise-healthy adults should remember to dress appropriately and work slowly when doing heavy outdoor chores.
Check on your family or neighbors and find out how they're doing. Make sure they know what to do--and what not to do--to protect their health.
More information and precautions about cold weather can be found at: http://www.health.ny.gov/environmental/emergency/weather/cold/cold_weather_tips.htm | http://www.aging.ny.gov/News/2014/2014News01.cfm |
4.40625 | Statics/Vector Math
- 1 Adding Vectors
- 2 Multiplying Vectors
- 3 Vector Rules
- 4 External Links
Adding Vectors
Let's say you have a box on the ground, and the box is being pulled in two directions with a certain force. You can predict the motion of the box by finding the net force acting on the box. If each force vector (where the magnitude is the tension in the rope, and the direction is the direction that the rope is "pointing") can be measured, you can add these vectors to get the net force. There are two methods for adding vectors:
The Parallelogram Method
This is a graphical method for adding vectors. First, a little terminology:
- The tail of a vector is where it originates.
- The head of a vector is where it goes. The head is the end with the arrowhead.
This method is most easily executed using graph paper. Establish a rectangular coordinate system, and draw the first vector to scale with the tail at the origin. Then, draw the second vector (again, to scale) with its tail coincident with the head of the first vector. Then, the properties of the sum vector are as follows:
- The length of the sum vector is the distance measured from the origin to the head of the second vector.
- The direction of the sum vector is the angle measured from the positive x-axis to the segment from the origin to the head of the second vector.
In the image at the right, the vectors (10, 53°07'48") and (10, 36°52'12") are being added graphically. The result is (19.80, 45°00'00"). (How did I measure out those angles so precisely? I did that on purpose.)
The native vector format for the parallelogram method is the 'polar form'.
The Computational Method
When you use the computational method, you must resolve each vector into its x- and y-components. Then, simply add the respective components.
Converting Polar Vectors to Rectangular Vectors
If a vector is given by (r, θ), where r is the length and θ is the direction,
- x = r cos θ
- y = r sin θ
Converting Rectangular Vectors to Polar Vectors
If a vector is given by ⟨x, y⟩, then its polar form (r, θ) is given by:
- r = √(x² + y²)
- θ = arctan(y / x)
Remember that the arctan() function only returns values in the range [-π/2, π/2]; therefore, if your vector is in the second or third quadrant, you will have to add π to whatever angle is returned from the arctan() function.
Again referring to the image at the right, notice that the first vector can be expressed as ⟨6, 8⟩, and the second is equivalent to ⟨8, 6⟩. (Verify this.) Then, you simply add the components: ⟨6, 8⟩ + ⟨8, 6⟩ = ⟨14, 14⟩.
You should verify that ⟨14, 14⟩ is equal to (19.80, 45°00'00").
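Below is a short Python sketch (not part of the original text) that carries out this computational method end to end; note that math.atan2 performs the quadrant correction described above automatically, so it is used in place of a bare arctan.

```python
import math

def polar_to_rect(r, theta_deg):
    """Resolve a polar vector (length r, direction theta in degrees) into x- and y-components."""
    theta = math.radians(theta_deg)
    return r * math.cos(theta), r * math.sin(theta)

def rect_to_polar(x, y):
    """Convert rectangular components back to polar form (length, direction in degrees)."""
    r = math.hypot(x, y)                    # sqrt(x**2 + y**2)
    theta = math.degrees(math.atan2(y, x))  # atan2 covers all four quadrants
    return r, theta

# The two vectors from the example above: (10, 53.13 deg) and (10, 36.87 deg)
x1, y1 = polar_to_rect(10, 53.13)  # approximately <6, 8>
x2, y2 = polar_to_rect(10, 36.87)  # approximately <8, 6>
print(rect_to_polar(x1 + x2, y1 + y2))  # roughly (19.80, 45.0)
```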
Multiplying Vectors
There are two ways to multiply vectors. I will not get into specific applications here; you will see many of those as you progress through the book.
The Dot Product
The dot product of two vectors results in a scalar. The dot product is the sum of the product of the components. For example:
< 1 , 2 > ∙ < 3 , 4 >
-----------
  |   +-----> 2 x 4 = 8
  +---------> 1 x 3 = 3
              ------
                11
A useful relation between vectors, their lengths, and the angle between them is given by the definition of the dot product:
a · b = |a| |b| cos θ
where:
- a and b are the vectors.
- |a| and |b| are the vectors' magnitudes.
- θ is the angle between the vectors.
The Cross Product
The cross product of two vectors results in another vector. The cross product is only applicable to 3-space vectors. Remember the three unit vectors:
- i is the unit vector along the x-axis
- j is the unit vector along the y-axis
- k is the unit vector along the z-axis
Now if you have two vectors a = ⟨a₁, a₂, a₃⟩ and b = ⟨b₁, b₂, b₃⟩, the cross product is given by solving a determinant as follows:
        | i    j    k  |
a × b = | a₁   a₂   a₃ |
        | b₁   b₂   b₃ |
      = (a₂b₃ − a₃b₂) i − (a₁b₃ − a₃b₁) j + (a₁b₂ − a₂b₁) k
For more information on how to solve a determinant, consult the external links below.
The cross product of two vectors, the lengths of those vectors, and the short angle between the vectors are related as follows:
|a × b| = |a| |b| sin θ
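As a numerical check (an illustrative sketch, not from the original text), NumPy's dot and cross functions implement exactly these component definitions, and both product–angle relations can be verified:

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 5.0, 6.0])

dot = np.dot(a, b)       # sum of products of components: 1*4 + 2*5 + 3*6 = 32
cross = np.cross(a, b)   # expands the i, j, k determinant: [-3, 6, -3]

# Recover the angle from the dot-product relation, then check |a x b| = |a||b| sin(theta)
theta = np.arccos(dot / (np.linalg.norm(a) * np.linalg.norm(b)))
print(np.linalg.norm(cross))                                   # ~7.348
print(np.linalg.norm(a) * np.linalg.norm(b) * np.sin(theta))   # ~7.348 (same value)
```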
The Right-Hand Rule
Geometrically, the cross product gives a vector that is perpendicular to the two arguments. Notice the reference to a vector, not the vector. This is because there are infinitely many vectors that are normal to two non-zero vectors. The direction of the cross product can be determined using the right-hand rule: Extend the fingers of your right hand, lay your straightened hand along the first vector, pointing your finger tips in the same direction as the vector. Curl your fingers through the short angle from the first vector to the second vector. Your thumb will point in the direction of the product vector.
Dots and Crosses of the Unit Vectors
- A unit vector dotted into itself gives one.
- A unit vector dotted into a different unit vector gives zero.
Order the unit vectors in this order: i, j, k. Start at the first vector, move to the second vector, and keep going to the cross-product. If you moved immediately to the right first, the answer is positive. If you moved to the left first, the answer is negative. For example:
i × j = k, but j × i = −k
Vector Rules
Given vectors a and b and scalar r: | https://en.m.wikibooks.org/wiki/Statics/Vector_Math |
4.03125 | Cerenkov radiation is emitted when a charged particle passes through a medium with a velocity greater than the velocity of light in that medium. In contrast to Bremsstrahlung emission—which is mainly due to interactions between charged particles and nuclei—the emission of Cerenkov radiation is a property of the gross structure of the absorber material. As a charged particle passes through the bound electrons of a material, it induces a polarization of the medium along its path. The time variation of this polarization can lead to radiation. If the charge is moving slowly, the phase relations will be random for radiation from different points along the path and no coherent wave front will be formed. However, if the particle velocity (v) is greater than the velocity of light in the medium (c/n), where n is the Refractive Index, then a coherent wave front can be formed and radiation emitted. The radiation will be in phase and will be emitted in a forward cone of semi-angle θ where:
cos θ = c / (n v) = 1 / (n β), with β = v/c
As the velocity increases, cos θ decreases and θ increases; thus, the cone opens out in contrast to bremsstrahlung emission. The radiation is mainly in the blue end of the visible spectrum and in the ultraviolet.
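As a rough numerical illustration (the medium and speeds below are assumed for the example, not taken from the text), the cone angle follows directly from cos θ = 1/(nβ):

```python
import math

def cerenkov_angle_deg(beta, n):
    """Semi-angle of the Cerenkov cone from cos(theta) = 1 / (n * beta).
    Returns None below threshold (v <= c/n), where no coherent wave front forms."""
    if beta * n <= 1.0:
        return None
    return math.degrees(math.acos(1.0 / (n * beta)))

n_water = 1.33                             # approximate refractive index of water
print(cerenkov_angle_deg(0.70, n_water))   # None: below threshold, no radiation
print(cerenkov_angle_deg(0.99, n_water))   # ~40.6 degrees: the cone opens out as v grows
```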
Cerenkov radiation may be observed around the core of a water-cooled nuclear research reactor of the swimming pool type. In this case, it is caused by high energy beta particles from the beta decay of fission products in the fuel rods.
Cerenkov detectors, operating in a similar way to scintillation detectors, can be used to detect high energy beta particles. | http://www.thermopedia.com/content/621/ |
4.125 | February 12, 2016
Scoliosis affects about 2 - 3% of the population (about 6 million people in the United States). It can occur in adults but is more commonly diagnosed for the first time in children aged 10 - 15 years. About 10% of adolescents have some degree of scoliosis, but less than 1% of them develop scoliosis that requires treatment. The condition also tends to run in families. Among persons with relatives who have scoliosis, about 20% develop the condition.
Scoliosis that is not linked to any physical impairment, as well as scoliosis linked to a number of spine problems, may be seen in the adult population as well.
Vertebrae. The spine is a column of small bones, or vertebrae, that support the entire upper body. The column is grouped into three sections of vertebrae:
- Cervical (C) vertebrae are the 7 spinal bones that support the neck.
- Thoracic (T) vertebrae are the 12 spinal bones that connect to the rib cage.
- Lumbar (L) vertebrae are the 5 lowest and largest bones of the spinal column. Most of the body's weight and stress falls on the lumbar vertebrae.
Each vertebra can be designated by using a letter and number; the letter reflects the region (C=cervical, T=thoracic, and L=lumbar), and the number signifies its location within that region. For example, C4 is the fourth bone down in the cervical region, and T8 is the eighth thoracic vertebra.
Below the lumbar region is the sacrum, a shield-shaped bony structure that connects with the pelvis at the sacroiliac joints. At the end of the sacrum are 2 - 4 tiny, partially fused vertebrae known as the coccyx or "tail bone."
The Spinal Column and its Curves. Altogether, the vertebrae form the spinal column. In the upper trunk the column normally has a gentle outward curve (kyphosis) while the lower back has a reverse inward curve (lordosis).
The Disks. Vertebrae in the spinal column are separated from each other by small cushions of cartilage known as intervertebral disks. Inside each disk is a jelly-like substance called the nucleus pulposus, which is surrounded by a tough, fibrous ring called the annulus fibrosis. The disk is 80% water. This structure makes the disk both elastic and strong. The disks have no blood supply of their own, relying instead on nearby blood vessels to keep them nourished.
Processes. Each vertebra in the spine has a number of bony projections, known as processes. The spinous and transverse processes attach to the muscles in the back and act like little levers, allowing the spine to twist or bend. The articular processes form the joints between the vertebrae themselves, meeting together and interlocking at the zygapophysial joints (more commonly known as facet or z joints).
Spinal Canal. Each vertebra and its processes surround and protect an arch-shaped central opening. These arches, aligned to run down the spine, form the spinal canal, which encloses the spinal cord, the central trunk of nerves that connects the brain with the rest of the body.
Scoliosis is an abnormal curving of the spine. The normal spine has gentle natural curves that round the shoulders and make the lower back curve inward. Scoliosis typically causes deformities of the spinal column and rib cage. In scoliosis, the spine curves from side-to-side to varying degrees, and some of the spinal bones may rotate slightly, making the hips or shoulders appear uneven. It may develop in the following way:
- As a single primary side-to-side curve (resembling the letter C), or
- As two curves (a primary curve along with a compensating secondary curve that forms an S shape)
Scoliosis most commonly develops in the area between the upper back (the thoracic area) and lower back (lumbar area). It may also occur only in the upper or lower back. The doctor attempts to define scoliosis by the following characteristics:
- The shape of the curve
- Its location
- Its direction
- Its magnitude
- Its causes, if possible
The severity of scoliosis is determined by the extent of the spinal curve and the angle of the trunk rotation (ATR). It is usually measured in degrees. Curves of less than 20 degrees are considered mild and account for 80% of scoliosis cases. Curves that progress beyond 20 degrees need medical attention. Such attention, however, usually involves periodic monitoring to make sure the condition is not becoming worse.
Defining Scoliosis by the Shape of the Curve
Scoliosis is often categorized by the shape of the curve, usually as either structural or nonstructural.
- Structural scoliosis: In addition to the spine curving from side to side, the vertebrae rotate, twisting the spine. As it twists, one side of the rib cage is pushed outward so that the spaces between the ribs widen and the shoulder blade protrudes (producing a rib-cage deformity, or hump). The other half of the rib cage is twisted inward, compressing the ribs.
- Nonstructural scoliosis: The curve does not twist but is a simple side-to-side curve.
Other abnormalities of the spine that may occur alone or in combination with scoliosis include hyperkyphosis (an abnormal exaggeration in the backward rounding of the upper spine) and hyperlordosis (an exaggerated forward curving of the lower spine, also called swayback).
Defining Scoliosis by Its Location
The location of a structural curve is defined by the location of the apical vertebra. This is the bone at the highest point (the apex) in the spinal hump. This particular vertebra also undergoes the most severe rotation during the disease process.
Defining Scoliosis by Its Direction
The direction of the curve in structural scoliosis is determined by whether the convex (rounded) side of the curve bends to the right or left. For example, a doctor will diagnose a patient as having right thoracic scoliosis if the apical vertebra is in the thoracic (upper back) region of the spine, and the curve bends to the right.
Defining Scoliosis by Its Magnitude
The magnitude of the curve is determined by taking measurements of the length and angle of the curve on an x-ray view.
| http://www.nytimes.com/health/guides/disease/scoliosis/in-depth-report.html |
4 | Annexation (Latin ad, to, and nexus, joining) is the political transition of land from the control of one entity to another. In international law it is the forcible acquisition of one state's territory by another state, or the legal process by which a city acquires land. Usually, it is implied that the territory and population being annexed is the smaller, more peripheral, and weaker of the two merging entities, barring physical size. It can also imply a certain measure of coercion, expansionism or unilateralism on the part of the stronger of the merging entities. Because of this, more positive euphemisms like political union/unification or reunification are sometimes seen in discourse. Annexation differs from cession and amalgamation: in cession, territory is given or sold through treaty; in amalgamation, the authorities of both sides are asked if they agree with the merger; annexation, by contrast, is a unilateral act in which territory is seized and held by one state and legitimized via general recognition by other international bodies (i.e., countries and intergovernmental organisations).
During World War II, the use of annexation deprived whole populations of the safeguards provided by international laws governing military occupations. The authors of the Fourth Geneva Convention made a point of "giving these rules an absolute character", thus making it much more difficult for a state to bypass international law through the use of annexation.
- 1 International law after 1949
- 2 Examples since 1950
- 3 Subnational annexation
- 4 See also
- 5 References
- 6 Further reading
International law after 1949
The Fourth Geneva Convention (GCIV) of 1949 amplified the Hague Conventions of 1899 and 1907 with respect to the question of the protection of civilians. GCIV also emphasised the United Nations Charter: the United Nations Charter (June 26, 1945) had prohibited war of aggression (See articles 1.1, 2.3, 2.4) and GCIV Article 47, the first paragraph in Section III: Occupied territories, restricted the effects of annexation on the rights of persons within those territories:
Protected persons who are in occupied territory shall not be deprived, in any case or in any manner whatsoever, of the benefits of the present Convention by any change introduced, as the result of the occupation of a territory, into the institutions or government of the said territory, nor by any agreement concluded between the authorities of the occupied territories and the Occupying Power, nor by any annexation by the latter of the whole or part of the occupied territory.
Individual or mass forcible transfers, as well as deportations of protected persons from occupied territory to the territory of the Occupying Power or to that of any other country, occupied or not, are prohibited, regardless of their motive. ... The Occupying Power shall not deport or transfer parts of its own civilian population into the territory it occupies.
Protocol I (1977): "Protocol Additional to the Geneva Conventions of 12 August 1949, and relating to the Protection of Victims of International Armed Conflicts" has additional articles which cover military occupation, but many countries including the United States are not party to this additional protocol.
Examples since 1950
Tibetan representatives in Beijing and the People's Republic of China signed the Seventeen Point Agreement on 23 May 1951, authorizing the PLA presence in Tibet. The Central Tibetan Administration considers it invalid and as having been signed under duress.
In 1954, the residents of Dadra and Nagar Haveli, a Portuguese enclave within India, ended Portuguese rule with the help of nationalist volunteers. From 1954 to 1961, the territory enjoyed de facto independence. In 1961, the territory was merged with India after its government signed an agreement with the Indian government.
In 1961, India and Portugal engaged in a brief military conflict over Portuguese-controlled Goa and Daman and Diu. India invaded and conquered the areas after 36 hours of fighting, ending 451 years of Portuguese colonial rule in India. The action was viewed in India as a liberation of historically Indian territory; in Portugal, however, the loss of both enclaves was seen as a national tragedy. A condemnation of the action by the United Nations Security Council (UNSC) was vetoed by the Soviet Union. Goa and Daman and Diu were incorporated into India.
In 1947, a popular vote rejected Sikkim's joining the Union of India, and Prime Minister Jawaharlal Nehru agreed to a special protectorate status for the kingdom. Sikkim came under the suzerainty of India, which controlled its external affairs, defence, diplomacy and communications, but otherwise retained autonomy. A state council was established in 1955 to allow for constitutional government under the Sikkimese monarch called the Chogyal. Meanwhile, trouble was brewing in the state after the Sikkim National Congress demanded fresh elections and greater representation for the Nepalese. In 1973, riots in front of the palace led to a formal request for protection from India. The Chogyal was proving to be extremely unpopular with the people. In 1975, the Kazi (prime minister) appealed to the Indian Parliament for a change in Sikkim's status so that it could become a state of India. In April, the Indian Army moved into Sikkim, seizing the city of Gangtok and disarming the Palace Guards. A referendum was held in which 97.5% of the voting people (59% of the people entitled to vote) voted to join the Indian Union. A few weeks later, on May 16, 1975, Sikkim officially became the 22nd state of the Indian Union and the monarchy was abolished.
In 1954, former British Ogaden (a Somali Region) was annexed by Abyssinia. Somali nationalists have waged wars of liberation since 1954. Currently, the Ogaden National Liberation Front (ONLF) leads this nationalist effort and is engaged in a fierce military confrontation with Ethiopia, the new name of Abyssinia.
On 18 September 1955 at precisely 10:16 am, in what would be the final territorial expansion of the British Empire, Rockall was declared officially annexed by the British Crown when Lieutenant-Commander Desmond Scott RN, Sergeant Brian Peel RM, Corporal AA Fraser RM, and James Fisher (a civilian naturalist and former Royal Marine) were deposited on the island by a Royal Navy helicopter from HMS Vidal (coincidentally named after the man who first charted the island). The team cemented in a brass plaque on Hall's Ledge and hoisted the Union Flag to stake the UK's claim. However, any effect of this annexation on valuable maritime rights claims under UNCLOS in the waters beyond 12 nautical miles from Rockall is neither claimed by Britain nor recognised by Denmark (for the Faroe Islands), the Republic of Ireland or Iceland.
Western New Guinea
West Papua or Western New Guinea was annexed by Indonesia in 1969 and is the western half of the island of New Guinea and smaller islands to its west. The separatist Free Papua Movement (OPM) has engaged in a small-scale yet bloody conflict with the Indonesian military since the 1960s.
Following an Indonesian invasion in 1975, East Timor was annexed by Indonesia and was known as Timor Timur. It was regarded by Indonesia as the country's 27th province, but this was never recognised by the United Nations. The people of East Timor resisted Indonesian forces in a prolonged guerrilla campaign.
Following a referendum held in 1999 under a UN-sponsored agreement between the two sides, the people of East Timor rejected the offer of autonomy within Indonesia. East Timor achieved independence in 2002 and is now officially known as Timor-Leste.
In 1975, and following the Madrid Accords between Morocco, Mauritania and Spain, the latter withdrew from the territory and ceded the administration to Morocco and Mauritania. This was challenged by an independentist movement, the Polisario Front that waged a guerrilla war against both Morocco and Mauritania. In 1979, and after a military putsch, Mauritania withdrew from the territory which left it controlled by Morocco. A United Nations peace process was initiated in 1991, but it has been stalled, and as of mid-2012, the UN is holding direct negotiations between Morocco and the Polisario front to reach a solution to the conflict.
The part of former Mandatory Palestine occupied by Jordan during the 1948 Arab–Israeli War, which some Jews call "Judea and Samaria", was renamed "the West Bank". It was annexed to Jordan in 1950 at the request of a Palestinian delegation. It had been questioned, however, how representative that delegation was, and at the insistence of the Arab League Jordan was considered a trustee only. Although only the United Kingdom and Pakistan recognized the annexation by Jordan, the British did not consider it sovereign to Jordan. It was not condemned by the UNSC and it remained under Jordanian rule until 1967, when it was occupied by Israel. Jordan did not officially relinquish its claim to rule the West Bank until 1988. Israel has not taken the step of annexing the territory (except for the parts that were made part of the Jerusalem Municipality); rather, it has enacted a complex (and highly controversial) system of military government decrees in effect applying Israeli law in many spheres to Israeli settlements.
During the 1967 Six-Day War, Israel occupied East Jerusalem, a part of the West Bank, from Jordan. On June 27, 1967, Israel unilaterally extended its law and jurisdiction to East Jerusalem and some of the surrounding area, incorporating about 70 square kilometers of territory into the Jerusalem Municipality. Although at the time Israel informed the United Nations that its measures constituted administrative and municipal integration rather than annexation, later rulings by the Israeli Supreme Court indicated that East Jerusalem had become part of Israel. In 1980, Israel passed the Jerusalem Law as part of its Basic Law, which declared Jerusalem the "complete and united" capital of Israel. In other words, Israel purported to annex East Jerusalem. The annexation was declared null and void by UNSC Resolutions 252, 267, 271, 298, 465, 476 and 478.
Jewish neighborhoods have since been built in East Jerusalem, and Israeli Jews have since also settled in Arab neighborhoods there, though some Jews may have returned from their 1948 expulsion after the Battle for Jerusalem (1948).
No countries recognized Israel's annexation of East Jerusalem, except Costa Rica, and those who maintained embassies in Israel did not move them to Jerusalem. The United States Congress has passed the Jerusalem Embassy Act, which recognizes Jerusalem as the united capital of Israel and requires the relocation of the U.S. embassy there, but the bill has been waived by presidents Clinton, Bush, and Obama on national security grounds.
Israel occupied two-thirds of the Golan Heights from Syria during the 1967 Six-Day War, and subsequently built Jewish settlements in the area. In 1981, Israel passed the Golan Heights Law, which extended Israeli "law, jurisdiction, and administration" to the area, including the Shebaa farms area. This declaration was declared "null and void and without international legal effect" by UNSC Resolution 497. The only state that recognized the annexation is the Federated States of Micronesia.
The vast majority of Syrian Druze in Majdal Shams, the largest Syrian village in the Golan, have held onto their Syrian passports. When Israel annexed the Golan Heights in 1981, 95% of the native Syrians refused Israeli citizenship, and are still firmly of that opinion, in spite of the Syrian Civil War.
On 29 November 2012, the United Nations General Assembly reaffirmed it was "[d]eeply concerned that Israel has not withdrawn from the Syrian Golan, which has been under occupation since 1967, contrary to the relevant Security Council and General Assembly resolutions," and "[s]tress[ed] the illegality of the Israeli settlement construction and other activities in the occupied Syrian Golan since 1967." The General Assembly then voted by majority, 110 in favour to 6 against (Canada, Israel, Marshall Islands, Federated States of Micronesia, Palau, United States), with 59 abstentions, to demand a full Israeli withdrawal from the Syrian Golan Heights.
After being allied with Iraq during the Iran–Iraq War (largely due to desiring Iraqi protection from Iran), Kuwait was invaded and declared annexed by Iraq (under Saddam Hussein) in August 1990. Hussein's primary justifications included a charge that Kuwaiti territory was in fact an Iraqi province, and that annexation was retaliation for "economic warfare" Kuwait had waged through slant drilling into Iraq's oil supplies. The monarchy was deposed after annexation, and an Iraqi governor installed.
United States president George H. W. Bush ultimately condemned Iraq's actions, and moved to drive out Iraqi forces. Authorized by the UNSC, an American-led coalition of 34 nations fought the Gulf War to reinstate the Kuwaiti Emir. Iraq's invasion (and annexation) was deemed illegal and Kuwait remains an independent nation today.
In March 2014, Russia annexed most of the Crimean Peninsula, at that time part of Ukraine, and administers the territory as two federal subjects — the Republic of Crimea and the federal city of Sevastopol. Russia rejects the view that this was an annexation and regard it as an accession to the Russian Federation of a state that had just declared independence from Ukraine following a referendum.
Subnational annexation
Within countries that are subdivided noncontiguously, annexation can also take place whereby a lower-tier subdivision can annex territory under the jurisdiction of a higher-tier subdivision. An example of this is in the United States, where incorporated cities and towns often expand their boundaries by annexing unincorporated land adjacent to them. Municipalities can also annex or be annexed by other municipalities, though this is less common in the United States. Laws governing whether and how far cities can expand in this fashion are defined by the individual states' constitutions.
Annexation of neighbouring communities occurs in Canada. The city of Calgary, for example, has in the past annexed the communities of Bridgeland, Riverside, Sunnyside, Hillhurst, Hunter, Hubalta, Ogden, Forest Lawn, Midnapore, Shepard, Montgomery, and Bowness.
References
- Hofmann, Rainer (February 2013). "Annexation". Max Planck Encyclopedia of Public International Law. Oxford University Press.
- Rabin, Jack (2003). Encyclopedia of Public Administration and Public Policy: A-J. CRC Press. pp. 47–. ISBN 9780824709464. Retrieved 30 October 2015.
- One or more of the preceding sentences incorporates text from a publication now in the public domain: Chisholm, Hugh, ed. (1911). "Annexation". Encyclopædia Britannica (11th ed.). Cambridge University Press.
- "Annexation". Encyclopædia Britannica. Encyclopædia Britannica Online. Retrieved 20 March 2014.
- Convention (IV) relative to the Protection of Civilian Persons in Time of War. Geneva, 12 August 1949.Commentary on Part III : Status and treatment of protected persons #Section III : Occupied territories Art. 47 by the ICRC
- "Convention (IV) relative to the Protection of Civilian Persons in Time of War. Geneva, 12 August 1949. Commentary - Art. 47. Part III : Status and treatment of protected persons #Section III : Occupied territories". ICRC. Retrieved 20 March 2014.
it was obvious that they were in fact always subservient to the will of the Occupying Power. Such practices were incompatible with the traditional concept of occupation (as defined in Article 43 of the Hague Regulations of 1907)
- Convention (IV) relative to the Protection of Civilian Persons in Time of War. Geneva, 12 August 1949.Commentary on Part III : Status and treatment of protected persons #Section III : Occupied territories Art. 49 by the ICRC
- Goldstein, Melvyn C. A History of Modern Tibet, 1913-1951: The Demise of the Lamaist State (1989) University of California Press. p.47. ISBN 978-0-520-06140-8
- Powers, John. History as Propaganda: Tibetan Exiles versus the People's Republic of China (2004) Oxford University Press. pp. 116–7. ISBN 978-0-19-517426-7
- "The United Nations Security Council S/5033". www.un.org. Retrieved 28 March 2014.
- BBC staff. "On this day: 21 September 1955: Britain claims Rockall". BBC. Retrieved March 2012.
- Report claims secret genocide in Indonesia – University of Sydney
- Arab League Session: 12-II Date: May 1950
- Marshall J. Berger, Ora Ahimeir (2002). Jerusalem: a city and its future. https://books.google.com/books?id=FGOY5oDGGLUC&pg=PA145: Syracuse University Press. p. 145. ISBN 978-0-8156-2912-2.
- Romano, Amy (2003). A Historical Atlas of Jordan. The Rosen Publishing Group. p. 51. ISBN 978-0-8239-3980-0.
- Sela, Avraham. "Jerusalem." The Continuum Political Encyclopedia of the Middle East. Ed. Avraham Sela. New York: Continuum, 2002. pp. 391-498.
- Frank, Mitch. Understanding the Holy Land: Answering Questions about the Israeli-Palestinian Conflict. New York: Viking, 2005. p. 74.
- "A/35/508-S/14207 of 8 October 1980." UNISPAL - United Nations Information System on the Question of Palestine. 8 October 1980. 8 June 2008
- UNSC Resolutions referred to in UNSC res 476 - 252, 267, 271, 298, 465
- UNSC res 478
- Lustick, Ian S. (16 January 1997). "Has Israel Annexed East Jerusalem?". Middle East Policy Council Journal 5 (1). doi:10.1111/j.1475-4967.1997.tb00247.x.
- "Syria: 'We still feel Syrian,' say Druze of Golan Heights".
- "UN Doc A/67/L.24".
- "Putin signs laws on reunification of Republic of Crimea and Sevastopol with Russia". ITAR TASS. 21 March 2014. Retrieved March 21, 2014.
- "Annexation Policies and Urban Growth Management in Calgary." Tim Creelman. Accessed December 17, 2009.
- History of Annexation. City of Calgary. Accessed December 17, 2009.
Further reading
- Adam Roberts. "Transformative military occupation: applying the laws of war and human rights". The American Journal of International Law, vol. 100, pp. 580–622 (2006) | https://en.wikipedia.org/wiki/Annexed |
4.4375 | Most reactions involving neutral molecules cannot take place at all until they have acquired the energy needed to stretch, bend, or otherwise distort one or more bonds. This critical energy is known as the activation energy of the reaction. Activation energy diagrams of the kind shown below plot the total energy input to a reaction system as it proceeds from reactants to products.
In examining such diagrams, take special note of the following:
Activation energy diagrams can describe both exothermic and endothermic reactions:
The activation energies of the forward reactions can be large, small, or zero (independently, of course, of the value of ΔH):
Processes with zero activation energy most commonly involve the combination of oppositely-charged ions or the pairing up of electrons in free radicals, as in the dimerization of nitric oxide (which is an odd-electron molecule).
In the plot below for the dissociation of bromine, Ea is the enthalpy of atomization in the following reaction:
Br2(g) → 2 Br· (g)
The reaction coordinate corresponds roughly to the stretching of the vibrationally-excited bond.
The hypothetical activated complex is the last, longest "stretch". The reverse reaction, the recombination of two radicals, occurs immediately on contact.
In most cases, the activation energy is supplied by thermal energy, either through intermolecular collisions or (in the case of thermal dissociation) by thermal excitation of a bond-stretching vibration to a sufficiently high quantum level.
As products are formed, the activation energy is returned in the form of vibrational energy which is quickly degraded to heat. It is worth noting, however, that other sources of activation energy are sometimes applicable:
A potential-energy profile is a diagram used to describe the mechanism of a reaction. This diagram is used to better illustrate the concepts of activation energy and the Arrhenius equation, as well as to show the change in potential energy between the reactant and product that occurs during a chemical reaction.
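For reference, here is a minimal sketch of the Arrhenius equation just mentioned (the activation energy and pre-exponential factor below are illustrative assumptions, not values from the text):

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def rate_constant(A, Ea, T):
    """Arrhenius equation: k = A * exp(-Ea / (R * T))."""
    return A * math.exp(-Ea / (R * T))

# Assumed values: Ea = 50 kJ/mol, pre-exponential factor A = 1e13 per second
for T in (298.0, 308.0):
    print(T, rate_constant(1e13, 50000.0, T))
# Raising T by 10 K roughly doubles k: only the small fraction of collisions with
# energy above Ea can cross the barrier, and that fraction grows steeply with T.
```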
For a chemical reaction to occur, there must be a contact (collision) between the reactants. For further details about the reaction rate, see the Definition of A Reaction Rate. As the reaction proceeds, the potential energy rises to a maximum and the reactants form a cluster of atoms, called the activated complex. The highest point on the diagram is the activation energy, Ea, the energy barrier that must be overcome for a reaction to occur. Beyond the maximum, the potential energy decreases as the atoms rearrange in the cluster, until it reaches a constant state of energy. Finally, the products are formed.
The direction of a reversible reaction is determined by the transition state (also known as activated complex). There is a possibility that a collision between reactant molecules may not form products. The outcome depends on the factors mentioned in the transition state theory. If the activated complex can pass the barriers, the product forms. Otherwise, the complex falls apart and reverts to the reactants.
The graph above is an example of an elementary reaction, a single step chemical reaction with a single transition state. It does not matter whether there are one or more reactants or products. A combination of multiple elementary reactions is called a stepwise reaction. The potential energy diagram of this type of reaction involves one or more reaction intermediates. An intermediate is a species that is the product of one step of a reaction and the reactant for the next step.
The rate law of a stepwise reaction is complicated compared with that of an elementary reaction. However, there is only one slow step, the rate-determining step in the reaction. The rate-determining step controls the overall rate of the reaction, as the overall rate cannot proceed any faster than this rate-limiting step. In the potential energy profile, the rate-determining step is the reaction step with the highest energy of transition state (See: Transition State Theory). The diagram below is an example of a stepwise reaction.
For the diagram above, the rate-determining step is the first reaction as the first transition state is higher than the second one. There is one intermediate in this reaction.
For the potential energy profile for a catalyzed reaction, see the articles on activation energy and Gibbs free energy. Note that catalysts increase the reaction rate by decreasing the activation energy of the reaction but do not affect the potential energy of the reactants and products.
The concepts of potential energy and Gibbs free energy are related to each other: Gibbs free energy, G0, is actually a chemical potential energy. Both quantities can be used to measure how far the reaction is from equilibrium. Here are two types of potential energy profiles based on the free energy:
1. Endergonic reaction
2. Exergonic reaction | http://chemwiki.ucdavis.edu/Physical_Chemistry/Kinetics/Modeling_Reaction_Kinetics/Reaction_Profiles?bc=0 |
4.0625 | How to define arc length and how it is different from arc measure; how to calculate the length of an arc.
How to define a central angle and find the measure of its intercepted arc; how to describe the intercepted arcs of congruent chords.
How to define a secant; how to find the length of various secants in circles.
How to prove that an angle inscribed in a semicircle is a right angle; how to solve for arcs and angles formed by a chord drawn to a point of tangency.
How to define the components and length of a position vector in three dimensions.
How to calculate the measure of an inscribed angle.
How to define a circle and identify its main parts.
How to determine the triangle side inequalities.
How to understand and find the length of waves.
The RFLP method of DNA Fingerprinting.
How to derive the formula to calculate the area of a sector in a circle.
How to determine if two triangles in a circle are similar and how to prove that three similar triangles exist in a right triangle with an altitude.
How to classify a triangle based on its side lengths and angle measures.
How to define the midsegment of a trapezoid, calculate its length, and relate it to a triangle midsegment.
How to find the length of tangent segments drawn to a circle from the same point.
How to relate the sides of an angle bisected and the lengths of the opposite side.
How to convert between length and area ratios of similar polygons
How to prove that opposite angles in a cyclic quadrilateral are congruent; how to prove that parallel lines create congruent arcs in a circle.
How to express a position vector algebraically in component form <a,b>, and how to compute its magnitude.
How to express a vector algebraically in terms of the unit vectors i and j. | https://www.brightstorm.com/tag/length-of-an-arc/ |
Climate Change: Incoming Sunlight
Scientists sometimes describe Earth's climate as if it were a machine - a complex system with different cycles that move energy and matter around the planet. For example, the climate system has a water cycle, a carbon cycle and an energy cycle. In this analogy, the Sun is the main source of power for the machine, exceeding the next largest source by almost 10,000 times.
The Sun's rays warm our world, stir air and ocean currents, and catalyze chemical reactions in the atmosphere. The Sun-warmed surface evaporates water to form rainclouds that redistribute fresh water around the world. And sunlight is essential for most life forms that live at Earth's surface. Along with heating Earth, the Sun provides energy directly to plants through photosynthesis, and indirectly to animals and organisms that eat plants.
If Earth had no atmosphere and we had to rely upon the Sun's energy alone, Earth would be a frigid place. Its mean global temperature would be about 0°F. In comparison, if Earth was shrunk to the size of a basketball its atmosphere would be about as thick as a sheet of plastic wrap. Still, our relatively thin blanket of atmosphere is enough to dramatically slow the rate at which heat escapes to space. Specifically, heat-trapping gases in the atmosphere absorb and then re-radiate downward some of the heat given off by the surface and lower atmosphere. With this additional warming - known as the "greenhouse effect" - Earth's mean surface temperature is a comfortable 59°F.
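That roughly 0°F figure can be recovered with a textbook zero-dimensional energy balance (a back-of-envelope sketch, not part of the original article; it assumes a planetary albedo α of about 0.3, the Stefan-Boltzmann constant σ = 5.67 × 10⁻⁸ W m⁻² K⁻⁴, and the satellite-measured solar constant S of about 1,361 W/m² discussed below, with the factor of 4 coming from the ratio of Earth's surface area to the disk that intercepts sunlight):

$$T_{\mathrm{eff}} = \left(\frac{S(1-\alpha)}{4\sigma}\right)^{1/4} = \left(\frac{1361 \times 0.7}{4 \times 5.67 \times 10^{-8}}\right)^{1/4} \approx 255\ \mathrm{K} \approx -18\,^{\circ}\mathrm{C} \approx 0\,^{\circ}\mathrm{F}$$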
[Figure] The graph shows the amount of sunlight entering the top of Earth's atmosphere from 1610 to 2010. Scientists call this quantity "total solar irradiance," expressed here in watts per square meter (W/m²). Space-based measurements, begun in 1978, indicate Earth receives an average of 1,361 W/m² of incoming sunlight, an amount that has varied in the recent past by about 1 W/m² (or one-tenth of one percent) on roughly 11-year cycles. Data courtesy of Greg Kopp, Laboratory for Atmospheric and Space Physics, University of Colorado; and Judith Lean, Space Science Division, Naval Research Laboratory.
The climate system's sensitivity to incoming and outgoing radiation is why scientists are so keenly interested in measuring how much energy comes from the Sun on an ongoing basis. Increases in the Sun's output are typically associated with times of higher solar activity when many small dark patches - sunspots - appear like freckles on the face of the Sun. "Small" is relative, of course, as many sunspots are larger than our entire planet! Sunspots are cooler than the surrounding solar surface (if you can call 7000°F "cool"!), making them appear dark. Though sunspots send less light toward Earth, they are typically surrounded by brighter areas, called faculae, which are a few percent brighter than the average Sun. Observations through several solar cycles reveal that the overall increase in brightness of faculae overpowers the sunspot darkening so that the combined effect of the two causes an increase of about 1 Watt per square meter in incoming sunlight. The 11-year cycle of slight brightening and dimming can be seen in the graph above.
Luckily for us, the amount of energy that the Sun sends to Earth's surface is relatively stable. But this amount of energy is so large that even small fluctuations in the Sun's output may cause significant climate change. For example, evidence suggests that the period of global cooling, known as the "Little Ice Age" (circa 1600-1850), may have been caused in part by a decrease in the Sun's energy output. During one 30-year stretch in the 1600s - the coldest period of the Little Ice Age when winter temperatures in Europe were from 1 to 1.5°C (1.8-2.7°F) colder than average - astronomers observed a total of only 50 sunspots, indicating a very quiet Sun. In contrast, the Sun has been more active in recent decades, displaying 160 sunspots or more in one 11-year cycle alone. Based on sunspot records and other proxy datasets, scientists believe that the Sun's energy output increased slightly between 1900 and 2000.
In 1978, scientists began making the space-based measurements of total solar irradiance needed to understand the Sun's influence on Earth's climate. Space-based measurements are crucial for measuring the Sun's signal undistorted by the thick soup of gases and particles in our atmosphere. Before 1978, the Sun's brightness was generally considered to be constant. Measurements obtained over the past 33 years have helped scientists characterize solar irradiance changes and resulting changes in Earth's temperature. While incoming sunlight may have increased slightly over the last century, this increase accounts for less than 10 percent of the warming our world experienced over that time. Thus, the increase in total solar irradiance alone cannot account for all of the global warming observed since 1900.
Scientists don't yet understand the full range of variance in energy output that the Sun is capable of. So it's crucial that scientists continue monitoring total solar irradiance as an important part of NOAA's overall effort to advance scientific understanding of the Sun and Earth's climate system, and to provide beneficial services for society, such as early warnings whenever solar storms are directed at Earth.
A Primer on Space Weather. NOAA Space Weather Prediction Center. Accessed February 26, 2010.
Scott, Michon. 2009. Sunspots at Solar Maximum and Minimum. NASA Earth Observatory. Accessed March 20, 2009.
Lindsey, Rebecca. 2003. Under a Variable Sun. NASA Earth Observatory. Accessed March 17, 2009.
Kopp, G. and Lean, J.L. 2011. A New, Lower Value of Total Solar Irradiance: Evidence and Climate Significance, Geophys. Res. Letters, Frontier Articles. Vol 38, L01706, doi:10.1029/2010GL045777.
Lean, Judith L. 2010. Cycles and trends in solar irradiance and climate. Wiley Interdisciplinary Reviews: Climate Change. Vol 1, Issue 1. pp 111-122. Dec 22, 2009. doi:10.1002/wcc.018.
Lean, Judith L. and David H. Rind. 2009. How will Earth's surface temperature change in future decades? Geophysical Research Letters. 36, L15708, doi:10.1029/2009GL038932.
Lean, Judith L. and David H. Rind. 2008. How natural and anthropogenic influences alter global and regional temperatures: 1889 to 2006. Geophysical Research Letters. 35, L18701, doi:10.1029/2008GL034864.
Kopp, G., Lawrence, G., and Rottman, G. 2005. "The Total Irradiance Monitor (TIM): Science Results," Solar Physics, 230, 1, pp. 129-140.
Muscheler, Raimund, F. Joos, S.A. Muller, I. Snowball. 2005. "How unusual is today's solar activity?" Nature, v436, pp. E3-E4. (With reply by Solanki et al.)
Wang, Y.-M., J.L. Lean, and N.R. Sheeley, Jr. 2005. "Modeling the Sun's Magnetic Field and Irradiance Since 1713," The Astrophysical Journal, v625, pp. 522-38.
This post was contributed by Mary Catherine of Fun-A-Day!
Make Early Learning Fun!
I’m thrilled that Mary Catherine of Fun-A-Day will be sharing two early learning activities with us each month. Judging by her blog highlighting all of the learning fun she does with her kids and her preschool class, I think we’re in for some great ideas. Let’s see what she has in store today…
Ideas for Early Learning with Beads
One of the best things about teaching young children is that you can use so many different materials to do so! My son and my preschool students have really been into exploring beads the past few months, so I thought I’d share various ways to incorporate early learning into bead play.
12 Ways Beads Promote Early Learning
- Sorting – Children can sort beads by color, shape, size, and type of bead. To extend this concept, they can be encouraged to create their own sorting categories.
- Pattern Making – Beads are great to use for learning about patterns! Kids can copy patterns, extend patterns, and even build their own patterns using any kind of beads you have on hand.
- Counting – Counting, comparing quantities, and associating amounts with numerals are all easily done with beads.
- Exploring Colors & Shapes – While “just playing” with beads, children are able to enhance their color and shape knowledge.
- Exploring the Senses – Beads placed in a sensory table with cups, spoons, and funnels can help children explore the concepts of measurement, volume, and weight. This is also a great way for them to use their senses of sight, hearing, and touch.
- Exploring Simple Science Concepts – Placing beads in a sealed container allows little ones to explore small items safely. For older children, add beads to liquids for further exploration of science concepts.
- Exploring Textures – When added to other sensory materials (like play dough or water), beads give children different textures to touch.
- Letter Learning – Using letter beads, children can learn more about letter names and sounds.
- Name Practice – Letter beads let them practice making their names, as well as names of friends and family.
- Beginning Phonics – Children can use letter beads to make words. They can also explore how to make new words from words they already know (i.e. if you take the ‘b’ off of ‘bed’ and replace it with an ‘r’ you make the new word ‘red’).
- Story-Telling – Using beads and pipe cleaners, children can make story bracelets (about both familiar and new stories). This helps them practice sequencing, comprehension skills, and retelling.
- Fine Motor Skill Practice – Using beads helps children develop fine motor skills, which is very important for writing.
This list illustrates just some of the ways you can use bead play to teach kiddos many different concepts. How have you used beads recently?
Mary Catherine is mama to a budding engineer whose favorite question is “why?” She is a pre-k teacher with a background in teaching kindergarten and a passion for early literacy. Mary Catherine loves lazy days with her son, messy art projects, science fiction books, and dark chocolate. You can find her blogging at Fun-A-Day! Come connect with Mary Catherine on Facebook, Twitter, or Pinterest.
I love the use of simple materials to promote play and early learning. I think I might dig out our bead collection for Priscilla today. Thanks for the inspiration, Mary Catherine!
PROVINCIAL NEW ENGLAND
WITHIN the framework of the British Empire, each colony or group of colonies had its own peculiar problems, its special customs and points of view. In the provincial America of the eighteenth century, New England had a peculiarly clean-cut sectional individuality which was recognized by friends and enemies alike. Radical politicians found it convenient to use New England precedents, while royal governors complained of the spread of "Boston principles" which threatened to undermine the foundations of imperial authority.
In the last decade of the seventeenth century, the settled area of New England was only a small fraction of that now occupied by this group of states. Vermont was still virgin soil, and Maine, then a part of Massachusetts, was scarcely less so; only three of its towns were thought important enough in 1694 to be listed for purposes of taxation, and these were all on the coast within thirty miles of the New Hampshire line. For practical purposes, New Hampshire meant as yet little more than its short ocean frontage and a back country hardly twenty-five miles deep. The upper Merrimac valley was still in dispute between Massachusetts and New Hampshire and actually occupied by neither. From the Merrimac southward and westward around the coast, the colonists were still nearly all within fifty miles of the sea, though a slender line of settlement went up the Connecticut River across Massachusetts, growing very thin at its northern end. Central Massachusetts, as well as the Berkshire country and the adjoining section
Settled area of New England about 1690.
Here's a link to some tutorials that can get you started on learning assembly.
Sigma's Learn Assembly in 28 Days, Day 1.
For more information, read on.
What is Asm?
What exactly is the z80 assembly language? Many of you have roamed through the ticalc.org archives and found an extensive library of z80 assembly programs. Before we learn how to program in it, it is quite important to get an understanding of what assembly language is.
History of Computer Languages
In the beginning of programming, programmers had to write code by manually setting on/off switches. Depending on whether a switch was set on or off, the computer would do something different. Obviously this form of programming is extremely time intensive and not very practical. Why did the first computer designers settle on only two elementary states? Before microprocessors, computers interpreted signals through vacuum tubes, so the only method of communication was whether a signal was present or absent (hence the on/off idea). But enough about the basic elements of programming, on to…
Not long after the development of microchips came the development of assemblers and compilers. These were primitive programs that took user input (the code) and turned it into on/off data that the computer could use. The basis of assembly language is the use of primitive operations to accomplish tasks more quickly than writing in machine language, with a syntax that more closely resembles native language (what you speak and write). One of these languages, z80 assembly, is the basis of programming for the Texas Instruments (TI) series of graphing calculators: the TI-73, -82, -83, -83+, -84+, -86, etc. However, one of the problems with asm programming is that it still lacks structured programming and syntax that closely resembles native language. So, this led to…
Higher Level languages
The zenith of language design (at least for now), higher level languages provide a powerful mixture of structure as well as functionality. Examples of higher level languages include C, C++, Java, Pascal, and Visual Basic. Since higher level languages aren't included in the scope of this site, let's move on.
Assembly vs. TI-Basic
Why should anyone bother to learn assembly? How does assembly compare to TI-Basic? These questions will be answered in this section.
Advantages of Assembly over TI-Basic
- Speed. If programmed correctly, assembly code can run many times faster than TI-Basic. This is because assembly code directly sends instructions to the processor or hardware, allowing for direct response.
- Functionality. Because assembly can poll hardware and memory directly, it has far greater functionality than TI-Basic. This allows assembly programmers to do things that TI-Basic programmers could only dream of.
- Protection. Assembly programs can not normally be altered by a user. This prevents users from accidentally deleting crucial code from the program, as sometimes happens in TI-Basic programs. It also allows you to keep the source a secret if you ever wish to do so.
Disadvantages of Assembly over TI-Basic
- Size. Assembly programs are often larger than TI-Basic programs. This is because TI-Basic programs are composed of tokens that take up ~1 byte each. These tokens are parsed at runtime to perform functions pre-programmed into the TI-OS, saving space in the actual program. On the other hand, assembly programmers have to manually code advanced functions themselves and sometimes will even re-write basic functions.
- Learning curve. Because of the complexity of assembly, it has a rather high learning curve. In order to truly understand how to program efficient assembly code takes a great deal of effort.
- Stability. Because assembly can change itself as well as anything else in memory, it is very unstable and very prone to crashes. Also, assembly programs do not have any way of error checking at runtime and you cannot normally break out of an assembly program. This results in having to reset the RAM many times. It is suggested that assembly programs be tested first on emulators before sending them to a real calculator.
It's almost time to start writing assembly code, but before you do, you will need some things.
- Computer. This will be necessary to actually write the code. I will assume you have access to a computer if you are reading this.
- Calculator. Necessary if you ever want to debug/test a program. Again, I will assume you have one.
- Calculator-Computer link cable. Necessary if you want to transfer data between your computer/calculator.
- Text Editor. You'll need this to write the code.
- Compiler and Linker. Necessary to change your code from assembly into machine language, then into a file that can be run on your calculator.
Integrated Development Environments (IDE's)
IDE's are programs that help you program, compile/link, and debug your program. Although not necessary for programming, it is recommended that you become familiar with a good IDE so that you can take your programming to the next level. Generally IDE's have a graphical user interface (GUI) that will allow you to perform tasks without having to type commands in. For more information on IDE's, see here.
If you are using an IDE, follow its instructions on setting it up and skip the rest of this section or if you prefer to use the more common TASM setup, read on.
For those who want it, I've set up TASM for you. If you want, click here to get the zip file containing all the stuff you'll need. Just pick a folder to extract it to and you can skip the rest of this page. Or, if you want, read the rest just to get some basic information on setting up TASM.
Step 1: getting the necessary files
Here's a list of links for things you'll need:
TI83 Plus include file
Download all of them and put them all into a folder you'll use for all of your asm programming stuff (Example: "C:\Asm\")
Step 2: setting up the folder structure
Create 3 folders inside of "C:\Asm\" named "tasm", "exec", and "source". As the name implies, tasm is where all of the compile/linking stuff goes. So, move the three zip files you downloaded into it and extract them. If you want, delete the extra files included with the last one or move them somewhere else if you want to keep them. Keep everything else.
Step 3: changing the assembly batch file
For time's sake, right click on "asm.bat" in the tasm directory and click on 'edit'. Then, remove everything and change it to this:
echo Syntax: asm [NAME (w/o extension)] [PATH]
@echo off
echo ----- Assembling %1 for the TI-83 Plus...
echo #define TI83P >temp.z80
cd "..\source"
if exist %1.z80 type %1.z80 >>temp.z80
if exist %1.asm type %1.asm >>temp.z80
move /y temp.z80 "../tasm"
cd "..\tasm"
tasm -80 -i -b -l temp.z80 %1.bin %2%1.xlt
if errorlevel 1 goto ERRORS
devpac8x %1
copy %1.8xp %2%1.8xp >nul
echo TI-83 Plus version is %1.8xp
move %1.8xp "..\exec"
goto DONE
:ERRORS
echo ----- There were errors.
:DONE
del temp.z80 >nul
del %1.bin >nul
del %1.xlt >nul
save and close "asm.bat".
To compile your code, open up command prompt (start, run… command prompt) and navigate to the location where you extracted the zip file.
|Command Prompt instruction|What it does|
|cd "directory"|Look for folder "directory" (without quotes) and open it|
|cd "directory1/directory2"|Jump to directory2 inside of directory1|
|cd ..|Jump to the "parent" folder|
|C:, Z:, etc.|Jump to a different hard disk|
|dir|List all folders and files in the current folder you're in|
Once you get to "../asm/tasm", type in "asm sourcename". Sourcename is the name of your source file, which you put in "../asm/source". Don't include the ".z80" at the end.
If there were no errors during compilation, your program should be ready for you in the "../asm/exec" folder. Just send it to your calculator and run "asm(prgmname)".
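To sanity-check the whole toolchain, here is a minimal "hello world" you could save as ../asm/source/hello.z80 and build with "asm hello". Treat it as a sketch in the usual TI-83 Plus style: it assumes your ti83plus.inc defines the system entry points _ClrLCDFull, _HomeUp, and _PutS, and it defines its own bcall macro in case the include file does not already provide one.

.nolist
#include "ti83plus.inc"
#define bcall(label) rst 28h \ .dw label ; system-call macro (assumed; some setups define this already)
.list
.org $9D93                 ; load address for TI-83 Plus asm programs
.db $BB, $6D               ; AsmPrgm token header the OS checks for
    bcall(_ClrLCDFull)     ; clear the home screen
    bcall(_HomeUp)         ; put the cursor in the top-left corner
    ld hl, msg             ; HL -> zero-terminated string
    bcall(_PutS)           ; draw the string at the cursor position
    ret                    ; return to the OS
msg:
    .db "HELLO WORLD", 0
.end

If it assembles and displays the message on your calculator (or, safer, on an emulator first), the folder structure, batch file, and linker are all working.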
That's all there is to setting it up. If you want to actually learn the z80 language, see the other topics in this section to get a glimpse of what it can do.
Roundup of Recent Science Discoveries, 2006
—By Borgna Brunner
After a seven-year, three-billion-mile expedition through the solar system, NASA’s Stardust spacecraft capsule landed in the Arizona desert on Jan. 15, 2006, with an impressive bounty: a canister full of tens of thousands of comet particles and a smattering of interstellar dust, the first such samples ever collected.
Stardust captured the comet particles from Comet Wild-2 (pronounced Vilt) when the spacecraft flew within 149 miles of it on Jan. 2, 2004, roughly in the vicinity of Jupiter. The spacecraft’s collector swept up particles left in the comet’s wake, preserving them in a silicon material called aerogel, which cushioned and protected the particles on their long journey to Earth.
The Stardust samples offer a time capsule to our primordial past. Comets contain some of the oldest material in the solar system, formed out of the remaining dust and gases left over after the solar system’s creation 4.6 billion years ago. Principal investigator Donald Brownlee has commented that “this has been a fantastic opportunity to collect the most primitive material in the solar system. We fully expect some of the comet particles to be older than the Sun.” Michael Zolensky, another Stardust scientist, offers a vivid sense of how intimately connected comets are to an understanding of life on Earth: “It’s like looking at our great-great grandparents. Much of Earth's water and organics—you know, the molecules in our bodies—perhaps came from comets. So these samples will tell us…basically, where our atoms and molecules came from, and how they were delivered to Earth, and in what amount.”
The samples have already defied their expectations. Some contain minerals that could only have been formed at enormously high temperatures. But comets are icy balls thought to have formed far from the Sun, in the outer, frigid regions of the solar system. Brownlee noted that “when these minerals formed they were either red-hot or white-hot grains, and yet we collected them at a comet [from] the Siberia of the solar system.” According to Zolensky, “It suggests that, if these are really from our own sun, they've been ejected out—ballistically out—all the way across the entire solar system and landed out there.…We can't give you all the answers right now. It's just great we have new mysteries to worry about now.”
About 150 scientists around the world are currently studying the comet samples, while an army of amateur scientists has turned its attention to the stardust also collected during the mission. About 65,000 volunteers from the general public will be enlisted to help find images of interstellar dust embedded in the aerogel. The volunteers of Stardust@home will search images delivered to them on the Internet, using so-called virtual microscopes. While the comet dust is visible to the naked eye, the bits of stardust are only a few microns in diameter. Just 40 to 100 grains of stardust are thought to have been collected, and the NASA team has described the search for these particles as "tracking down 45 ants on a football field."
Neither Fish nor Tetrapod
Several times a year popular science articles breathlessly announce the discovery of a “missing link,” but in the case of the recently identified fossils of an odd creature hailing from the Canadian territory of Nunavut, this overused term is compellingly accurate. In an April 2006 issue of Nature, paleontologists revealed the discovery of a 375-million-year-old transitional species whose anatomical traits bridge the gap between fish and tetrapod (four-legged vertebrate). Nicknamed the fishapod, its formal name is Tiktaalik roseae, from the Inuit name for a large shallow-water fish.
Tiktaalik joins several other significant transitional fossils—the most famous of which is Archaeopteryx, the part-bird, part-reptile considered the “missing link” between birds and dinosaurs, which was discovered in 1860, just two years after Darwin published The Origin of Species.
The transformation of aquatic creatures into land animals took place during the Devonian period, about 410 to 356 million years ago. But before the discovery of the 375-million-year-old Tiktaalik fossils, there had been no actual fossil evidence to illustrate this crucial evolutionary moment. According to paleontologist Neil Shubin of the University of Chicago, “We are capturing a very significant transition at a key moment of time. What is significant about the animal is that it is a fossil that blurs the distinction between two forms of life—between an animal that lives in water and an animal that lives on land.”
Tiktaalik resembles a huge scaly fish with a flat, crocodile snout. What amazed scientists was its pectoral fins, which contain bones forming the beginnings of a shoulder, elbow, wrist, hand, and even nascent fingers. Shubin describes the fin as “basically a scale-covered arm,” asserting that “here’s a creature that has a fin that can do push-ups.” Tiktaalik could pull its own weight, dragging itself along in shallow water and onto dry land, much like a seal.
Tiktaalik also distinguishes itself from a fish by the existence of a primitive neck and ribs. As Harvard University paleontologist Farish A. Jenkins explains, “Out of water, these fish encountered gravitational forces very different from the relative buoyancy they enjoyed in an aquatic setting. Restructuring of the body to withstand these forces is evident in the ribs, which are plate-like and overlap like shingles, forming a rigid supporting mechanism for the trunk.” And while a fish has no need of a neck—in water, its entire torso easily falls into place behind its head when changing directions—Tiktaalik's developed neck allowed it to move its head while its body, constrained by the stronger pull of gravity on land, remained stationary. According to Edward Daeschler of the Academy of Natural Sciences, the combination of these radically new anatomical features with classic fishlike traits demonstrates that “evolution proceeds slowly…in a mosaic pattern with some elements changing while others stay the same.”
Journey to the Center of the Earth
For forty years, scientists have attempted to drill deep into the ocean’s crust, an enterprise promising significant insight into the planet’s geological history. In the 1950s, the highly ambitious Mohole project sought to drill seven miles into the sea floor, all the way through the extremely dense ocean crust to the planet’s middle layer, the mantle. But the project was abandoned in 1966 as exorbitantly expensive and impractical—no drill was up to the task, and the project’s deepest hole, after nearly a decade of research, planning, and drilling, reached just 601 feet into the ocean’s crust.
But in April 2006, a team of scientists reported success on a more modest ocean drilling project, which has yielded some impressive findings. Researchers involved in the Integrated Ocean Drilling Program (IODP) managed to drill nearly a mile into the ocean crust, collecting the first intact sample ever of all the crust’s various layers, including the very deepest layer, composed of igneous rock called gabbro. Gabbro has been discovered during other ocean drilling projects—in these cases, geological disturbances had shifted the gabbro closer to the ocean’s surface. But this is the first instance in which gabbro has been found in situ.
The drilling of Hole 1256D, as the 3,796-foot-deep bore hole is unceremoniously named, took place 400 miles west of Costa Rica in the Pacific Ocean, with the 120 scientists housed aboard the research ship JOIDES Resolution. The location was selected because the ocean crust is thinner there than in most parts of the world. The project began in 2002 and involved three separate voyages, nearly six months of drilling at sea, and twenty-five ten-inch-wide, state-of-the-art drill bits. The Integrated Ocean Drilling Program is an international marine program spearheaded by the U.S. and Japan, and involves 20 other countries. It's considered the world's largest earth science program.
Some scientists have dubbed this intact sample the holy grail to unlocking the secrets of the ocean crust, which in turn will bring broader revelations about our planet. According to Jeff Fox, the director of IODP, “The record of the earth’s history is written in greater clarity in the sediments of rocks on the sea floor than anywhere else.”
Million-Dollar Math Problem
In 2000, the Clay Mathematics Institute of Cambridge, Mass., identified seven math problems it deemed the most “important classic questions that have resisted solution over the years.” Several of them had in fact resisted solution for more than a century—the Riemann Hypothesis, for example, has confounded mathematicians since its formulation in 1859. To create a bit of frisson among the public for the so-called Millennium Prize Problems, the Clay Institute announced it would offer a one-million-dollar reward apiece for solutions to the problems. While a layperson might have a tough time penetrating the quantum physics behind the “Yang-Mills existence and mass gap problem,” they have no difficulty understanding the meaning of the number 1 followed by 6 zeros and preceded by a dollar sign.
Seven years after the Clay Institute announced its challenge, the century-old Poincaré Conjecture, one of the thorniest of the millennium problems, is thought to have been solved. Ever since French mathematician Henri Poincaré posed the conjecture in 1904, at least a half-dozen eminent mathematicians—and many lesser ones—have tried and failed to crack the problem. But a series of three papers on the conjecture posted online in 2002 and 2003 by Russian Grigory Perelman have successfully withstood intense scrutiny by the mathematical community for the past four years—twice the number of years of public examination required by the Clay Institute.
Poincaré's Conjecture deals with the branch of math called topology, which is the study of shapes, spaces, and surfaces. The Clay Institute offers this deceptively friendly-sounding doughnut-and-apple explication of the bedeviling problem:
If we stretch a rubber band around the surface of an apple, then we can shrink it down to a point by moving it slowly, without tearing it and without allowing it to leave the surface. On the other hand, if we imagine that the same rubber band has somehow been stretched in the appropriate direction around a doughnut, then there is no way of shrinking it to a point without breaking either the rubber band or the doughnut. We say the surface of the apple is "simply connected," but that the surface of the doughnut is not. Poincaré, almost a hundred years ago, knew that a two-dimensional sphere is essentially characterized by this property of simple connectivity, and asked the corresponding question for the three-dimensional sphere (the set of points in four-dimensional space at unit distance from the origin).
The resolution of Poincaré's Conjecture will have enormous implications for our understanding of relativity and the shape of space.
But while mathematicians are hailing this as potentially the biggest breakthrough since Andrew Wiles solved Fermat's Last Theorem in 1994, Grigory Perelman himself has taken a decidedly standoffish attitude to his accomplishment. He has shown no interest in collecting the million-dollar prize, and instead of publishing his solution in a “refereed mathematics publication of worldwide repute,” as the Clay Institute requires, he simply published his papers online. His proof never even mentions Poincaré by name and is presented in such a sketchy and elliptical fashion that it resembles guidelines for proving the conjecture more than an actual proof. In 2006, three groups of mathematicians published papers that fill in the gaps left by Perelman's unorthodox solution. Mathematicians disagree, however, as to whether any of these papers actually add substantially to solving of the conjecture or simply explicate Perelman's work.
On Aug. 22, the International Congress of Mathematicians awarded Perelman the enormously prestigious Fields medal for his solution to the Poincaré Conjecture as well as for other significant contributions. Perelman refused to attend the conference and rejected the award. Serge Rukshin, Perelman's former teacher, described him as "a devoted scientist in the pure sense of the word. He believes that the most important thing is that the problem is solved." Perelman hasn't entirely ruled out the Millennium Prize, however, commenting that "I'm not going to decide whether to accept the prize until it is offered."
Fractions are comprised of a numerator and denominator, and when two fractions have the same number for a denominator, it is known as a common, or like, denominator. Adding fractions together when they have a common denominator is easy to do, because you can just add all the numerators together! The new fractions will use the same original denominator, so all you have to worry about is adding the numbers above the line. The same is true for subtracting fractions that have common denominators. Things get a little trickier when the fractions don't have the same denominator, but they can still be added or subtracted by finding a common denominator first.
Adding Fractions With Common Denominators
1. Recognize the numerator and denominator. There are two parts to all fractions: the numerator, which is the number above the line, and the denominator, which is the number below the line. Where the denominator tells you how many parts a whole has been broken into, the numerator tells you how many pieces of that whole there are.
- In the fraction ½, for instance, the numerator = 1 and the denominator = 2, and the fraction is one-half.
2. Determine the denominator. When two or more fractions have a common denominator, it means they all have the same number as a denominator, or that they all represent wholes that have been broken into the same number of pieces. Fractions with a common denominator can be added together very easily, and the resulting fraction will have the same denominator as the original fractions. For instance:
- The fractions 3/5 and 2/5 have a common denominator of 5.
- The fractions 3/8, 5/8, and 17/8 have a common denominator of 8.
3. Locate the numerators. To add fractions together when they have a common denominator, you simply add all the numerators together and rewrite the sum over the original denominator.
- In the fractions 3/5 and 2/5, the numerators are 3 and 2.
- In the fractions 3/8, 5/8, and 17/8, the numerators are 3, 5, and 17.
4. Add the numerators. In the example of 3/5 + 2/5, add the numerators: 3 + 2 = 5. In the example 3/8 + 5/8 + 17/8, add the numerators: 3 + 5 + 17 = 25.
5. Rewrite the fraction with the new numerator. Remember to use the same common denominator, since the number of parts that the whole is divided into remains the same, and you are just adding the number of individual pieces.
- The fractions 3/5 + 2/5 = 5/5
- The fractions 3/8 + 5/8 + 17/8 = 25/8
6. Solve the fraction if necessary. Sometimes a fraction can be put into simpler terms, and in some cases it can be converted to a whole number. In the example 5/5, this fraction can be solved easily, because any fraction where the numerator and denominator are the same will equal 1. Think about it like a pie that's been cut into five pieces. If you eat all five pieces of the pie, then you've eaten one whole pie.
- Any fraction can also be evaluated by dividing the numerator by the denominator, which will often give you a decimal number. For instance, 5/8 can be written as 5 ÷ 8, which equals 0.625.
7. Reduce the fraction if you can. A fraction is said to be in its simplest form when the numerator and the denominator don't have any common factors they can be divided by.
- For instance, in the fraction 3/6, both the numerator and denominator have a common factor of 3, meaning each can be divided evenly by 3. Therefore, the fraction 3/6 can be rewritten as (3 ÷ 3)/(6 ÷ 3) = 1/2.
8. Convert improper fractions to mixed numbers if necessary. When a fraction has a numerator that's bigger than the denominator, such as 25/8, it is said to be an improper fraction (the reverse, when the numerator is smaller than the denominator, is a proper fraction). An improper fraction can be converted into a mixed number, which is a whole number plus a proper fraction. To convert an improper fraction like 25/8 to a mixed number, you:
- Divide the improper fraction's numerator by its denominator to determine how many whole times 8 goes into 25: here 25 ÷ 8 = 3, with some left over.
- Determine what's left over. Since 8 x 3 = 24, subtract that from the original numerator: 25 – 24 = 1, and the difference is the new numerator.
- Rewrite the mixed number. The denominator stays the same as in the original improper fraction, so 25/8 can be rewritten as 3 1/8.
Subtracting Fractions With Common Denominators
1. Locate the numerators and denominators. For instance, look at the equation 12/26 – 4/26 – 1/26. In this example:
- The numerators are 12, 4, and 1
- The common denominator is 26
2. Subtract the numerators. Like with addition, you don't have to worry about doing anything to the denominator, so just find the difference between the numerators:
- 12 – 4 – 1 = 7
- Rewrite the fraction with the new numerator. 12/26 – 4/26 – 1/26 = 7/26.
3. Reduce or solve the fraction if necessary. Similar to adding fractions, when you subtract fractions you can still end up with:
- An improper fraction that can be converted to a mixed number
- A fraction that can be solved through division
- A fraction that can be put into a simpler form by finding a common denominator
Finding a Common Denominator
1. Locate the denominators. Fractions don't always have the same denominators, and in order to add or subtract those fractions, you must first find a common denominator. To start, locate the denominators in the fractions you're dealing with.
- For instance, in the equation 5/8 + 6/9, the denominators are 8 and 9.
2. Determine the least common multiple. To find a common denominator, you need to find the least common multiple of the two numbers, which is the smallest positive number that's a multiple of both original numbers. To find the least common multiple of 8 and 9, you must first go through the multiples of each number:
- The multiples of 8 are: 8, 16, 24, 32, 40, 48, 56, 64, 72, 80, 88, 96, 104, etc.
- The multiples of 9 are: 9, 18, 27, 36, 45, 54, 63, 72, 81, 90, 99, 108, etc.
- The least common multiple of 8 and 9 is 72.
3. Multiply the fractions to achieve the least common multiple. Multiply each denominator by the correct number to achieve the common denominator. Remember that whatever you do to each denominator, you must also do to its numerator.
- For the fraction 5/8: to achieve the common denominator of 72, you multiply 8 x 9. Therefore, you must also multiply the numerator by 9, giving you 5 x 9 = 45
- For the fraction 6/9: to achieve the common denominator of 72, you multiply 9 x 8. Therefore, you must also multiply the numerator by 8, giving you 6 x 8 = 48
4. Rewrite the fractions. Each rewritten fraction keeps the common denominator and has its numerator multiplied by the same value used on its denominator:
- The fraction 5/8 becomes 45/72, and the fraction 6/9 becomes 48/72.
- Since they now have a common denominator, you can add the fractions 45/72 + 48/72 = 93/72.
- Don’t forget to reduce, solve, or convert improper fractions to mixed numbers when applicable and necessary.
First Schleswig War
The First Schleswig War (German: Schleswig-Holsteinischer Krieg) or Three Years' War (Danish: Treårskrigen) was the first round of military conflict in southern Denmark and northern Germany rooted in the Schleswig-Holstein Question, contesting the issue of who should control the Duchies of Schleswig and Holstein. The war, which lasted from 1848 to 1851, also involved troops from Prussia and Sweden. Ultimately, the war resulted in a Danish victory. A second conflict, the Second Schleswig War, erupted in 1864.
At the beginning of 1848, Denmark contained the Duchy of Schleswig and controlled the duchies of Holstein and Saxe-Lauenburg in the German Confederation. These were where the majority of the ethnic Germans in Denmark lived. Germans made up a third of the country's population, and the three duchies accounted for half of Denmark's economic power. The Napoleonic Wars, which ended in 1815, had increased Danish and German nationalism. Pan-German ideology had become highly influential in the decades prior to the outbreak of the war, and writers such as Jacob Grimm argued that the entire Peninsula of Jutland had been populated by Germans before the arrival of the Danes and that therefore it could justifiably be reclaimed by Germany. These claims were countered in pamphlets by Jens Jacob Asmussen Worsaae, an archaeologist who had excavated parts of Danevirke, who argued that there was no way of knowing the language of the earliest inhabitants of Danish territory, that Germans had more solid historical claims to large parts of France and England, and that Slavs by the same reasoning could annex parts of Eastern Germany.
The conflicting aims of Danish and German nationalists were a cause of the First Schleswig War. Danish nationalists believed that Schleswig, but not Holstein, should be a part of Denmark, as Schleswig contained a large number of Danes, whilst Holstein did not. German nationalists believed that Schleswig, Holstein, and Lauenburg should remain united, and their belief that Schleswig and Holstein should not be separated led to the two duchies being referred to as Schleswig-Holstein. Schleswig was a particular source of contention, as it contained a large number of Danes, Germans, and North Frisians. Another cause of the war was the illegal introduction of a royal law in the duchies.
King Christian VIII of Denmark died in January 1848, and his only legitimate son, the new king Frederick VII, was apparently unable to beget heirs. Because succession in the duchies followed the male line only, they could eventually have passed out of the Danish king's hands to another branch of the House of Oldenburg, which might have resulted in a division of Denmark. As a result, a royal law was decreed in the duchies that would allow a female relative of Christian VIII to assume control. The implementation of this law was illegal.
The Slesvig-Holsteiners, inspired by the success of the February 1848 revolution in Paris, sent a deputation to Copenhagen to demand that King Frederick VII immediately recognize a joint state of Slesvig-Holstein prior to its admittance into the German Confederation. King Frederick's reply, in which he admitted the right of Holstein, as a German confederate state, to be guided by the decrees of the Frankfurt diet, but declared that he had neither "the power, right, nor wish" to incorporate Slesvig into the confederation, was immediately followed, or perhaps even preceded, by an outbreak of open rebellion.
Schleswig-Holsteinian Prince Frederik of Noer took the 5th "Lauenburger" Rifle Corps (Jägerkorps) and some students of Kiel university to take over the fortress of Rendsburg in Schleswig-Holstein. The fortress contained the main armoury of the duchies, as well as the 14th, 15th, and 16th Infantry Battalions, the 2nd Regiment of Artillery, and some military engineers. When Noer's force arrived, they found that the gates to the fortress had been left open for an unknown reason and promptly walked in, surprising the would-be defenders. After delivering a speech to the defenders, the prince secured the allegiance of the battalions and regiment of artillery to the provisional government. Danish officers who had been serving in the defence of the fortress were allowed to leave for Denmark on the assurance that they would not fight against Schleswig-Holstein in the coming war.
Course of the war
Wishing to defeat Denmark before Prussian, Austrian, and other German troops arrived in support, 7,000 Schleswig-Holstein soldiers under General Krohn occupied Flensborg on March 31. Over 7,000 Danish soldiers landed east of the city, and Krohn, fearing he would be surrounded, ordered his forces to withdraw. The Danes were able to reach the Schleswig-Holsteiners before they could retreat, and the subsequent Battle of Bov on April 9 was a Danish victory. At the battle, the Prince of Noer, senior commander of the Schleswig-Holstein forces, did not arrive until two hours after fighting had started, and the Schleswig-Holsteiners were more prepared for the withdrawal they had intended to make than for an engagement.
1848

- April 12: The diet recognized the provisional government of Schleswig and commissioned Prussia to enforce its decrees. General Wrangel was also ordered to occupy Schleswig.
- April 23: Prussian victory in battle at Schleswig.
- April 23: German victory in battle at Mysunde.
- April 24: Prussian victory in battle at Oeversee.
- May 27: Battle at Sundeved.
- May 28: Battle at Nybøl.
- June 5: Danish victory over Germans in battle at Dybbøl Hill.
- June 7: Battle at Hoptrup.
- June 30: Battle at Bjerning.
The Germans had embarked on this course of participation in the Schleswig-Holstein War alone, without the European powers. The other European powers were united in opposing any dismemberment of Denmark, with even Austria refusing to assist in enforcing the German view. Swedish troops landed to assist the Danes; Tsar Nicholas I of Russia, speaking with authority as head of the senior Gottorp line, pointed out to King Frederick William IV of Prussia the risks of a collision. Great Britain, though the Danes had rejected her mediation, threatened to send her fleet to assist in preserving the status quo. The fact that Prussia had entered the war on behalf of the revolutionary forces in Schleswig-Holstein created a great number of ironies. The newly elected Frankfurt Diet tended to support the incursion into the Schleswig-Holstein War, while King Frederick William did not. Indeed, Frederick William ordered Friedrich von Wrangel, commanding the army of the German Confederation, to withdraw his troops from the duchies; but the general refused, asserting that he answered not to the King of Prussia but to the regent of Germany acting for the German Confederation. Wrangel proposed that, at the very least, any treaty concluded should be presented for ratification to the Frankfurt Parliament. The Danes rejected this proposal and negotiations were broken off. Prussia was now confronted on the one side by the German nation urging her clamorously to action, and on the other side by the European powers threatening dire consequences should she persist. After painful hesitation, Frederick William chose what seemed the lesser of two evils, and, on 26 August, Prussia signed a convention at Malmö which yielded to practically all the Danish demands. The Holstein estates appealed to the German diet, which hotly took up their cause, but it was soon clear that the central government had no means of enforcing its views. In the end the convention was ratified at Frankfurt. The convention was essentially nothing more than a truce establishing a temporary modus vivendi. The main issues, left unsettled, continued to be hotly debated.
In October, at a conference in London, Denmark suggested an arrangement on the basis of a separation of Schleswig from Holstein, which was about to become a member of a new German empire, with Schleswig having a separate constitution under the Danish crown.
1849

- 27 January: The London conference result was supported by Great Britain and Russia and accepted by Prussia and the German parliament. The negotiations broke down, however, on the refusal of Denmark to yield the principle of the indissoluble union with the Danish crown.
- 23 February: The truce came to an end.
- 3 April: The war was renewed. At this point Nicholas I intervened in favour of peace. However, Prussia, conscious of her restored strength and weary of the intractable temper of the Frankfurt parliament, determined to take matters into her own hands.
- 3 April: Danish victory over Schleswig-Holstein forces in battle at Adsbøl.
- 6 April: Battles at Ullerup and Avnbøl.
- 13 April: Danish victory over Saxon forces in battle at Dybbøl.
- 23 April: Battle at Kolding.
- 31 May: Danes stop Prussian advance through Jutland in cavalry battle at Vejlby.
- 4 June: inconclusive Battle of Heligoland (1849), the only naval combat of the war
- 6 July: Danish victory in sortie from Fredericia.
- 10 July: Another truce was signed. Schleswig, until the peace, was to be administered separately, under a mixed commission; Holstein was to be governed by a vicegerent of the German empire (an arrangement equally offensive to German and Danish sentiment). A settlement seemed as far off as ever. The Danes still clamoured for the principle of succession in the female line and union with Denmark, the Germans for that of succession in the male line and union with Holstein.
1850

In April 1850, Prussia, which had pulled out of the war after the armistice of Malmö, proposed a definitive peace on the basis of the status quo ante bellum and postponement of all questions as to mutual rights. To Palmerston the basis seemed meaningless and the proposed settlement would settle nothing. Nicholas I, openly disgusted with Frederick William's submission to the Frankfurt Parliament, again intervened. To him Duke Christian of Augustenborg was a rebel.
Russia had guaranteed Schleswig to the Danish crown by the treaties of 1767 and 1773. As for Holstein, if the King of Denmark could not deal with the rebels there, he himself would intervene as he had done in Hungary. The threat was reinforced by the menace of the European situation. Austria and Prussia were on the verge of war, and the sole hope of preventing Russia from entering such a war on the side of Austria lay in settling the Schleswig-Holstein question in a manner desirable to her. The only alternative, an alliance with the hated Napoleon Bonaparte's nephew, Louis Napoleon, who was already dreaming of acquiring the Rhine frontier for France in return for his aid in establishing German sea-power by the ceding of the duchies, was abhorrent to Frederick William.
- 8 April: Karl Wilhelm von Willisen became the supreme commander of the Schleswig-Holstein army.
- 2 July: A treaty of peace between Prussia and Denmark was signed at Berlin. Both parties reserved all their antecedent rights. Denmark was satisfied that the treaty empowered the King of Denmark to restore his authority in Holstein with or without the consent of the German Confederation. Danish troops now marched in to coerce the refractory duchies. While the fighting went on, negotiations among the powers continued.
- 24–25 July: Danish victory in the Battle of Idstedt.
- 28 July: Danish victory in cavalry battle at Jagel.
- 2 August: Great Britain, France, Russia and Sweden-Norway signed a protocol, to which Austria subsequently adhered, approving the principle of restoring the integrity of the Danish monarchy.
- 12 September: Battle at Missunde.
- 4 October: Danish forces resist German siege at Friedrichstadt.
- 24 November: Battle of Lottorf
- 31 December: Skirmish at Möhlhorst.
1851

- May: The Copenhagen government made an abortive attempt to come to an understanding with the inhabitants of the duchies by convening an assembly of notables at Flensburg.
- 6 December 1851: The Copenhagen government announced a project for the future organization of the monarchy on the basis of the equality of its constituent states, with a common ministry.
1852

- 28 January: A royal letter announced the institution of a unitary state which, while maintaining the fundamental constitution of Denmark, would increase the parliamentary powers of the estates of the two duchies. This proclamation was approved by Prussia and Austria, and by the German confederal diet insofar as it affected Holstein and Lauenburg. The question of the Augustenborg succession made an agreement between the powers impossible.
- 31 March: The Duke of Augustenborg resigned his claim in return for a money payment. Further adjustments followed.
- 8 May: another London Protocol was signed. The international treaty that became known as the "London Protocol" was the revision of the earlier protocol, which had been ratified on August 2, 1850, by the major Germanic powers of Austria and Prussia. The second, actual London Protocol was recognized by the five major European powers (the Austrian Empire, the Second French Republic, the Kingdom of Prussia, the Russian Empire, and the United Kingdom of Great Britain and Ireland), as well as the two major Baltic Sea powers of Denmark and Sweden.
The Protocol affirmed the integrity of the Danish monarchy as a "European necessity and standing principle". Accordingly, the duchies of Schleswig (a Danish fief) and of Holstein and Lauenburg (sovereign states within the German Confederation) were joined by personal union with the King of Denmark. For this purpose, the line of succession to the duchies was modified, because Frederick VII of Denmark remained childless and hence a change in dynasty was in order. (The originally conflicting protocols of succession between the duchies and Denmark would have stipulated that, contrary to the treaty, the duchies of Holstein and Lauenburg would have had heads of state other than the King of Denmark.) Further, it was affirmed that the duchies were to remain as independent entities, and that Schleswig would have no greater constitutional affinity to Denmark than Holstein.
This settlement did not resolve the issue, as the German Diet steadfastly refused to recognize the treaty and asserted that the law of 1650 was still in force, by which the duchies were united not to the state of Denmark but only to the direct line of the Danish kings, and were to revert on its extinction not to the branch of Glücksburg but to the German ducal family of Augustenburg. Only twelve years passed before the Second Schleswig War in 1864 resulted in the incorporation of both duchies into the German Confederation, and later, in 1871, into the German Empire.
- History of Schleswig-Holstein
- Schleswig-Holstein Question
- Second Schleswig War
- German exonyms for places in Denmark
- Wars and battles involving Prussia
- Revolutions of 1848
- Rowly-Conwy, Peter. "THE CONCEPT OF PREHISTORY AND THE INVENTION OF THE TERMS ‘PREHISTORIC’ AND ‘PREHISTORIAN’: THE SCANDINAVIAN ORIGIN, 1833–1850". European Journal of Archaeology 9 (1): 103–130. doi:10.1177/1461957107077709.
- Schlürmann, Jan. "The Schleswig-Holstein Rebellion". Retrieved 2008-07-17.
- First Schleswig- Holstein War First War of the Danish Duchies
- Stenild, Jesper. "Battle of Bov – 9th of April 1848". Retrieved 2008-07-17.
- First Schleswig- Holstein War First War of the Danish Duchies
- Price, Arnold. "Schleswig-Holstein" in Encyclopedia of 1848 Revolutions (2005) online
- Steefel, Lawrence D. The Schleswig-Holstein Question. 1863–1864 (Harvard U.P. 1923).
- Svendsen, Nick "The First Schleswig-Holstein War 1848–50"
|Wikimedia Commons has media related to First Schleswig War.|
- Guns used at the Battle of Fredericia
- Time-line of Danish history
- Die Schlacht bei Idstedt (German)
- Die Schlacht bei Idstedt im Jahre 1850 (German)
- Painting of the Battle of Isted
- Maps of Europe during the First Schleswig War (omniatlas)
- First Schleswig-Holstein War - First War of the Danish Duchies
Most of the excess carbon dioxide pouring into the atmosphere from the burning of fossil fuels will ultimately be absorbed by the oceans, but it will take about 100,000 years. That is how long it took for ocean chemistry to recover from a massive input of carbon dioxide 55 million years ago, according to a study published this week in the journal Science.
James Zachos, professor of Earth sciences at the University of California, Santa Cruz, led an international team of scientists that analyzed marine sediments deposited during a period of extreme global warming known as the Paleocene-Eocene Thermal Maximum (PETM). Sediment cores drilled from the ocean floor revealed an abrupt change in ocean chemistry at the start of the PETM 55 million years ago, followed by a long, slow recovery.
"Most people have not thought about the long-term fate of all that carbon and what's involved in removing it from the system. There is a long timescale for the recovery, tens of thousands of years before atmospheric carbon dioxide will start to come back down to preindustrial levels," Zachos said.
Earlier studies using computers to run numerical models of Earth's carbon cycle have calculated similarly long timescales for absorption of the carbon dioxide currently being released into the atmosphere from fossil fuels, he said.
"Our findings are consistent with what the models have been showing for years. What we found validates those geochemical models," Zachos said.
The oceans have a tremendous capacity to absorb carbon dioxide from the atmosphere. Results from a large international research effort published last year indicated that the oceans have already absorbed nearly half of the carbon dioxide produced by humans in the past 200 years--about 120 billion metric tons of carbon.
When carbon dioxide dissolves in water it makes the water more acidic. Ocean acidification starts at the surface and spreads to the deep sea as surface waters mix with deeper layers. The sediment cores studied by Zachos and his coworkers showed the effects of a rapid acidification of the ocean during the PETM. The acidification was more severe than they had expected, suggesting that the amount of carbon dioxide that entered the atmosphere and triggered global warming during the PETM was much greater than previously thought.
The leading explanation for the PETM is a massive release of methane from frozen deposits found in the deep ocean near continental margins. The methane reacted with oxygen to produce huge amounts of carbon dioxide. Both methane and carbon dioxide are potent greenhouse gases and caused temperatures to soar during the PETM. Average global temperatures increased by about 9 degrees Fahrenheit (5 degrees Celsius), and the fossil record shows dramatic changes during this time in plant and animal life, both on land and in the oceans.
Previous estimates for the amount of greenhouse gas released into the atmosphere during the PETM were around 2,000 billion tons of carbon. Zachos said at least twice that much would be required to produce the changes observed in this new study.
"This is similar to the estimated flux from fossil fuel combustion over the next three centuries," he said. "If we combust all known fossil fuel reserves, that's about 4,500 billion tons of carbon. And now we know that the recovery time for a comparable release of carbon in the past was about 100,000 years."
The study's conclusions hinge on the effects of ocean acidification on the chemistry of calcium carbonate, the mineral from which certain kinds of phytoplankton (microscopic algae) and other marine organisms build their shells. When these organisms die, their shells rain down onto the seafloor. Marine sediments are typically rich in calcium carbonate from these shells, but increased acidity causes it to dissolve. The dissolution of calcium carbonate enables the ocean to store large amounts of carbon dioxide in the form of bicarbonate ions.
"The calcium carbonate sitting on the seafloor increases the ocean's buffering capacity, so that it can eventually neutralize most of the changes in acidity caused by the carbon dioxide accumulating in the atmosphere," Zachos said.
Sediments deposited at the start of the PETM show an abrupt transition from carbonate-rich ooze to a dark-red clay layer in which the carbonate shells are completely gone. Above the clay layer, the carbonates gradually begin to reappear.
This transition at the Paleocene-Eocene boundary was already well known from previous studies of sediment cores by Zachos and others. The new Science paper, however, presents the first results from a series of sediment cores covering the PETM over a broad range of depths in the ocean. The cores were recovered in 2003 from Walvis Ridge in the southeastern Atlantic Ocean.
This series of sediment cores enabled the researchers to trace changes in ocean chemistry over time at different depths in the ocean. This is important because the chemical equilibrium between solid calcium carbonate (calcite) and dissolved calcium and carbonate ions changes with depth. The dissolution of calcite increases not only with acidity, but also at the colder temperatures and higher pressures found in the deep ocean.
At a certain depth--currently 4 kilometers (about 2.5 miles) in the southern Atlantic--the calcite shells of dead plankton drifting down from the surface waters begin to dissolve. The point at which the dissolution rate exceeds the supply rate of calcite from above is called the carbonate compensation depth (CCD). The distinctive layers of clay that mark the PETM in sediment cores indicate that those sites were below the CCD at the time those sediments accumulated.
In the series of sediment cores from different depths on Walvis Ridge, Zachos and his coworkers observed a rapid shoaling (rising toward the surface) of the CCD due to the acidification of ocean waters.
"The CCD shoaled quickly from below the deepest site to above the shallowest site, producing a clay layer with no carbonate. And then the carbonate starts to reappear, first at the shallowest site, then deeper, eventually reaching the deepest site," Zachos said. "The time lag before the carbonates start to reappear is about 40 to 50 thousand years, and then it's another 40 thousand years before you see the normal carbonate-rich ooze again."
The dissolution of calcium carbonate provides only temporary storage of carbon dioxide. When the dissolved ions recombine to form calcite again, carbon dioxide is released. The long-term storage of carbon dioxide is accomplished through chemical weathering of silicate rocks, such as granite and basalt, on the land. As weathering removes carbon dioxide, however, the same buffering process that slowed the accumulation of carbon dioxide in the atmosphere starts to operate in reverse, gradually releasing stored carbon from the ocean back into the atmosphere.
"The ocean's role is to act like a temporary store for the carbon until these chemical weathering processes can remove it from the system. This is the theory of ocean carbonate chemistry that we were taught in graduate school, and here is a case study where you can actually see it happen," Zachos said.
These changes in ocean chemistry during the PETM coincided with a sharp reduction in marine biodiversity. For example, many species of bottom-dwelling foraminifera that form calcite shells went extinct, possibly as a direct result of ocean acidification, Zachos said.
Within the past year, scientists have begun to detect similar changes in ocean chemistry in response to the rise in atmospheric carbon dioxide from fossil fuel consumption and other human activities. Researchers have also begun to worry about the potential ecological effects of ocean acidification. Whatever the effects may be of current increases in atmospheric carbon dioxide, we will probably have to live with them for a long time.
"Even after humans stop burning fossil fuels, the impacts will be long-lasting," Zachos said.
In addition to Zachos, the authors of the Science paper are Ursula Röhl of the University of Bremen, Germany; Stephen Schellenberg of San Diego State University; Appy Sluijs of Utrecht University, The Netherlands; David Hodell of the University of Florida; Daniel Kelly of the University of Wisconsin, Madison; Ellen Thomas of Wesleyan University and Yale University; Micah Nicolo of Rice University; Isabella Raffi of G. d'Annunzio University, Italy; Lucas Lourens of Utrecht University; Heather McCarren, a graduate student in Earth sciences at UC Santa Cruz; and Dick Kroon of Vrije University, The Netherlands.
This study was conducted as part of a five-year interdisciplinary project funded by the National Science Foundation to investigate the consequences of greenhouse warming for biocomplexity and biogeochemical cycles. In a related study led by Lourens and published this week in the journal Nature, the researchers reported on a similar, less extreme global warming event that occurred 2 million years after the PETM.
Source: University of California, Santa Cruz
Planctomycetes are a phylum of aquatic bacteria found in brackish, marine, and fresh water. They reproduce by budding. In structure, the organisms of this group are ovoid, with a holdfast at the tip of a thin cylindrical extension of the cell body, called the stalk, at the nonreproductive end; it helps them attach to one another during budding.
For a long time, bacteria belonging to this group were considered to lack peptidoglycan (also called murein) in their cell walls; peptidoglycan is an important heteropolymer, present in most bacterial cell walls, that serves as a protective component. It was thought that their walls were instead made up of a glutamate-rich glycoprotein. Recently, however, representatives of all three clades within the Planctomycetes were found to possess peptidoglycan-containing cell walls.
Planctomycetes have a distinctive morphology with the appearance of membrane-bound internal compartments, often referred to as the paryphoplasm (ribosome-free space), pirellulosome (ribosome-containing space) and nucleoid (condensed nucleic acid region, in these species surrounded by a double membrane). Until the discovery of the Poribacteria, planctomycetes were the only bacteria known with these apparent internal compartments. Three-dimensional electron tomography reconstruction of a representative species, Gemmata obscuriglobus, has yielded varying interpretations of this observation. One 2013 study found the appearance of internal compartments to be due to a densely invaginated but continuous single membrane, concluding that only the two compartments typical of Gram-negative bacteria - the cytoplasm and periplasm - are present. However, the excess membrane triples the surface area of the cell relative to its volume, which may be related to Gemmata's sterol biosynthesis abilities. A 2014 study using similar methods reported confirmation of the earlier enclosed compartment hypothesis.
It has recently been shown that Gemmata obscuriglobus is able to take in large molecules via a process which in some ways resembles endocytosis, the process used by eukaryotic cells to engulf external items.
RNA sequencing shows that the planctomycetes are related to the Verrucomicrobia and possibly the Chlamydiae. A number of essential pathways are not organised as operons, which is unusual for bacteria. A number of genes have been found (through sequence comparisons) that are similar to genes found in eukaryotes. One such example is a gene sequence (in Gemmata obscuriglobus) that was found to have significant homology to the integrin alpha-V, a protein that is important in transmembrane signal transduction in eukaryotes.
The life cycle of many planctomycetes involves alternation between sessile cells and flagellated swarmer cells. The sessile cells bud to form the flagellated swarmer cells which swim for a while before settling down to attach and begin reproduction.
The currently accepted taxonomy is based on the List of Prokaryotic names with Standing in Nomenclature (LSPN) and the phylogeny is based on 16S rRNA-based LTP release 111 by The All-Species Living Tree Project.
♠ Strains found at the National Center for Biotechnology Information (NCBI) but not listed in the List of Prokaryotic names with Standing in Nomenclature (LSPN)
♪ Prokaryotes where no pure (axenic) cultures are isolated or available, i. e. not cultivated or can not be sustained in culture for more than a few serial passages
- Jeske, O., Schueler, M., Schumann, P., Schneider, A., Boedeker, C., Jogler, M., Bollschweiler, D., Rohde, M., Mayer, C., Engelhardt, H., Spring, S. & Jogler, C. (2015). "Planctomycetes do possess a peptidoglycan cell wall". Nature communications 6: 7116. doi:10.1038/ncomms8116. PMID 25964217.
- van Teeseling, M.C.F., Mesman, R.J., Kuru, E., Espaillat, A., Cava, F., Brun, Y.V., VanNieuwenhze, M.S., Kartal, B & van Niftrik, L. (2015). "Anammox Planctomycetes have a peptidoglycan cell wall". Nature communications 6: 6878. doi:10.1038/ncomms7878.
- Lindsay, M. R.; Webb, R. I.; Strous, M; Jetten, M. S.; Butler, M. K.; Forde, R. J.; Fuerst, J. A. (2001). "Cell compartmentalisation in planctomycetes: Novel types of structural organisation for the bacterial cell". Archives of microbiology 175 (6): 413–29. doi:10.1007/s002030100280. PMID 11491082.
- Glöckner, F. O.; Kube, M; Bauer, M; Teeling, H; Lombardot, T; Ludwig, W; Gade, D; Beck, A; Borzym, K; Heitmann, K; Rabus, R; Schlesner, H; Amann, R; Reinhardt, R (2003). "Complete genome sequence of the marine planctomycete Pirellula sp. Strain 1". Proceedings of the National Academy of Sciences 100 (14): 8298–303. doi:10.1073/pnas.1431443100. PMC 166223. PMID 12835416.
- Fieseler, L; Horn, M; Wagner, M; Hentschel, U (June 2004). "Discovery of the novel candidate phylum "Poribacteria" in marine sponges.". Applied and environmental microbiology 70 (6): 3724–32. doi:10.1128/aem.70.6.3724-3732.2004. PMC 427773. PMID 15184179.
- Santarella-Mellwig, R., Pruggnaller, S., Roos, N., Mattaj, I., & Devos, D. (2013). "Three-Dimensional Reconstruction of Bacteria with a Complex Endomembrane System". PLoS Biology 11: e1001565. doi:10.1371/journal.pbio.1001565. PMID 23700385.
- Sagulenko, E; Morgan, G. P.; Webb, R. I.; Yee, B; Lee, K. C.; Fuerst, J. A. (2014). "Structural studies of planctomycete Gemmata obscuriglobus support cell compartmentalisation in a bacterium". PLoS ONE 9 (3): e91344. doi:10.1371/journal.pone.0091344. PMC 3954628. PMID 24632833.
- Lonhienne, Thierry G. A.; Sagulenko, Evgeny; Webb, Richard I.; Lee, Kuo-Chang; Franke, Josef; Devos, Damien P.; Nouwens, Amanda; Carroll, Bernard J. & Fuerst, John A. (2010). "Endocytosis-like protein uptake in the bacterium Gemmata obscuriglobus". Proceedings of the National Academy of Sciences 107 (29): 12883–12888. doi:10.1073/pnas.1001085107. PMC 2919973. PMID 20566852.
- Williams, Caroline (2011). "Who are you calling simple?". New Scientist 211 (2821): 38–41. doi:10.1016/S0262-4079(11)61709-0
- Hou, S., Makarova, K.S., Saw, J.H., Senin, P., Ly, B.V., Zhou, Z., Ren, Y., Wang, J., Galperin, M.Y., Omelchenko, M.V., Wolf, Y.I., Yutin, N., Koonin, E.V., Stott, M.B., Mountain, B.W., Crowe, M.A., Smirnova, A.V., Dunfield, P.F., Feng, L., Wang, L. & Alam, M. (2008). "Complete genome sequence of the extremely acidophilic methanotroph isolate V4, Methylacidiphilum infernorum, a representative of the bacterial phylum Verrucomicrobia". Biology Direct 3 (1): 26.
- Glöckner, F. O., Kube, M., Bauer, M., Teeling, H., Lombardot, T., Ludwig, W., Gade, D., Beck, A., Borzym, K., Heitmann, K., Rabus, R., Schlesner, H., Amann, R. & Reinhardt, R. (2003). "Complete genome sequence of the marine planctomycete Pirellula sp. strain 1". Proceedings of the National Academy of Sciences 100 (14): 8298–8303. doi:10.1073/pnas.1431443100. PMC 166223. PMID 12835416.
- Jenkins, Cheryl, Kedar, Vishram & Fuerst, John A. (2002). "Gene discovery within the planctomycete division of the domain Bacteria". Genome Biology 3 (6): 1–11.
- See the List of Prokaryotic names with Standing in Nomenclature. Data extracted from the "Planctomycetes". Retrieved 2013-03-20.
- See the All-Species Living Tree Project . Data extracted from the "16S rRNA-based LTP release 111 (full tree)" (PDF). Silva Comprehensive Ribosomal RNA Database. Retrieved 2013-03-20.
A DC-to-DC converter is an electronic circuit or electromechanical device which converts a source of direct current (DC) from one voltage level to another. It is a type of electric power converter. Power levels range from very low (small batteries) to very high (high-voltage power transmission).
Before the development of power semiconductors and allied technologies, one way to convert the voltage of a DC supply to a higher voltage, for low-power applications, was to convert it to AC by using a vibrator, followed by a step-up transformer and rectifier. For higher power, an electric motor was used to drive a generator of the desired voltage (the two sometimes combined into a single "dynamotor" unit). These relatively inefficient and expensive procedures were used only when there was no alternative, as when powering a car radio, which then used thermionic valves (tubes) requiring much higher voltages than the 6 or 12 V available from a car battery. The introduction of power semiconductors and integrated circuits made it economically viable to use techniques such as those described below: for example, converting the DC power supply to high-frequency AC, using a small transformer to change the voltage, and rectifying back to DC. Although by 1976 transistor car radios no longer required high voltages, some amateur radio operators continued to use vibrator supplies and dynamotors for mobile transceivers that did require high voltages, even though transistorised power supplies were available.
While it was possible to derive a lower voltage from a higher one with a linear electronic circuit, or even a resistor, these methods dissipated the excess as heat; efficient conversion only became possible with solid-state switch-mode circuits.
DC to DC converters are used in portable electronic devices such as cellular phones and laptop computers, which are supplied with power primarily from batteries. Such devices often contain several sub-circuits, each with its own voltage level requirement different from that supplied by the battery or an external supply (sometimes higher or lower than the supply voltage). Additionally, the battery voltage declines as its stored energy is drained. Switched DC to DC converters offer a method to increase voltage from a partially lowered battery voltage, thereby saving the space that multiple batteries would otherwise require.
Most DC to DC converter circuits also regulate the output voltage. Some exceptions include high-efficiency LED power sources, which are a kind of DC to DC converter that regulates the current through the LEDs, and simple charge pumps which double or triple the output voltage.
Transformers used for voltage conversion at mains frequencies of 50–60 Hz must be large and heavy for powers exceeding a few watts. This makes them expensive, and they are subject to energy losses in their windings and due to eddy currents in their cores. DC-to-DC techniques that use transformers or inductors work at much higher frequencies, requiring only much smaller, lighter, and cheaper wound components. Consequently, these techniques are used even where a mains transformer could be used; for example, for domestic electronic appliances it is preferable to rectify mains voltage to DC, use switch-mode techniques to convert it to high-frequency AC at the desired voltage, then, usually, rectify to DC. The entire complex circuit is cheaper and more efficient than a simple mains transformer circuit of the same output.
Linear regulators, which output a stable DC voltage independent of input voltage and output load from a higher but less stable input by dissipating excess volt-amperes as heat, could literally be described as DC-to-DC converters, but this is not usual usage. (The same could be said of a simple voltage dropper resistor, whether or not stabilised by a following voltage regulator or Zener diode.)
Practical electronic converters use switching techniques. Switched-mode DC-to-DC converters convert one DC voltage level to another, which may be higher or lower, by storing the input energy temporarily and then releasing that energy to the output at a different voltage. The storage may be in either magnetic field storage components (inductors, transformers) or electric field storage components (capacitors). This conversion method can increase or decrease voltage. Switching conversion is more power efficient (often 75% to 98%) than linear voltage regulation, which dissipates unwanted power as heat. Fast semiconductor device rise and fall times are required for efficiency; however, these fast transitions combine with layout parasitic effects to make circuit design challenging. The higher efficiency of a switched-mode converter reduces the heatsinking needed, and increases battery endurance of portable equipment. Efficiency has improved since the late 1980s due to the use of power FETs, which are able to switch more efficiently with lower switching losses at higher frequencies than power bipolar transistors, and use less complex drive circuitry. Another important improvement in DC-DC converters is replacing the flywheel diode by synchronous rectification using a power FET, whose "on resistance" is much lower, reducing switching losses. Before the wide availability of power semiconductors, low-power DC-to-DC synchronous converters consisted of an electro-mechanical vibrator followed by a voltage step-up transformer feeding a vacuum tube or semiconductor rectifier, or synchronous rectifier contacts on the vibrator.
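The benefit of synchronous rectification mentioned above is easy to quantify with a conduction-loss comparison between a Schottky flywheel diode and a low on-resistance FET carrying the same current. A minimal sketch follows; the current, duty fraction, forward drop, and on-resistance are assumed example values, not figures for any particular device.

```python
# Conduction loss of the freewheeling element in a buck converter:
# a diode dissipates P = Vf * I * D, a synchronous FET P = I^2 * Ron * D.
I_LOAD = 10.0        # average current through the element, amperes (assumed)
DUTY_OFF = 0.5       # fraction of the cycle the element conducts (assumed)
V_F_SCHOTTKY = 0.45  # Schottky diode forward drop, volts (assumed)
R_ON_FET = 0.005     # MOSFET on-resistance, ohms (assumed)

p_diode = V_F_SCHOTTKY * I_LOAD * DUTY_OFF
p_fet = I_LOAD ** 2 * R_ON_FET * DUTY_OFF

print(f"diode conduction loss: {p_diode:.2f} W")  # ~2.25 W
print(f"FET conduction loss:   {p_fet:.2f} W")    # ~0.25 W
```

At these assumed values the FET dissipates roughly a tenth of what the diode would, which is why synchronous rectification pays off especially at low output voltages.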
Most DC-to-DC converters are designed to move power in only one direction, from dedicated input to output. However, all switching regulator topologies can be made bidirectional and able to move power in either direction by replacing all diodes with independently controlled active rectification. A bidirectional converter is useful, for example, in applications requiring regenerative braking of vehicles, where power is supplied to the wheels while driving, but supplied by the wheels when braking.
Switching converters are electronically complex, although this complexity is embodied in integrated circuits, with few external components needed. They need careful design of the circuit and physical layout to reduce switching noise (EMI/RFI) to acceptable levels and, like all high-frequency circuits, to ensure stable operation. Their cost was once higher than that of linear regulators in voltage-dropping applications, but it has dropped with advances in chip design.
DC-to-DC converters are available as integrated circuits (ICs) requiring few additional components. Converters are also available as complete hybrid circuit modules, ready for use within an electronic assembly.
In these DC-to-DC converters, energy is periodically stored within and released from a magnetic field in an inductor or a transformer, typically within a frequency range of 300 kHz to 10 MHz. By adjusting the duty cycle of the charging voltage (that is, the ratio of the on/off times), the amount of power transferred to a load can be more easily controlled, though this control can also be applied to the input current, the output current, or to maintain constant power. Transformer-based converters may provide isolation between input and output. In general, the term "DC-to-DC converter" refers to one of these switching converters. These circuits are the heart of a switched-mode power supply. Many topologies exist; the most common include:
- Step-down (buck): the output voltage is lower than the input voltage, and of the same polarity.
- True buck-boost: the output voltage is of the same polarity as the input and can be lower or higher.
- Split-pi (boost-buck): allows bidirectional voltage conversion, with the output voltage of the same polarity as the input, lower or higher.
- Flyback: an isolated topology with a single-transistor drive.
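For reference, the ideal lossless, continuous-mode transfer functions of the basic non-isolated topologies are simple functions of the duty cycle D. The sketch below evaluates the standard textbook relations; they are general results rather than anything specific to the list above.

```python
def buck(v_in, d):
    """Ideal step-down: Vout = D * Vin."""
    return d * v_in

def boost(v_in, d):
    """Ideal step-up: Vout = Vin / (1 - D)."""
    return v_in / (1.0 - d)

def buck_boost_inverting(v_in, d):
    """Ideal inverting buck-boost: Vout = -Vin * D / (1 - D)."""
    return -v_in * d / (1.0 - d)

v_in = 12.0  # example input voltage, volts
for d in (0.25, 0.5, 0.75):
    print(f"D={d:.2f}: buck={buck(v_in, d):5.1f} V, "
          f"boost={boost(v_in, d):5.1f} V, "
          f"inverting buck-boost={buck_boost_inverting(v_in, d):6.1f} V")
```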
In addition, each topology may be:
- Hard switched - transistors switch quickly while exposed to both full voltage and full current
- Resonant - an LC circuit shapes the voltage across the transistor and current through it so that the transistor switches when either the voltage or the current is zero
Magnetic DC-to-DC converters may be operated in two modes, according to the current in its main magnetic component (inductor or transformer):
- Continuous - the current fluctuates but never goes down to zero
- Discontinuous - the current fluctuates during the cycle, going down to zero at or before the end of each cycle
A converter may be designed to operate in continuous mode at high power, and in discontinuous mode at low power.
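Whether a given design sits in continuous or discontinuous mode depends on how the load current compares with the inductor ripple current. A minimal sketch, using assumed component values for a buck converter, computes the standard peak-to-peak ripple and the load level below which conduction becomes discontinuous.

```python
# Buck converter ripple current and CCM/DCM boundary (ideal, lossless).
# All component values are assumed for illustration.
V_IN = 12.0    # input voltage, volts
V_OUT = 5.0    # output voltage, volts
L = 22e-6      # inductance, henries
F_SW = 500e3   # switching frequency, hertz

d = V_OUT / V_IN                           # ideal duty cycle
delta_i = (V_IN - V_OUT) * d / (L * F_SW)  # ripple = (Vin - Vout) * D / (L * f)
i_boundary = delta_i / 2                   # DCM begins when load < half the ripple

print(f"duty cycle:          {d:.3f}")
print(f"ripple current:      {delta_i * 1e3:.0f} mA peak-to-peak")
print(f"discontinuous below: {i_boundary * 1e3:.0f} mA load")
```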
The half-bridge and flyback topologies are similar in that energy stored in the magnetic core needs to be dissipated so that the core does not saturate. Power transmission in a flyback circuit is limited by the amount of energy that can be stored in the core, while forward circuits are usually limited by the I/V characteristics of the switches.
Although MOSFET switches can tolerate simultaneous full current and voltage (although thermal stress and electromigration can shorten the MTBF), bipolar switches generally cannot, so they require the use of a snubber (or two).
High-current systems often use multiphase converters, also called interleaved converters. Multiphase regulators can have better ripple and better response times than single-phase regulators.
Switched-capacitor converters rely on alternately connecting capacitors to the input and output in differing topologies. For example, a switched-capacitor reducing converter might charge two capacitors in series and then discharge them in parallel. This would produce the same output power (less conversion losses) at, ideally, half the input voltage and twice the current. Because they operate on discrete quantities of charge, these are also sometimes referred to as charge pump converters. They are typically used in applications requiring relatively small currents, as at higher currents the increased efficiency and smaller size of switch-mode converters makes them a better choice. They are also used at extremely high voltages, as magnetic components would break down at such voltages.
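A first-order property of charge pumps is that the output voltage droops in proportion to load current, with an effective output resistance of roughly 1/(f·C). A minimal sketch applies this standard estimate to an ideal voltage doubler; the supply voltage, switching frequency, and flying-capacitor value are assumptions for illustration.

```python
# Ideal switched-capacitor doubler with first-order output droop:
# Vout ~= 2*Vin - Iload / (f * C). Values below are assumed.
V_IN = 3.3     # volts
F_SW = 1e6     # switching frequency, hertz
C_FLY = 1e-6   # flying capacitor, farads

def doubler_vout(i_load):
    return 2 * V_IN - i_load / (F_SW * C_FLY)

for i_load in (0.0, 0.01, 0.05, 0.1):
    print(f"Iload = {i_load * 1e3:5.1f} mA -> Vout ~ {doubler_vout(i_load):.2f} V")
```

The 1/(f·C) term also shows why charge pumps suit small currents: delivering more current means raising the switching frequency or the capacitor size.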
A motor-generator set, mainly of historical interest, consists of an electric motor and generator coupled together. A dynamotor combines both functions into a single unit with coils for both the motor and the generator functions wound around a single rotor; both coils share the same outer field coils or magnets. Typically the motor coils are driven from a commutator on one end of the shaft, when the generator coils output to another commutator on the other end of the shaft. The entire rotor and shaft assembly is smaller in size than a pair of machines, and may not have any exposed drive shafts.
Motor-generators can convert between any combination of DC and AC voltage and phase standards. Large motor-generator sets were widely used to convert industrial amounts of power while smaller units were used to convert battery power (6, 12 or 24 V DC) to a high DC voltage, which was required to operate vacuum tube (thermionic valve) equipment.
For lower-power requirements at voltages higher than supplied by a vehicle battery, vibrator or "buzzer" power supplies were used. The vibrator oscillated mechanically, with contacts that switched the polarity of the battery many times per second, effectively converting DC to square wave AC, which could then be fed to a transformer of the required output voltage(s). It made a characteristic buzzing noise.
- Step-down
- A converter where the output voltage is lower than the input voltage (like a buck converter).
- Step-up
- A converter that outputs a voltage higher than the input voltage (like a boost converter).
- Continuous Current Mode
- Current and thus the magnetic field in the inductive energy storage never reach zero.
- Discontinuous Current Mode
- Current and thus the magnetic field in the inductive energy storage may reach or cross zero.
- Noise
- Unwanted electrical and electromagnetic signal noise, typically switching artefacts.
- RF noise
- Switching converters inherently emit radio waves at the switching frequency and its harmonics. Switching converters that produce triangular switching current, such as the Split-Pi, forward converter, or Ćuk converter in continuous current mode, produce less harmonic noise than other switching converters. RF noise causes electromagnetic interference (EMI). Acceptable levels depend upon requirements, e.g. proximity to RF circuitry needs more suppression than simply meeting regulations.
- Input noise
- The input voltage may have non-negligible noise. Additionally, if the converter loads the input with sharp load edges, the converter can emit RF noise from the supplying power lines. This should be prevented with proper filtering in the input stage of the converter.
- Output noise
- The output of an ideal DC-to-DC converter is a flat, constant output voltage. However, real converters produce a DC output upon which is superimposed some level of electrical noise. Switching converters produce switching noise at the switching frequency and its harmonics. Additionally, all electronic circuits have some thermal noise. Some sensitive radio-frequency and analog circuits require a power supply with so little noise that it can only be provided by a linear regulator. Some analog circuits which require a power supply with relatively low noise can tolerate some of the less-noisy switching converters, e.g. using continuous triangular waveforms rather than square waves.
- "Vibrator Power Supplies". Radioremembered.org. Retrieved 18 January 2016.
- Ed Brorein (2012-05-16). "Watt's Up?: What Is Old is New Again: Soft-Switching and Synchronous Rectification in Vintage Automobile Radios". Keysight Technologies: Watt's Up?. Retrieved 2016-01-19.
- There is at least one example of a very large (three refrigerator-sized cabinets) and complex pre-transistor switching regulator using thyratron gas-filled tubes, although they appear to be used as regulators rather than for DC-to-DC conversion as such. This was the 1958 power supply for the IBM 704 computer, using 90kW of power.
- Radio Amateur's Handbook 1976, pub. ARRL, p331-332
- Andy Howard (2015-08-25). "How to Design DC-to-DC Converters". YouTube. Retrieved 2015-10-02.
- Stephen Sangwine (2 March 2007). Electronic Components and Technology, Third Edition. CRC Press. p. 73. ISBN 978-1-4200-0768-8.
- Jeff Barrow of Integrated Device Technology, Inc. (21 November 2011). "Understand and reduce DC/DC switching-converter ground noise". Eetimes.com. Retrieved 18 January 2016.
- Damian Giaouris et al. "Foldings and grazings of tori in current controlled interleaved boost converters". doi: 10.1002/cta.1906.
- Ron Crews and Kim Nielson. "Interleaving is Good for Boost Converters, Too". 2008.
- Keith Billings. "Advantages of Interleaving Converters". 2003.
- John Gallagher "Coupled Inductors Improve Multiphase Buck Efficiency". 2006.
- Juliana Gjanci. "On-Chip Voltage Regulation for Power Management inSystem-on-Chip". 2006. p. 22-23.
- Majumder, Ritwik; Ghosh, Arindam; Ledwich, Gerard F.; Zare, Firuz (2008). "Control of Parallel Converters for Load Sharing with Seamless Transfer between Grid Connected and Islanded Modes". eprints.qut.edu.au. Retrieved 2016-01-19.
- Iqbal, Sajid; Ahmed, Masood; Qureshi, Suhail A. (2007). "Investigation of Chaotic Behavior in DC-DC Converters". International Journal of Electrical, Computer, Electronics and Communication Engineering. WASET. pp. 1271–1274.
- Tse, Chi K.; Bernardo, Mario Di (2002). Complex behavior in switching power converters. Proceedings of the IEEE. pp. 768–781.
- Iqbal, Sajid; et al. (2014). Study of bifurcation and chaos in dc-dc boost converter using discrete-time map. IEEE International Conference on Mechatronics and Control (ICMC'2014) 2014. doi:10.1109/ICMC.2014.7231874.
- Fossas, Enric; Olivar, Gerard (1996). "Study of chaos in the buck converter". Circuits and Systems I: Fundamental Theory and Applications, IEEE Transactions on: 13–25.
- Making -5V 14-bit Quiet, section of Linear Technology Application Note 84, Kevin Hoskins, 1997, pp 57-59
We all (now) know that the World is curved. At kindergarten we're told the Earth is like a ball.
At high school we're told it's called a sphere.
At college we learn it's an oblate spheroid. (After millions of years of spinning, it's a little fatter around the equator than it is going over the Poles.) The difference is small, but very measurable.
If you go on to study further, you'll drown in the mathematics of Spherical Harmonics. I'm not going to delve into that today. (I've been there; unless math is your passion, you might want to play elsewhere. I have no interest in returning!)
Today, we're going to go back to a kindergarten model and assume the Earth is a round ball. We're going to look at a couple of consequences of that. First off, how far can you see?
Because the World is not flat, you can't see forever. As you look out to the horizon the planet falls away from you. This limits what you can see. Just how fast does this happen? How far can you see? Let's break out a little math to find out.
If we stick with the assumption that the Earth is a sphere (and a smooth and uniform one at that), we can produce a diagram like the one on the left.
A man of height h stands on a sphere of radius R and looks out to the horizon. The furthest point he can see is defined by the tangent that grazes the Earth and passes through his eye.
By definition, a tangent is normal (perpendicular) to the radius, and so we can create a right-angle triangle with a hypotenuse of length (R+h). For the two sides of the right angle, one will be of length R, and the other will be the distance the man can see (which we'll label as d).
Using Pythagoras, we can create an equation showing the relationship between all three sides of the triangle: (R+h)² = R² + d².
Expanding, then simplifying, allows us to compute an equation giving the distance to the horizon based on the radius of the planet and the height of the viewer: d = √(2Rh + h²).
Note – Of the two terms under the square root, the left term dwarfs the right term. I'll keep it in there, but as we run through some calculations you'll see that, until we reach a viewing altitude of something like a spacecraft in orbit, it can be ignored.
An adult person of around 6 ft tall has eyes approximately 1.8 m above the ground (yes, I'm using the Metric system). For the radius of the Earth, I'm going to use the value 6,371 Km (the mean radius).
Plugging in these values …
(We can see here, as commented above, that the small right-hand term is dwarfed by the left-hand term involving the radius.)
The solution is that the furthest an adult can see is 4.79 Km (which is just short of 3 miles).
A little child is closer to the ground. An average four-year-old is around 1 m tall. A child at this height can see just 3.57 Km (eight tenths of a mile less than an adult).
Tip – Put your child on your shoulders, and when they are sitting there, they should be able to spot things on the horizon around half a mile before you! (This is useful to know when dragons are chasing you).
What if we're higher up?
The top observation deck of the Eiffel Tower is at 273 m.
From this altitude, the horizon will be 58.98 Km away (36.65 miles).
The top floor of the Burj Khalifa tower in Dubai is 621 m high.
This would give a viewer a theortical view over the desert of 88.96 Km (55.28 miles).
An airliner cruising at 38,000 ft is at an altitude of 11.58 Km
Plugging this into the formula reveals an answer of 384.3 Km, which is 238.8 miles.
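If you'd like to check these numbers yourself, here's a short Python version of the derived formula, run for the same heights as above (pure geometry, no refraction):

```python
import math

R_EARTH_KM = 6371.0  # mean Earth radius, as used above

def horizon_km(h_m):
    """Straight-line distance to the horizon: d = sqrt(2*R*h + h^2)."""
    h_km = h_m / 1000.0
    return math.sqrt(2 * R_EARTH_KM * h_km + h_km ** 2)

for label, h_m in [("adult eyes (1.8 m)", 1.8),
                   ("child (1.0 m)", 1.0),
                   ("Eiffel Tower deck (273 m)", 273.0),
                   ("Burj Khalifa top floor (621 m)", 621.0),
                   ("airliner (11,580 m)", 11580.0)]:
    print(f"{label:28s} -> {horizon_km(h_m):7.2f} Km")
```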
NOTE – The distance to the horizon we calculate with our derived formula is the straight line distance from the viewer to the ground (the edges of the cone in the diagram to the left). How different is this to the distance that would be measured if we started directly under the plane, and pulled a tape measure around the globe to the point where the tangent scrapes the surface?
When we were dealing with small altitudes, the difference between the straight line distance and the ground distance along the curve of the planet was insignificant.
Let's calculate the delta between these two distances for our cruising aircraft.
To do this, we'll use a little trigonometry. Below is a diagram showing the subtended angle (from the center of the Earth) and its relationship to the distance to the horizon.
We know the circumference of the Earth: it's 2πR.
If we know the subtended angle, we can determine what ratio this represents of a full circle, and thus what ratio of the circumference this represents. This will give us the distance along the ground between the two points.
Using the values of d=384.3 Km and R=6,371 Km, and using the tan⁻¹ button on our calculator, we can calculate the subtended angle. It's 3.452°
(3.452/360) x (2 x π x 6371) = 383.8 Km
For a plane at high altitude, the difference between the straight line distance to the horizon and the curved distance on the Earth is about 500 m. (The difference when dealing with our first example of a man of height 1.8 m is a fraction of a millimeter).
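Again, a couple of lines of Python confirm the arithmetic for the airliner (the only new ingredient is the arc length R·θ):

```python
import math

R = 6371.0   # Earth radius, Km
d = 384.3    # straight-line horizon distance for the airliner, Km

theta = math.atan(d / R)   # subtended angle at the Earth's center, radians
arc = R * theta            # ground distance along the curve, Km

print(f"angle: {math.degrees(theta):.3f} degrees")
print(f"arc distance: {arc:.1f} Km, difference: {(d - arc) * 1000:.0f} m")
```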
This segues nicely into our next example, the Golden Gate Bridge.
The Golden Gate Bridge, possibly the most photographed bridge in the World, spans the opening of San Francisco Bay. Completed in 1937, it boasts two towers of height 230 m and these are 1,280 m apart.
Trivia - Want to paint your house the same color as the Golden Gate Bridge? Here are the CMYK colors: C = Cyan: 0%, M = Magenta: 69%, Y = Yellow: 100%, K = Black: 6%.
Because of the curvature of the Earth, the towers are a little wider apart at the top than at the bottom (see the exaggerated diagram below). Let's calculate how big this difference is.
We're assuming the towers are constructed perpendicular to the ground. We'll use l to represent the distance between the two towers at their base (1,280 m), and t to represent the height of the towers above the ground (230 m). We'll use x to describe the difference between the towers at their tops.
Similar to above, we'll calculate a ratio using the subtended angle. There are two similar segments, one with arc length l and a radius of R, and one with an arc length of l+x and a radius of R+t. Both of these segments share the same subtended angle on the globe.
The ratio of the length of each arc to its respective circumference is the same for both segments: l/(2πR) = (l+x)/(2π(R+t)).
A few lines of algebra later, and we have a simple formula for the difference between the two arcs: x = l·t/R.
Plugging the values in reveals that x = 46.2 mm (that's about 1 13/16").
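Or, as a quick Python check using the same figures:

```python
R = 6_371_000.0  # Earth radius, metres
l = 1280.0       # distance between the tower bases, metres
t = 230.0        # tower height, metres

x = l * t / R    # extra separation at the tops: x = l*t/R
print(f"tops are wider apart by {x * 1000:.1f} mm")  # ~46.2 mm
```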
The tops of the Golden Gate Bridge's towers are almost two inches wider apart than their bases because of the curvature of the Earth!
Tell everyone in the car this next time you drive over it!
Before my inbox gets flooded with comments: yes, it's true, my horizon calculations were very simple, and I assumed no complications from refraction in the atmosphere. In real life, our atmosphere is a very complex and dynamic organism filled with a mixture of gases and vapors. It has pressure gradients, winds, temperature gradients and refractive index differences. Many of these variations act in your favor, and on most days, because of refraction (the bending of light between media of different densities), the distance you can see to the horizon is greater than simple geometry predicts.
Over the past decade social stories have shown promise as a positive and proactive classroom strategy for teaching social skills to children with autism spectrum disorders (ASD). They continue to be widely discussed, reviewed, and recommended as an effective and user-friendly behavioral intervention. Social stories allow the child to receive direct instruction in learning the appropriate social behaviors that are needed for success in the classroom setting. The simplicity and utility of social stories make them a popular choice for use in both general and special education settings. Both the National Autism Center (NAC; 2015) and the National Professional Development Center on Autism (NPDC; 2015) have identified story-based intervention as an evidence-based practice.
What is a Social Story?
A social story is a short story written in a child-specific format, describing a social situation, person, skill, experience, or concept in terms of relevant cues and appropriate social behavior. The objective of this intervention strategy is to enhance a child's understanding of social situations and teach an appropriate behavioral response that can be practiced. Each story is designed to teach the child how to manage his or her own behavior during a specific social situation by describing where the activity will take place, when it will occur, what will happen, who is involved, and why the child should behave in a certain way. In essence, social stories seek to answer the who, what, when, where, and why aspects of a social situation in order to improve the child's perspective taking. Subsequent social interactions allow for the frequent practice of the described behavioral response cue and the learning of new social behavior. Although a number of commercial publications offer generic social stories for common social situations, it is best to individualize the content of the story according to the child's unique behavioral needs.
Writing a Social Story
Social stories follow an explicit format of approximately 5 to 10 sentences describing the social skill, the appropriate behavior, and others’ viewpoint (perspective) of the behavior. These sentences are written according to comprehension level of the child and include the following basic sentence types.
- Descriptive sentences which provide statements of fact and objectively define the “wh” question of the social situation.
- Directive sentences that describe the desired behavior and generally begin with “I will work on” or “I will try.”
- Perspective sentences which describe other individual’s reaction and feelings associated with the target situation.
- Affirmative sentences which stress a rule or directive in the story.
- Control sentences that help the child to remember the directive.
- Cooperative sentences that describe who will help and how help will be given.
The social story should be written in a way that ensures accuracy of interpretation, using vocabulary and print size appropriate for the child’s ability. Pictures illustrating the concept can be included for children who have difficulty reading text without cues. They can be simple line drawings, clip art, books, or actual photographs. An example of a social story (text only) is provided at the end of this article.
Implementing a Social Story
Social stories should not be used in isolation and are not intended to address all of the behavioral challenges of the child with ASD. Rather, they should be integrated into the student's IEP or behavior support plan on a daily basis to complement other interventions and strategies. When the social story is first implemented, the teacher must be certain that the child understands the story and social skill being taught. The child can then read the story independently, read it aloud to an adult, or listen as the adult reads the story. The most appropriate method is dependent upon the individual abilities and needs of the child. Regardless of how the story is implemented, it is necessary for comprehension of the story to be assessed. Two approaches are recommended. The first is to have the student complete a checklist or answer questions at the end of the story. The other is to have the student role play and demonstrate what he or she will do the next time the situation occurs. Once comprehension has been assessed, a daily implementation schedule should be created. It should be noted that there are no limitations on how long a student can use a social story. Some students will learn a new social behavior quickly while others will need to read their stories for several weeks. A critical feature of implementing a social story is monitoring student progress and collecting data to evaluate improved social outcomes. The following steps are recommended when developing and implementing a social story intervention.
- Identify the need for behavioral intervention.
- Define the inappropriate behavior.
- Define an alternative positive behavior.
- Write the story using the social story format.
- Include the social story in the child’s behavior plan.
- Implement the social story.
- Practice the social skill used in the social story.
- Evaluate comprehension.
- Remind the child where the social skill should be used.
- Prompt the child to use the social skill at appropriate times during the day.
- Affirm the child when they use the appropriate social behavior.
- Monitor Progress.
- Evaluate outcome.
Effectiveness of Social Stories
As we know, there are no interventions or treatments that can cure autism. In fact, there are very few that have been scientifically shown to produce significant, long-term benefits for children with ASD. Although the published research on social stories provides support for their effectiveness in reducing challenging behavior and increasing social interaction for children with ASD, it is uncertain whether they alone are responsible for long-lasting changes in social behaviors. Other strategies (e.g., reinforcement schedules, social skills training) implemented together with social stories may be required to produce desired changes in social behavior. As a result, social stories should be included as part of a multicomponent intervention in the classroom setting. While further outcome research is needed, social stories may be considered an effective approach for facilitating social skills in children with ASD.
Example of a Social Story
David, a second grader with ASD, has a difficult time waiting to talk with his teacher, repeatedly speaks out of turn and interrupts other students. When told to wait, he frequently experiences a “meltdown” and refuses to cooperate. His teacher developed a social story called “Waiting My Turn to Talk.”
Waiting My Turn to Talk
- At school I like to talk to the teacher and other students. (descriptive sentence)
- Many times other students want to talk with the teacher too. (descriptive sentence)
- Students cannot talk to the teacher at the same time. (descriptive sentence)
- I will wait my turn to talk (directive sentence )
- When it is not my turn, I will try to listen to what others are saying and not interrupt.(directive sentence)
- These are good rules to follow (affirmative sentence)
- The teacher will help me by calling my name when it is my turn to talk (cooperative sentence)
- My teacher is happy when I am a good listener and wait for my turn to talk. (perspective sentence)
- The other kids will like me when I wait my turn and don’t interrupt them. (perspective sentence)
- I will try to remember to be a good listener and wait for my turn to talk. (control sentence)
David’s Comprehension Questions
- When should I talk to my teacher?
- What should I do when other students are talking?
- Will my teacher and the other kids be happy if I wait my turn to talk?
Recommended Readings and Resources
Crozier, S., & Sileo, N. M. (2005). Encouraging positive behavior with social stories: An intervention for children with autism spectrum disorders. TEACHING Exceptional Children, 37, 26-31.
Gray, C. A. (2000). Writing social stories with Carol Gray [Videotape and workbook]. Arlington, TX: Future Horizons.
Gray, C. A. (2000). The new social story book. Arlington, TX: Future Horizons.
Gray, C. A. (2002). My Social Stories Book. London: Jessica Kingsley Publishers.
Sansosti, F. J., & Powell-Smith, K. A. (2006). Using social stories to improve the social behavior of children with Asperger syndrome. Journal of Positive Behavior Interventions, 8(1), 43–57.
Spencer, V. G., Simpson, C. G., & Lynch, S. A. (2008). Using social stories to increase positive behaviors for children with autism spectrum disorders. Intervention in School and Clinic, 44, 58-61.
Lee A. Wilkinson, PhD, CCBT, NCSP is author of the award-winning book, A Best Practice Guide to Assessment and Intervention for Autism and Asperger Syndrome in Schools, published by Jessica Kingsley Publishers. Dr. Wilkinson is also editor of a recent volume in the APA School Psychology Book Series, Autism Spectrum Disorder in Children and Adolescents: Evidence-Based Assessment and Intervention in Schools, and author of the new book, Overcoming Anxiety and Depression on the Autism Spectrum: A Self-Help Guide Using CBT.
© 2013 Lee A. Wilkinson, PhD
One of the most important components of climate change is the variation of temperature at the Earth's surface. Because temperature changes at the surface affect the distribution of temperature in the subsurface, ground temperatures comprise an archive of past climate signals.
The thermal regime at shallow depths in the crust is controlled by the temperature condition at the surface and by the heat flowing from the deeper parts of the Earth. In an idealized homogeneous crust, if the surface temperature is steady, the distribution of ground temperature is a linear function of depth, with a gradient governed by heat flow (q) and thermal conductivity (k). However, if the surface temperature changes with time, the ground temperature will depart from this linear distribution. A progressive cooling at the surface will cool the rocks near the surface, increase the thermal gradient at shallow depths, and lead to a temperature profile with curvature like the one shown in green in the illustration above. A progressive warming, on the other hand, produces a temperature profile with smaller, even negative, thermal gradients at shallower depths, like the one shown in red. If the surface temperature oscillates with time, oscillations in the ground temperature will follow. The magnitude of the departure of ground temperature from its undisturbed steady state is related to the amplitude of the surface temperature variation, and the depth to which disturbances to the steady state temperature can be measured is related to the timing of the original temperature change at the surface. A ground surface temperature history is therefore recorded in the subsurface. By careful analysis of the variation of temperature with depth, one can reconstruct past fluctuations at the Earth's surface.
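For the idealized homogeneous case, the departure from the steady-state profile at depth z, a time t after a step change ΔT at the surface, has a standard closed form: T'(z, t) = ΔT·erfc(z / (2√(κt))), where κ is the thermal diffusivity. The sketch below evaluates it; the diffusivity is a typical crustal value assumed for illustration.

```python
import math

KAPPA = 1.0e-6       # thermal diffusivity of crustal rock, m^2/s (assumed)
SECONDS_PER_YEAR = 3.15576e7
DT_SURFACE = 1.0     # size of the surface temperature step, kelvin

def perturbation(z_m, t_years):
    """Departure from the steady profile: dT * erfc(z / (2*sqrt(kappa*t)))."""
    diffusion_length = 2.0 * math.sqrt(KAPPA * t_years * SECONDS_PER_YEAR)
    return DT_SURFACE * math.erfc(z_m / diffusion_length)

for z in (20, 50, 100, 200):
    row = "  ".join(f"{t:4d} yr: {perturbation(z, t):.2f} K"
                    for t in (10, 100, 500))
    print(f"z = {z:3d} m:  {row}")
```

The output shows the depth-time coupling directly: a century-old surface event is plainly visible tens of metres down but has barely reached 200 m, which is why deeper parts of a borehole record progressively older climate.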
Money can be defined as an object that acts as a physical medium of exchange in transactions. Each country has its own currency, and some currencies are shared by several countries (for example, many European countries use the euro as a medium of exchange in transactions).
The word "money" is originated from the temple of Hera, which is located in Rome. Money is very essential for human to survive the life and it is mandatory to purchase basic needful things such as clothes, food and shelter. Now-a-days, value is given to a person based on his/her wealth and status. Money system varies from one country to other. The word money is determined in three forms, which include,
- Commodity money
- Fiat money
- Fiduciary money
Money serves four essential functions:
Medium of exchange:
Money can be described as an intermediary object commonly used in payment when buying goods and services; its use facilitates trade in the economy. A medium of exchange must possess the following attributes:
- Value common assets
- Constant utility
- Low cost of preservation
- High market value
- Resistance to counterfeiting
Unit of Account:
A 'unit of account' is a standard numerical unit of value assigned to goods and services so that their relative worth can be compared. To function as a unit of account, money must be divisible into smaller units, fungible, and of a specific, verifiable weight or measure.
Store of Value
A 'store of value' is a fundamental component of a modern economic system, in which some medium is needed to hold wealth between exchanges of goods and services. Any durable physical asset or commodity can serve as a store of value.
Standard of Deferred Payments
Not all economists consider this a main function of money, even though it appears in some works. Money serves as a standard unit for defining future payments based on present obligations; in other words, it allows purchasing now and paying later.
In a barter system, goods and services are exchanged without the involvement of money. The monetary economy is thus a remarkable advance over the barter system.
Constitution, the body of doctrines and practices that form the fundamental organizing principle of a political state. In some cases, such as the United States, the constitution is a specific written document; in others, such as the United Kingdom, it is a collection of documents, statutes, and traditional practices that are generally accepted as governing political matters. States that have a written constitution may also have a body of traditional or customary practices that may or may not be considered to be of constitutional standing. Virtually every state claims to have a constitution, but not every government conducts itself in a consistently constitutional manner.
The general idea of a constitution and of constitutionalism originated with the ancient Greeks and especially in the systematic, theoretical, normative, and descriptive writings of Aristotle. In his Politics, Nicomachean Ethics, Constitution of Athens, and other works, Aristotle used the Greek word for constitution (politeia) in several different senses. The simplest and most neutral of these was “the arrangement of the offices in a polis” (state). In this purely descriptive sense of the word, every state has a constitution, no matter how badly or erratically governed it may be.
This article deals with the theories and classical conceptions of constitutions as well as the features and practice of constitutional government throughout the world.
Theories about constitutions
Aristotle’s classification of the “forms of government” was intended as a classification of constitutions, both good and bad. Under good constitutions—monarchy, aristocracy, and the mixed kind to which Aristotle applied the same term politeia—one person, a few individuals, or the many rule in the interest of the whole polis. Under the bad constitutions—tyranny, oligarchy, and democracy—the tyrant, the rich oligarchs, or the poor dēmos, or people, rule in their own interest alone.
Aristotle regarded the mixed constitution as the best arrangement of offices in the polis. Such a politeia would contain monarchic, aristocratic, and democratic elements. Its citizens, after learning to obey, were to be given opportunities to participate in ruling. This was a privilege only of citizens, however, since neither noncitizens nor slaves would have been admitted by Aristotle or his contemporaries in the Greek city-states. Aristotle regarded some humans as natural slaves, a point on which later Roman philosophers, especially the Stoics and jurists, disagreed with him. Although slavery was at least as widespread in Rome as in Greece, Roman law generally recognized a basic equality among all humans. This was because, the Stoics argued, all humans are endowed by nature with a spark of reason by means of which they can perceive a universal natural law that governs all the world and can bring their behaviour into harmony with it.
Roman law thus added to Aristotelian notions of constitutionalism the concepts of a generalized equality, a universal regularity, and a hierarchy of types of laws. Aristotle had already drawn a distinction between the constitution (politeia), the laws (nomoi), and something more ephemeral that corresponds to what could be described as day-to-day policies (psēphismata). The latter might be based upon the votes cast by the citizens in their assembly and might be subject to frequent changes, but nomoi, or laws, were meant to last longer. The Romans conceived of the all-encompassing rational law of nature as the eternal framework to which constitutions, laws, and policies should conform—the constitution of the universe.
Influence of the church
Christianity endowed this universal constitution with a clearly monarchical cast. The Christian God, it came to be argued, was the sole ruler of the universe, and his laws were to be obeyed. Christians were under an obligation to try to constitute their earthly cities on the model of the City of God.
Both the church and the secular authorities with whom the church came into conflict in the course of the Middle Ages needed clearly defined arrangements of offices, functions, and jurisdictions. Medieval constitutions, whether of church or state, were considered legitimate because they were believed to be ordained of God or tradition or both. Confirmation by officers of the Christian Church was regarded as a prerequisite of the legitimacy of secular rulers. Coronation ceremonies were incomplete without a bishop’s participation. The Holy Roman emperor travelled to Rome in order to receive his crown from the pope. Oaths, including the coronation oaths of rulers, could be sworn only in the presence of the clergy because oaths constituted promises to God and invoked divine punishment for violations. Even in an imposition of a new constitutional order, novelty could always be legitimized by reference to an alleged return to a more or less fictitious “ancient constitution.” It was only in Italy during the Renaissance and in England after the Reformation that the “great modern fallacy” (as the Swiss historian Jacob Burckhardt called it) was established, according to which citizens could rationally and deliberately adopt a new constitution to meet their needs.
The Seebeck coefficient (also known as thermopower, thermoelectric power, and thermoelectric sensitivity) of a material is a measure of the magnitude of an induced thermoelectric voltage in response to a temperature difference across that material, as induced by the Seebeck effect. The SI unit of the Seebeck coefficient is volts per kelvin (V/K), although it is more often given in microvolts per kelvin (μV/K).
The use of materials with a high Seebeck coefficient is one of many important factors for the efficient behaviour of thermoelectric generators and thermoelectric coolers. More information about high-performance thermoelectric materials can be found in the Thermoelectric materials article. In thermocouples the Seebeck effect is used to measure temperatures, and for accuracy it is desirable to use materials with a Seebeck coefficient that is stable over time.
Physically, the magnitude and sign of the Seebeck coefficient can be approximately understood as being given by the entropy per unit charge carried by electrical currents in the material. It may be positive or negative. In conductors that can be understood in terms of independently moving, nearly-free charge carriers, the Seebeck coefficient is negative for negatively charged carriers (such as electrons), and positive for positively charged carriers (such as electron holes).
Definition
One way to define the Seebeck coefficient is the voltage built up when a small temperature gradient is applied to a material, and when the material has come to a steady state where the current density is zero everywhere. If the temperature difference ΔT between the two ends of a material is small, then the Seebeck coefficient of a material is defined as:
S = −ΔV / ΔT
where ΔV is the thermoelectric voltage seen at the terminals. (See below for more on the signs of ΔV and ΔT.)
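A minimal numerical sketch of this definition follows; the voltage and temperature values are illustrative, not measured data, and a real voltmeter reading would give only a relative coefficient, as discussed below:

```python
# Sign convention S = -dV/dT: a positive S means the hot end sits at the
# lower voltage.

def seebeck_coefficient(delta_v_volts, delta_t_kelvin):
    """Seebeck coefficient in V/K from an open-circuit voltage and a small dT."""
    return -delta_v_volts / delta_t_kelvin

# Hot end 30 uV below the cold end, with the hot end 5 K warmer:
S = seebeck_coefficient(delta_v_volts=-30e-6, delta_t_kelvin=5.0)
print(f"S = {S * 1e6:+.1f} uV/K")  # +6.0 uV/K
```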
Note that the voltage shift expressed by the Seebeck effect cannot be measured directly, since the measured voltage (by attaching a voltmeter) contains an additional voltage contribution, due to the temperature gradient and Seebeck effect in the measurement leads. The voltmeter voltage is always dependent on relative Seebeck coefficients among the various materials involved.
Most generally and technically, the Seebeck coefficient is defined in terms of the portion of electric current driven by temperature gradients, as in the vector differential equation
J = −σ∇V − σS∇T
where J is the current density, σ is the electrical conductivity, ∇V is the voltage gradient, and ∇T is the temperature gradient. The zero-current, steady-state special case described above has J = 0, which implies that the two current density terms have cancelled out and so ∇V = −S∇T.
The sign is made explicit in the following expression:
S = −(V_left − V_right) / (T_left − T_right)
Thus, if S is positive, the end with the higher temperature has the lower voltage, and vice versa. The voltage gradient in the material will point against the temperature gradient.
The Seebeck effect is generally dominated by the contribution from charge carrier diffusion (see below) which tends to push charge carriers towards the cold side of the material until a compensating voltage has built up. As a result, in p-type semiconductors (which have only positive mobile charges, electron holes), S is positive. Likewise, in n-type semiconductors (which have only negative mobile charges, electrons), S is negative. In most conductors, however, the charge carriers exhibit both hole-like and electron-like behaviour and the sign of S usually depends on which of them predominates.
Relationship to other thermoelectric coefficients
According to the second Thomson relation (which holds for all non-magnetic materials in the absence of an externally applied magnetic field), the Seebeck coefficient is related to the Peltier coefficient Π by the exact relation
Π = T S
where T is the thermodynamic temperature.
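As a quick arithmetic illustration of this relation (the Seebeck value used is an assumed round number, not a tabulated one):

```python
S = 6.5e-6      # V/K, assumed Seebeck coefficient
T = 300.0       # K
Pi = T * S      # Peltier coefficient, in volts (i.e., joules of heat per coulomb)
print(f"Pi = {Pi * 1e3:.2f} mV")   # 1.95 mV: heat carried per unit charge
```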
Measurement
Relative Seebeck coefficient
In practice the absolute Seebeck coefficient is difficult to measure directly, since the voltage output of a thermoelectric circuit, as measured by a voltmeter, only depends on differences of Seebeck coefficients. This is because electrodes attached to a voltmeter must be placed onto the material in order to measure the thermoelectric voltage. The temperature gradient then also typically induces a thermoelectric voltage across one leg of the measurement electrodes. Therefore the measured Seebeck coefficient is a contribution from the Seebeck coefficient of the material of interest and the material of the measurement electrodes. This arrangement of two materials is usually called a thermocouple.
The measured Seebeck coefficient is then a contribution from both materials and can be written as:
S_AB = S_A − S_B
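A short sketch of the consequence for thermocouples: the EMF measured across an A-B couple depends only on the difference of the two legs' Seebeck coefficients, integrated over the temperature span between the junctions. The coefficient values below are made-up constants, not tabulated thermocouple data:

```python
import numpy as np

def thermocouple_emf(S_A, S_B, T_cold, T_hot, n=1000):
    """EMF (volts) of an A-B couple; S_A, S_B give each leg's S(T) in V/K."""
    T = np.linspace(T_cold, T_hot, n)
    return np.trapz(S_A(T) - S_B(T), T)

S_leg_A = lambda T: 25e-6 + 0 * T    # hypothetical constant +25 uV/K
S_leg_B = lambda T: -15e-6 + 0 * T   # hypothetical constant -15 uV/K

emf = thermocouple_emf(S_leg_A, S_leg_B, T_cold=273.0, T_hot=373.0)
print(f"EMF over 100 K: {emf * 1e3:.2f} mV")  # 40 uV/K x 100 K = 4.00 mV
```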
Absolute Seebeck coefficient
Although only relative Seebeck coefficients are important for externally measured voltages, the absolute Seebeck coefficient can be important for other effects where voltage is measured indirectly. Determination of the absolute Seebeck coefficient therefore requires more complicated techniques and is more difficult, however such measurements have been performed on standard materials. These measurements only had to be performed once for all time, and for all materials; for any other material, the absolute Seebeck coefficient can be obtained by performing a relative Seebeck coefficient measurement against a standard material.
A measurement of the Thomson coefficient μ_T, which expresses the strength of the Thomson effect, can be used to yield the absolute Seebeck coefficient through the relation:
S(T) = ∫₀ᵀ (μ_T(T′) / T′) dT′
provided that μ_T is measured down to absolute zero. The reason this works is that S is expected to decrease to zero as the temperature is brought to zero (a consequence of Nernst's theorem). Such a measurement based on the integration of μ_T/T′ was published in 1932, though it relied on the interpolation of the Thomson coefficient in certain regions of temperature.
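A numerical sketch of this integration route; the Thomson-coefficient model below is a made-up smooth function standing in for tabulated low-temperature data:

```python
import numpy as np

def absolute_seebeck(mu_T, T_max, n=100000):
    """S(T_max) = integral from 0 to T_max of mu_T(T')/T' dT' (trapezoid rule)."""
    T = np.linspace(1e-6, T_max, n)   # avoid dividing by zero at T' = 0
    return np.trapz(mu_T(T) / T, T)

mu_model = lambda T: 1e-9 * T         # hypothetical: mu_T rising linearly with T
print(f"S(300 K) = {absolute_seebeck(mu_model, 300.0) * 1e6:.2f} uV/K")  # 0.30
```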
Superconductors have zero Seebeck coefficient, as mentioned below. By making one of the wires in a thermocouple superconducting, it is possible to get a direct measurement of the absolute Seebeck coefficient of the other wire, since it alone determines the measured voltage from the entire thermocouple. A publication in 1958 used this technique to measure the absolute Seebeck coefficient of lead between 7.2 K and 18 K, thereby filling in an important gap in the previous 1932 experiment mentioned above.
The combination of the superconductor-thermocouple technique up to 18 K, with the Thomson-coefficient-integration technique above 18 K, allowed determination of the absolute Seebeck coefficient of lead up to room temperature. By proxy, these measurements led to the determination of absolute Seebeck coefficients for all materials, even up to higher temperatures, by a combination of Thomson coefficient integrations and thermocouple circuits.
The difficulty of these measurements, and the rarity of reproducing experiments, lends some degree of uncertainty to the absolute thermoelectric scale thus obtained. In particular, the 1932 measurements may have incorrectly measured the Thomson coefficient over the range 20 K to 50 K. Since nearly all subsequent publications relied on those measurements, this would mean that all of the commonly used values of absolute Seebeck coefficient (including those shown in the figures) are too low by about 0.3 μV/K, for all temperatures above 50 K.
Seebeck coefficients for some common materials
In the table below are Seebeck coefficients at room temperature for some common, nonexotic materials, measured relative to platinum. The Seebeck coefficient of platinum itself is approximately −5 μV/K at room temperature, and so the values listed below should be compensated accordingly. For example, the absolute Seebeck coefficients of Cu, Ag, and Au are about 1.5 μV/K, and that of Al about −1.5 μV/K. The Seebeck coefficient of semiconductors depends strongly on doping, with generally positive values for p-doped materials and negative values for n-doped materials.
Material | Seebeck coefficient relative to platinum (μV/K)
Gold, silver, copper | 6.5
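Converting a relative entry from the table above into an absolute value is simple arithmetic, sketched here using the approximate platinum value quoted above:

```python
S_PT_ABSOLUTE = -5.0   # uV/K, approximate room-temperature value for platinum

def absolute_from_relative(s_rel_vs_pt_uV_per_K):
    """S_abs(material) = S(material vs Pt) + S_abs(Pt)."""
    return s_rel_vs_pt_uV_per_K + S_PT_ABSOLUTE

print(absolute_from_relative(6.5))   # gold/silver/copper: about +1.5 uV/K
```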
Physical factors that determine the Seebeck coefficient
A material's temperature, crystal structure, and impurities influence the value of thermoelectric coefficients. The Seebeck effect can be attributed to two things: charge-carrier diffusion and phonon drag.
Charge carrier diffusion
On a fundamental level, an applied voltage difference refers to a difference in the thermodynamic chemical potential of charge carriers, and the direction of the current under a voltage difference is determined by the universal thermodynamic process in which (given equal temperatures) particles flow from high chemical potential to low chemical potential. In other words, the direction of the current in Ohm's law is determined via the thermodynamic arrow of time (the difference in chemical potential could be exploited to produce work, but is instead dissipated as heat which increases entropy). On the other hand, for the Seebeck effect not even the sign of the current can be predicted from thermodynamics, and so to understand the origin of the Seebeck coefficient it is necessary to understand the microscopic physics.
Charge carriers (such as thermally excited electrons) constantly diffuse around inside a conductive material. Due to thermal fluctuations, some of these charge carriers travel with a higher energy than average, and some with a lower energy. When no voltage differences or temperature differences are applied, the carrier diffusion perfectly balances out and so on average one sees no current: J = 0. A net current can be generated by applying a voltage difference (Ohm's law), or by applying a temperature difference (Seebeck effect). To understand the microscopic origin of the thermoelectric effect, it is useful to first describe the microscopic mechanism of the normal Ohm's law electrical conductance, that is, to describe what determines the σ in J = −σ∇V. Microscopically, what is happening in Ohm's law is that higher energy levels have a higher concentration of carriers per state, on the side with higher chemical potential. For each interval of energy, the carriers tend to diffuse and spread into the area of the device where there are fewer carriers per state of that energy. As they move, however, they occasionally scatter dissipatively, which re-randomizes their energy according to the local temperature and chemical potential. This dissipation empties out the carriers from these higher energy states, allowing more to diffuse in. The combination of diffusion and dissipation favours an overall drift of the charge carriers towards the side of the material where they have a lower chemical potential (see Datta, ch. 11).
For the thermoelectric effect, now, consider the case of uniform voltage (uniform chemical potential) with a temperature gradient. In this case, at the hotter side of the material there is more variation in the energies of the charge carriers, compared to the colder side. This means that high energy levels have a higher carrier occupation per state on the hotter side, but also the hotter side has a lower occupation per state at lower energy levels. As before, the high-energy carriers diffuse away from the hot end, and produce entropy by drifting towards the cold end of the device. However, there is a competing process: at the same time low-energy carriers are drawn back towards the hot end of the device. Though these processes both generate entropy, they work against each other in terms of charge current, and so a net current only occurs if one of these drifts is stronger than the other. The net current is given by J = −σS∇T, where (as shown below) the thermoelectric coefficient σS depends literally on how conductive high-energy carriers are, compared to low-energy carriers. The distinction may be due to a difference in rate of scattering, a difference in speeds, a difference in density of states, or a combination of these effects.
The processes described above apply in materials where each charge carrier sees an essentially static environment, so that its motion can be described independently from other carriers and independently of other dynamics (such as phonons). In particular, in electronic materials with weak electron-electron interactions, weak electron-phonon interactions, etc., it can be shown in general that the linear response conductance is
σ = ∫ σ(E) (−∂f/∂E) dE
and the linear response thermoelectric coefficient is
σS = (k_B / (−e)) ∫ σ(E) · ((E − μ) / (k_B T)) · (−∂f/∂E) dE
where σ(E) is the energy-dependent conductivity and f(E) is the Fermi–Dirac distribution function. These equations are known as the Mott relations, after Sir Nevill Francis Mott. The derivative −∂f/∂E is a function peaked around the chemical potential (the Fermi level) μ, with a width of approximately 3.5 k_B T. The energy-dependent conductivity (a quantity that cannot actually be directly measured; one only measures the total σ) is calculated as σ(E) = e² D(E) ν(E), where D(E) is the electron diffusion constant and ν(E) is the electronic density of states (in general, both are functions of energy).
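A numerical sketch of the Mott relations, evaluating both integrals for a toy band with σ(E) ∝ E^(3/2) above a band edge at E = 0; the model conductivity, chemical potential, and energy window are all illustrative assumptions:

```python
import numpy as np

kB = 8.617e-5   # Boltzmann constant in eV/K; with charge in units of e, S comes out in V/K

def mott_seebeck(sigma_E, mu, T, E_min=-1.0, E_max=2.0, n=200001):
    """Seebeck coefficient S = (sigma*S)/sigma from the two Mott integrals."""
    E = np.linspace(E_min, E_max, n)
    x = (E - mu) / (kB * T)
    # -df/dE of the Fermi-Dirac distribution, in a numerically stable form:
    minus_dfdE = np.exp(-np.abs(x)) / (kB * T * (1.0 + np.exp(-np.abs(x)))**2)
    sigma = np.trapz(sigma_E(E) * minus_dfdE, E)
    sigmaS = -kB * np.trapz(sigma_E(E) * x * minus_dfdE, E)   # k_B/(-e) with e = 1
    return sigmaS / sigma

sigma_model = lambda E: np.maximum(E, 0.0)**1.5   # toy conduction band
print(mott_seebeck(sigma_model, mu=-0.10, T=300.0))  # negative, i.e. n-type-like
```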
In materials with strong interactions, none of the above equations can be used since it is not possible to consider each charge carrier as a separate entity. The Wiedemann–Franz law can also be exactly derived using the non-interacting electron picture, and so in materials where the Wiedemann–Franz law fails (such as superconductors), the Mott relations also generally tend to fail.
The formulae above can be simplified in a couple of important limiting cases:
Mott formula in metals
In semimetals and metals, where transport occurs only near the Fermi level and σ(E) changes slowly in the range E ≈ μ ± k_B T, a Sommerfeld expansion of the Mott relation gives
S = (π²/3) · (k_B / (−e)) · k_B T · (d ln σ(E)/dE), evaluated at E = μ
This expression is sometimes called "the Mott formula"; however, it is much less general than Mott's original formula expressed above.
In the Drude–Sommerfeld degenerate free electron gas with scattering, the value of d ln σ(E)/dE is of order 1/(k_B T_F), where T_F is the Fermi temperature, and so a typical value of the Seebeck coefficient in the Fermi gas is S ≈ −(π²/3)(k_B/e)(T/T_F) (the prefactor varies somewhat depending on details such as dimensionality and scattering). In highly conductive metals the Fermi temperatures are typically around 10⁴ – 10⁵ K, and so it is understandable why their absolute Seebeck coefficients are only of order 1 – 10 μV/K at room temperature. Note that whereas the free electron gas is expected to have a negative Seebeck coefficient, real metals actually have complicated band structures and may exhibit positive Seebeck coefficients (examples: Cu, Ag, Au).
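A back-of-envelope check of this estimate; copper's Fermi temperature, taken here as roughly 8 × 10⁴ K, is an assumed textbook-style value:

```python
import math

kB_over_e = 86.17e-6          # V/K
T, T_F = 300.0, 8.0e4         # room temperature; assumed Fermi temperature of Cu
S = -(math.pi**2 / 3) * kB_over_e * (T / T_F)
print(f"S ~ {S * 1e6:.2f} uV/K")   # about -1 uV/K: the right order of magnitude
```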
The quantity d ln σ(E)/dE in semimetals is sometimes calculated from the measured derivative of σ with respect to some energy shift induced by field effect. This is not necessarily correct, and the estimate of d ln σ(E)/dE can be incorrect (by a factor of two or more), since the disorder potential depends on screening, which also changes with field effect.
Mott formula in semiconductors
In semiconductors at low levels of doping, transport only occurs far away from the Fermi level. At low doping in the conduction band (where E_c − μ ≫ k_B T, where E_c is the minimum energy of the conduction band edge), one has f(E) ≈ e^(−(E − μ)/(k_B T)). Approximating the conduction band levels' conductivity function as σ(E) = A_c ((E − E_c)/(k_B T))^(a_c) for some constants A_c and a_c,
S_c = −(k_B/e) · (a_c + 1 + (E_c − μ)/(k_B T))
whereas in the valence band, when μ − E_v ≫ k_B T and σ(E) = A_v ((E_v − E)/(k_B T))^(a_v),
S_v = +(k_B/e) · (a_v + 1 + (μ − E_v)/(k_B T))
The values of a_c and a_v depend on material details; in bulk semiconductors these constants range between 1 and 3, the extremes corresponding to acoustic-mode lattice scattering and ionized-impurity scattering.
In extrinsic (doped) semiconductors either the conduction or valence band will dominate transport, and so one of the numbers above will give the measured values. In general, however, the semiconductor may also be intrinsic, in which case the bands conduct in parallel, and so the measured values will be
S = (σ_c S_c + σ_v S_v) / (σ_c + σ_v)
The highest Seebeck coefficient is obtained when the semiconductor is lightly doped; however, a high Seebeck coefficient is not necessarily useful on its own. For thermoelectric power devices (coolers, generators) it is more important to maximize the thermoelectric power factor σS², or the thermoelectric figure of merit, and the optimum generally occurs at high doping levels.
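A small numerical sketch of the nondegenerate-limit formulas above; the band offsets, exponents, and the two bands' conductivities are illustrative choices, not material data:

```python
kB_over_e = 86.17e-6    # V/K
kB_eV = 8.617e-5        # eV/K

def S_conduction(a_c, Ec_minus_mu_eV, T):
    return -kB_over_e * (a_c + 1 + Ec_minus_mu_eV / (kB_eV * T))

def S_valence(a_v, mu_minus_Ev_eV, T):
    return +kB_over_e * (a_v + 1 + mu_minus_Ev_eV / (kB_eV * T))

T = 300.0
S_c = S_conduction(a_c=1.0, Ec_minus_mu_eV=0.20, T=T)   # electron band: S < 0
S_v = S_valence(a_v=1.0, mu_minus_Ev_eV=0.90, T=T)      # hole band: S > 0

# Intrinsic case: both bands conduct in parallel, weighted by conductivity.
sigma_c, sigma_v = 5.0, 1.0        # hypothetical conductivities (arbitrary units)
S_mix = (sigma_c * S_c + sigma_v * S_v) / (sigma_c + sigma_v)
print(f"S_c = {S_c*1e6:.0f} uV/K, S_v = {S_v*1e6:.0f} uV/K, mix = {S_mix*1e6:.0f} uV/K")
```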
Phonon drag
Phonons are not always in local thermal equilibrium; they move against the thermal gradient. They lose momentum by interacting with electrons (or other carriers) and imperfections in the crystal. If the phonon-electron interaction is predominant, the phonons will tend to push the electrons to one end of the material, hence losing momentum and contributing to the thermoelectric field. This contribution is most important in the temperature region where phonon-electron scattering is predominant. This happens for
T ≈ θ_D / 5
where θ_D is the Debye temperature. At lower temperatures there are fewer phonons available for drag, and at higher temperatures they tend to lose momentum in phonon-phonon scattering instead of phonon-electron scattering. This region of the thermopower-versus-temperature function is highly variable under a magnetic field.
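A one-line estimate of where this drag peak sits, taking lead's Debye temperature as an assumed value of about 105 K:

```python
theta_D = 105.0   # K, assumed Debye temperature (roughly that of lead)
print(f"phonon-drag contribution peaks near {theta_D / 5:.0f} K")
```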
Relationship with entropy
The Seebeck coefficient of a material corresponds thermodynamically to the amount of entropy "dragged along" by the flow of charge inside a material; it is in some sense the entropy per unit charge in the material.
Superconductors have zero Seebeck coefficient, because the current-carrying charge carriers (Cooper pairs) have no entropy; hence, the transport of charge carriers (the supercurrent) has zero contribution from any temperature gradient that might exist to drive it.
References
- Thermopower is a misnomer, as this quantity does not actually express a power: note that the unit of thermopower (V/K) differs from the unit of power (watts).
- Blundell, Stephen J.; Blundell, Katherine M. Concepts in Thermal Physics. Oxford University Press.
- Borelius, G.; Keesom, W. H.; Johannson, C. H.; Linde, J. O. (1932). "Establishment of an Absolute Scale for the Thermo-electric Force". Proceedings of the Royal Academy of Sciences at Amsterdam 35 (1): 10.
- Christian, J. W.; Jan, J.-P.; Pearson, W. B.; Templeton, I. M. (1958). "Thermo-Electricity at Low Temperatures. VI. A Redetermination of the Absolute Scale of Thermo-Electric Power of Lead". Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences 245 (1241): 213. Bibcode:1958RSPSA.245..213C. doi:10.1098/rspa.1958.0078.
- Cusack, N.; Kendall, P. (1958). "The Absolute Scale of Thermoelectric Power at High Temperature". Proceedings of the Physical Society 72 (5): 898. doi:10.1088/0370-1328/72/5/429.
- Roberts, R. B. (1986). "Absolute scales for thermoelectricity". Measurement 4 (3): 101–103. doi:10.1016/0263-2241(86)90016-3.
- "The Seebeck Coefficient". Electronics Cooling (accessed 1 February 2013).
- Moore, J. P. (1973). "Absolute Seebeck coefficient of platinum from 80 to 340 K and the thermal and electrical conductivities of lead from 80 to 400 K". Journal of Applied Physics 44 (3): 1174. doi:10.1063/1.1662324.
- Kong, Ling Bing. Waste Energy Harvesting. Springer. pp. 263–403. ISBN 978-3-642-54634-1.
- Datta, Supriyo (2005). Quantum Transport: Atom to Transistor. Cambridge University Press. ISBN 9780521631457.
- Cutler, M.; Mott, N. (1969). "Observation of Anderson Localization in an Electron Gas". Physical Review 181 (3): 1336. Bibcode:1969PhRv..181.1336C. doi:10.1103/PhysRev.181.1336.
- Jonson, M.; Mahan, G. (1980). "Mott's formula for the thermopower and the Wiedemann-Franz law". Physical Review B 21 (10): 4223. doi:10.1103/PhysRevB.21.4223.
- Hwang, E. H.; Rossi, E.; Das Sarma, S. (2009). "Theory of thermopower in two-dimensional graphene". Physical Review B 80 (23). doi:10.1103/PhysRevB.80.235415.
- Seeger, Karlheinz. Semiconductor Physics: An Introduction. Springer.
- Snyder, G. Jeffrey. "Thermoelectrics". http://www.its.caltech.edu/~jsnyder/thermoelectrics/
- Bulusu, A.; Walker, D. G. (2008). "Review of electronic transport models for thermoelectric materials". Superlattices and Microstructures 44: 1. doi:10.1016/j.spmi.2008.02.008.