score | text | url
---|---|---|
4.0625 | Tidal friction, in astronomy, strain produced in a celestial body (such as the Earth or Moon) that undergoes cyclic variations in gravitational attraction as it orbits, or is orbited by, a second body. Friction occurs between water tides and sea bottoms, particularly where the sea is relatively shallow, or between parts of the solid crust of a planet or satellite that move against each other. Tidal friction on the Earth prevents the tidal bulge, which is raised in Earth’s seas and crust by the Moon’s pull, from staying directly under the Moon. Instead, the bulge is carried out from directly under the Moon by the rotation of the Earth, which spins almost 30 times for every time the Moon revolves in its orbit. The mutual attraction between the Moon and the material in the bulge tends to accelerate the Moon in its orbit, thereby moving the Moon farther from Earth by about three centimetres (1.2 inches) per year, and to slow Earth’s daily rotation by a small fraction of a second per year. Millions of years from now these effects may cause the Earth to keep the same face always turned to a distant Moon and to rotate once in a day about 50 times longer than the present one, equal to the month of that time. This condition probably will not be stable, due to the tidal effects of the Sun on the Earth–Moon system.
That the Moon keeps the same part of its surface always turned toward Earth is attributed to the past effects of tidal friction in the Moon. The theory of tidal friction was first developed mathematically after 1879 by the English astronomer George Darwin (1845–1912), son of the naturalist Charles Darwin. | http://www.britannica.com/topic/tidal-friction |
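The rates quoted above can be put in perspective with a back-of-envelope extrapolation. This sketch assumes a current mean Earth–Moon distance of 384,400 km (a value not given in the article) and treats the ~3 cm/yr recession rate as constant, which it is not over geologic time.

```python
# Rough extrapolation of lunar recession due to tidal friction.
# Assumptions (not from the article): mean Earth-Moon distance of
# 384,400 km today, and a constant recession rate of 3 cm/yr.

MEAN_DISTANCE_KM = 384_400
RECESSION_CM_PER_YEAR = 3.0

def distance_after(years):
    """Mean Earth-Moon distance (km) after `years`, at a constant rate."""
    gained_km = RECESSION_CM_PER_YEAR * years / 100 / 1000  # cm -> km
    return MEAN_DISTANCE_KM + gained_km

# Over a million years the Moon gains only ~30 km -- a tiny fraction of
# its orbital distance, which is why these effects unfold so slowly.
print(distance_after(1_000_000))  # 384430.0
```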
4.125 | The resource has been added to your collection
Worksheets and activities for teaching and reinforcing the use of nouns for English, Spanish, German and ESL students.
This resource was reviewed using the Curriki Review rubric and received an overall Curriki Review System rating of 3, as of 2013-01-20.
This collection contains 9 grammar resources, ranging from a worksheet identifying parts of speech, to a booklet of handouts for ESL students about nouns, to worksheets in PPT formats for whole class instruction, to a beginner level German language learning handout about the gender and number of nouns, to worksheets for ESL students about English plural spelling changes and nouns, to lessons for first graders learning Spanish words about family and rules for masculine and feminine endings, to some pull down menu multiple choice noun worksheets and to an external link to a variety of grammar worksheets. This rich collection will help students reinforce their understanding about nouns.
Not Rated Yet. | http://www.curriki.org/oer/Noun-Worksheets/ |
4.25 | EARTH is starting to crumble under the strain of climate change.
Over the last decade, rock avalanches and landslides have become more common in high mountain ranges, apparently coinciding with the increase in exceptionally warm periods (see “Early signs”). The collapses are triggered by melting glaciers and permafrost, which remove the glue that holds steep mountain slopes together.
Worse may be to come. Thinning glaciers on volcanoes could destabilise vast chunks of their summit cones, triggering mega-landslides capable of flattening cities such as Seattle and devastating local infrastructure.
For Earth this phenomenon is nothing new, but the last time it happened, few humans were around to witness it. Several studies have shown that around 10,000 years ago, as the planet came out of the last ice age, vast portions of volcanic summit cones collapsed, leading to enormous landslides.
To assess the risk of this happening again, Daniel Tormey of ENTRIX, an environmental consultancy based in Los Angeles, studied a huge landslide that occurred 11,000 years ago on Planchón-Peteroa. He focused on this glaciated volcano in Chile because its altitude and latitude make it likely to feel the effects of climate change before others.
“Around one-third of the volcanic cone collapsed,” Tormey says. Ten billion cubic metres of rock crashed down the mountain and smothered 370 square kilometres of land, travelling 95 kilometres in total (Global and Planetary Change, DOI: 10.1016/j.gloplacha.2010.08.003). Studies have suggested that intense rain cannot provide the lubrication needed for this to happen, so Tormey concludes that glacier melt must have been to blame.
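The figures quoted above can be sanity-checked with simple arithmetic: ten billion cubic metres of rock spread over 370 square kilometres implies an average deposit thickness of roughly 27 metres.

```python
# Consistency check of the Planchon-Peteroa landslide figures quoted above.
volume_m3 = 10e9          # ten billion cubic metres of rock
area_km2 = 370
area_m2 = area_km2 * 1e6  # 1 km^2 = 1,000,000 m^2

mean_thickness_m = volume_m3 / area_m2
print(round(mean_thickness_m, 1))  # 27.0 metres of debris, on average
```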
With global temperatures on a steady rise, Tormey is concerned that history will repeat itself on volcanoes all over the world. He thinks that many volcanoes in temperate zones could be at risk, including in the Ring of Fire – the horseshoe of volcanoes that surrounds the Pacific Ocean (see map). “There are far more human settlements and activities near the slopes of glaciated active volcanoes today than there were 10,000 years ago, so the effects could be catastrophic,” he says.
The first volcanoes to go will most likely be in the Andes, where temperatures are rising fastest as a result of global warming. Any movement here could be an early sign of trouble to come elsewhere. David Pyle, a volcanologist at the University of Oxford, agrees. “This is a real risk and a particularly serious hazard along the Andes,” he says.
Meanwhile, ongoing studies by Bill McGuire of University College London and Rachel Lowe at the University of Exeter, UK, are showing that non-glaciated volcanoes could also be at greater risk of catastrophic collapse if climate change increases rainfall.
“We have found that 39 cities with populations greater than 100,000 are situated within 100 kilometres of a volcano that has collapsed in the past and which may, therefore, be capable of collapsing in the future,” says McGuire.
Mount Cook (Aoraki), New Zealand
Just after midnight on 14 December 1991, 12 million cubic metres of rock and ice peeled away from the summit of New Zealand’s highest mountain. The landslide travelled 7.5 kilometres and narrowly missed slumbering hikers in an alpine hut. It occurred after an exceptionally warm week, when temperatures were 8.5 °C above average, and reduced the height of the mountain by around 10 metres.
Mount Dzhimarai-Khokh, Russia
More than 100 people were killed on 20 September 2002 when their villages were swept away after part of the peak, in the north Caucasus mountains, collapsed. Over 100 million cubic metres of debris travelled 20 kilometres. Warming permafrost is thought to have been partly to blame.
Mount Rosa, Italy
Following an unusual spring heatwave across Europe in 2007, the Alpine mountain suffered a spectacular rock avalanche, in which 300,000 cubic metres of rock fell, landing in a dry seasonal lake. Had the lake contained water, the avalanche would have generated a massive outpouring, with catastrophic consequences for the village of Macugnaga downstream.
More on these topics: | https://www.newscientist.com/article/mg20827825.100-a-warming-world-could-leave-cities-flattened?full=true |
4.03125 | Beacon Lesson Plan Library
Meet the Five Food Groups
Bay District Schools
This lesson is designed to invite first grade students to identify the five food groups and the foods within each group as shown on the food pyramid.
The student describes a wide variety of classification schemes and patterns related to physical characteristics and sensory attributes, such as rhythm, sound, shapes, colors, numbers, similar objects, similar events.
The student classifies food and food combinations according to the Food Guide Pyramid.
-Books such as
Leedy, Loreen. [The Edible Pyramid: Good Eating Every Day]. Holiday House, 1994.
Sharmat, Mitchell. [Gregory the Terrible Eater]. Scholastic, 1980.
-Manipulatives of foods from the food pyramid (See Preparations)
-Paper and pencil
-Food pictures or empty food packages representing each of the five food groups
-Chart paper with the Food Guide Pyramid drawn on it
-Student copies of a blank Food Guide Pyramid
1. Research background information on the five food groups and why there are five instead of four. Research key nutrients for each group. Note that nutrient-rich foods are now classified into five food groups: milk, meat, vegetable, fruit, and grain.
2. Gather manipulatives of foods for students to handle. Lakeshore Learning Materials (www.lakeshorelearning.com) has a Food and Nutrition Theme Box (item # LA452) which contains various manipulative food items.
3. Cut out pictures of food from magazines.
4. Acquire magazines from which students can also cut out pictures of food.
5. Prepare a chart with the Food Guide Pyramid for the lesson.
6. Prepare individual copies of the Food Guide Pyramid to distribute to students.
1. Introduce lesson by brainstorming with students what they know about the five food groups and the Food Guide Pyramid using the K-W-L procedure. Ask students what they (K)now about the five food groups and the food pyramid. Ask students what they (W)ant to learn about the five food groups and the Food Guide Pyramid. Read [The Edible Pyramid: Good Eating Every Day] to the entire class.
2. Explain: “You may have learned that there are four food groups, but there has been a change. Now, there are five food groups.”
3. Introduce pictures of the five food groups, what foods are included in them, and the Food Guide Pyramid. Let the students handle the manipulative food items.
4. Students take turns placing food items in the Food Guide Pyramid. With markers, draw a Food Guide Pyramid on chart paper. Students then choose pictures to place (glue) on the Food Guide Pyramid.
5. Give students paper with a blank Food Guide Pyramid and encourage them to cut out pictures from magazines to place on their own pyramid.
6. Discuss: “What do foods from the milk group have in common? Why don't eggs belong to the milk group? What characteristics do foods from each group have in common?”
7. Conclude the activity by reading one of the suggested books such as [Gregory the Terrible Eater]. Ask students, “What have you (L)earned about the five food groups and the Food Guide Pyramid?” Students record responses in their journals.
1. Students cut 10 pictures of food from magazines and place 8 out of 10 in the correct places within their Food Guide Pyramid.
2. Students categorize and sort out foods into the five food groups.
3. Students recite the acronym to the teacher. (See Extensions)
4. Students write in their journals explaining their own Food Guide Pyramid and how they classified foods on it.
After the lesson, students learn an acronym to help them remember the five food groups. | http://www.beaconlearningcenter.com/lessons/lesson.asp?ID=169 |
4.125 | Technology Helps Autistic Children with Social Skills
A new research project suggests virtual worlds can help autistic children develop social skills beyond their anticipated levels.
In the study, called the Echoes Project, scientists developed an interactive environment that uses multi-touch screen technology to project scenarios to children.
The technology allows researchers to study a child’s reactions to new situations in real time.
During sessions in the virtual environment, primary school children experiment with different social scenarios, allowing the researchers to compare their reactions with those they display in real-world situations.
“Discussions of the data with teachers suggest a fascinating possibility,” said project leader Kaska Porayska-Pomsta, Ph.D.
“Learning environments such as Echoes may allow some children to exceed their potential, behaving and achieving in ways that even teachers who knew them well could not have anticipated.”
“A teacher observing a child interacting in such a virtual environment may gain access to a range of behaviors from individual children that would otherwise be difficult or impossible to observe in a classroom,” she added.
Early findings from this research show that practice with various scenarios has improved the quality of the interaction for some of the children.
Researchers believe the virtual environment and an increased ability to manage their own behavior enables a child to concentrate on following a virtual character’s gaze or to focus on a pointing gesture, thus developing the skills vital for good communication and effective learning.
The findings could prove particularly useful in helping children with autism to develop skills they normally find difficult.
Porayska-Pomsta said: “Since autistic children have a particular affinity with computers, our research shows it may be possible to use digital technology to help develop their social skills.
“The beauty of it is that there are no real-world consequences, so children can afford to experiment with different social scenarios without real-world risks,” she added.
“In the longer term, virtual platforms such as the ones developed in the Echoes project could help young children to realize their potential in new and unexpected ways,” Porayska-Pomsta said.
Nauert PhD, R. (2015). Technology Helps Autistic Children with Social Skills. Psych Central. Retrieved on February 12, 2016, from http://psychcentral.com/news/2011/10/24/technology-helps-autistic-children-with-social-skills/30648.html | http://psychcentral.com/news/2011/10/24/technology-helps-autistic-children-with-social-skills/30648.html |
4.21875 | This resource is a set of instructional materials developed to help beginning physics students build a solid understanding of vector algebra. It contains two lecture presentations in PDF format and a companion assessment. It gives an overview of terminology, vector notation, and a variety of methods for solving problems relating to vectors. One of the authors' goals is to help students differentiate between the uses of vectors in mathematics vs. physics.
This resource is part of a collection developed by the NSF-funded Mathematics Across the Community College Curriculum (MAC 3).
This resource is part of a Physics Front Topical Unit.
Topic: Kinematics: The Physics of Motion Unit Title: Vectors
This is an exemplary set of PowerPoint materials for teachers to introduce vector basics, including vector addition/subtraction and how to calculate vector components. See Assessments below for a companion unit test. All may be freely downloaded. To read about the underlying pedagogy employed by the authors, go to Reference Material below and click on Bridging the Vector Calculus Gap.
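The vector basics the materials cover can be illustrated in a few lines of code. This is a generic sketch, not taken from the MAC3 resources themselves: it resolves vectors into components and adds them component-wise, the same operations the slides teach.

```python
# Generic illustration of vector components and component-wise addition
# (not from the MAC3 materials). Angles are measured from the +x axis.
import math

def components(magnitude, angle_deg):
    """Resolve a vector into (x, y) components."""
    rad = math.radians(angle_deg)
    return (magnitude * math.cos(rad), magnitude * math.sin(rad))

def add(v, w):
    """Component-wise vector addition."""
    return (v[0] + w[0], v[1] + w[1])

a = components(5.0, 0.0)    # (5.0, 0.0)
b = components(5.0, 90.0)   # ~(0.0, 5.0)
s = add(a, b)
magnitude = math.hypot(*s)  # length of the resultant vector
print(round(magnitude, 3))  # 7.071, i.e. 5 * sqrt(2)
```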
| http://www.thephysicsfront.org/items/detail.cfm?ID=8391 |
4.25 | In plate tectonics, a divergent boundary or divergent plate boundary (also known as a constructive boundary or an extensional boundary) is a linear feature that exists between two tectonic plates that are moving away from each other. Divergent boundaries within continents initially produce rifts which eventually become rift valleys. Most active divergent plate boundaries occur between oceanic plates and exist as mid-oceanic ridges. Divergent boundaries also form volcanic islands which occur when the plates move apart to produce gaps which molten lava rises to fill.
Current research indicates that complex convection within the Earth's mantle allows material to rise to the base of the lithosphere beneath each divergent plate boundary. This supplies the area with vast amounts of heat and a reduction in pressure that melts rock from the asthenosphere (or upper mantle) beneath the rift area forming large flood basalt or lava flows. Each eruption occurs in only a part of the plate boundary at any one time, but when it does occur, it fills in the opening gap as the two opposing plates move away from each other.
Over millions of years, tectonic plates may move many hundreds of kilometers away from both sides of a divergent plate boundary. Because of this, rocks closest to a boundary are younger than rocks further away on the same plate.
At divergent boundaries, two plates move apart from each other and the space that this creates is filled with new crustal material sourced from molten magma that forms below. The origin of new divergent boundaries at triple junctions is sometimes thought to be associated with the phenomenon known as hotspots. Here, exceedingly large convective cells bring very large quantities of hot asthenospheric material near the surface and the kinetic energy is thought to be sufficient to break apart the lithosphere. The hot spot which may have initiated the Mid-Atlantic Ridge system currently underlies Iceland which is widening at a rate of a few centimeters per year.
Divergent boundaries are typified in the oceanic lithosphere by the rifts of the oceanic ridge system, including the Mid-Atlantic Ridge and the East Pacific Rise, and in the continental lithosphere by rift valleys such as the famous East African Great Rift Valley. Divergent boundaries can create massive fault zones in the oceanic ridge system. Spreading is generally not uniform, so where spreading rates of adjacent ridge blocks are different, massive transform faults occur. These are the fracture zones, many bearing names, that are a major source of submarine earthquakes. A sea floor map will show a rather strange pattern of blocky structures that are separated by linear features perpendicular to the ridge axis. If one views the sea floor between the fracture zones as conveyor belts carrying the ridge on each side of the rift away from the spreading center, the action becomes clear. Crest depths of the old ridges, parallel to the current spreading center, will be older and deeper, owing to thermal contraction and subsidence.
It is at mid-ocean ridges that one of the key pieces of evidence forcing acceptance of the seafloor spreading hypothesis was found. Airborne geomagnetic surveys showed a strange pattern of symmetrical magnetic reversals on opposite sides of ridge centers. The pattern was far too regular to be coincidental as the widths of the opposing bands were too closely matched. Scientists had been studying polar reversals and the link was made by Lawrence W. Morley, Frederick John Vine and Drummond Hoyle Matthews in the Morley–Vine–Matthews hypothesis. The magnetic banding directly corresponds with the Earth's polar reversals. This was confirmed by measuring the ages of the rocks within each band. The banding furnishes a map in time and space of both spreading rate and polar reversals.
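The age–distance relationship described above follows directly from a constant spreading rate: a rock's age is its distance from the ridge axis divided by the half-spreading rate. The sketch below uses an assumed, typical rate of 2 cm/yr, which is not a figure from this article.

```python
# Illustration of why rocks farther from a divergent boundary are older.
# The 2 cm/yr half-spreading rate is an assumed typical value.

HALF_RATE_KM_PER_MYR = 20  # 2 cm/yr = 20 km per million years

def age_myr(distance_from_ridge_km):
    """Approximate crustal age in millions of years."""
    return distance_from_ridge_km / HALF_RATE_KM_PER_MYR

print(age_myr(100))   # 5.0  -> 100 km from the axis: ~5 Myr old
print(age_myr(1000))  # 50.0 -> 1000 km out: ~50 Myr old
```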
- Mid-Atlantic Ridge
- Red Sea Rift
- Baikal Rift Zone
- East African Rift
- East Pacific Rise
- Gakkel Ridge
- Galapagos Rise
- Explorer Ridge
- Juan de Fuca Ridge
- Pacific-Antarctic Ridge
- West Antarctic Rift
- Great Rift Valley | https://en.wikipedia.org/wiki/Divergent_boundary |
4.15625 | The Little Red Hen is a classic story for nearly all adults, and many children. Here it is retold and enhanced in order to provide a framework for illustrating and reviewing the concepts of productive resources and incentives. After reading the story, students will categorize resources into land, labor, capital and entrepreneurship and be able to identify what future incentives the dog, the cat and the mouse will have to help the little hen in her work. Students will have the opportunity to explore bread making.
In this lesson you will be taking on the role of an investigative reporter to solve the Amazing Farmer Mystery. The goal will be to use seven clues provided throughout the lesson in order to figure out how so few farmers can produce enough food and fiber for the nation.
In World War II pennies were made of steel and zinc instead of copper and women were working at jobs that men had always been hired to do. Why? Because during war times, scarcity forces many things to change!
The following lessons come from the Council for Economic Education's library of publications. Clicking the publication title or image will take you to the Council for Economic Education Store for more detailed information.
Designed primarily for elementary and middle school students, each of the 15 lessons in this guide introduces an economics concept through activities with modeling clay.
10 out of 17 lessons from this publication relate to this EconEdLink lesson.
This publication contains complete instructions for teaching the lessons in Choices and Changes, Grades 5-6. The Choices and Changes series is designed to help students understand how the U.S. economy works and their roles in the economy as consumers, savers and workers.
9 out of 15 lessons from this publication relate to this EconEdLink lesson.
This publication contains fourteen lessons that use a unique blend of games, simulations, and role playing to illustrate economics in a way elementary students will enjoy.
5 out of 16 lessons from this publication relate to this EconEdLink lesson. | http://www.econedlink.org/economic-standards/EconEdLink-related-publications.php?lid=229 |
4.0625 | Big Picture TV
Video length: 5:04 min.
High School: 6 Disciplinary Core Ideas, 2 Cross Cutting Concepts
Notes From Our Reviewers
The CLEAN collection is hand-picked and rigorously reviewed for scientific accuracy and classroom effectiveness.
Read what our review team had to say about this resource below or learn more about
how CLEAN reviews teaching materials
Teaching Tips | Science | Pedagogy |
- Use as a resource after teaching about the carbon cycle and greenhouse effect.
- Run video through once, and then restart it to eliminate commercial for classroom use.
- Educator may want to use accompanying visual examples of ocean acidification, ice albedo, and water vapor as a greenhouse gas.
About the Science
- A British scientist, who has been involved with IPCC AR4, explains amplifying feedback. The primary focus is on how water vapor functions as a greenhouse gas, but he also cites other examples of climate feedbacks - ice albedo and ocean acidification.
- Passed initial science review - expert science review pending.
About the Pedagogy
- Good and accessible explanation of what a feedback mechanism is/does and the difference between positive and negative feedback.
- This is strictly an interview with a scientist - no visuals to illustrate.
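The distinction the video draws between positive and negative feedback can be sketched numerically (this example is illustrative, not taken from the video): each step, the system's response feeds back into the next step, scaled by a gain.

```python
# Toy feedback model: a 1-unit perturbation feeds back with a fixed gain.
# Negative gain counteracts the change; a fractional positive gain
# amplifies it to a finite total; a gain above 1 runs away.

def run_feedback(initial_change, gain, steps):
    """Total accumulated change after `steps` rounds of feedback."""
    total, change = 0.0, initial_change
    for _ in range(steps):
        total += change
        change *= gain  # this step's response drives the next step
    return total

print(round(run_feedback(1.0, -0.5, 50), 3))  # 0.667: negative feedback damps
print(round(run_feedback(1.0, 0.5, 50), 3))   # 2.0: amplified, but stable
print(run_feedback(1.0, 1.5, 20) > 1000)      # True: runaway amplification
```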
Next Generation Science Standards See how this Video supports:
Disciplinary Core Ideas: 6
HS-ESS1.B2:Cyclical changes in the shape of Earth’s orbit around the sun, together with changes in the tilt of the planet’s axis of rotation, both occurring over hundreds of thousands of years, have altered the intensity and distribution of sunlight falling on the earth. These phenomena cause a cycle of ice ages and other gradual climate changes.
HS-ESS2.A1:Earth’s systems, being dynamic and interacting, cause feedback effects that can increase or decrease the original changes.
HS-ESS2.D1:The foundation for Earth’s global climate systems is the electromagnetic radiation from the sun, as well as its reflection, absorption, storage, and redistribution among the atmosphere, ocean, and land systems, and this energy’s re-radiation into space.
HS-ESS2.D2:Gradual atmospheric changes were due to plants and other organisms that captured carbon dioxide and released oxygen.
HS-ESS2.D3:Changes in the atmosphere due to human activity have increased carbon dioxide concentrations and thus affect climate.
HS-ESS2.E1:The many dynamic and delicate feedbacks between the biosphere and other Earth systems cause a continual co-evolution of Earth’s surface and the life that exists on it.
Cross Cutting Concepts: 2
HS-C4.2:When investigating or describing a system, the boundaries and initial conditions of the system need to be defined and their inputs and outputs analyzed and described using models.
HS-C4.3:Models (e.g., physical, mathematical, computer models) can be used to simulate systems and interactions—including energy, matter, and information flows—within and between systems at different scales. | http://cleanet.org/resources/43159.html |
4.03125 | Artificial Neural Networks/Feed-Forward Networks
Feed-forward neural networks are the simplest form of ANN. Shown below, a feed-forward neural net contains only forward paths. A Multilayer Perceptron (MLP) is an example of a feed-forward neural network. The figure below shows a feed-forward network with four hidden layers.
In a feed-forward system, PEs (processing elements) are arranged into distinct layers, with each layer receiving input from the previous layer and outputting to the next layer. There is no feedback, meaning that signals from one layer are not transmitted back to a previous layer. This can be stated mathematically as: the weight w_{ij} from neuron i to neuron j satisfies w_{ij} = 0 whenever neuron j lies in the same layer as neuron i or in an earlier one.
Weights of direct feedback paths, from a neuron to itself, are zero. Weights from a neuron to a neuron in a previous layer are also zero. Notice that weights for the forward paths may also be zero depending on the specific network architecture, but they do not need to be. A network without all possible forward paths is known as a sparsely connected network, or a non-fully connected network. The percentage of available connections that are utilized is known as the connectivity of the network.
The weights from each neuron in layer l − 1 to the neurons in layer l are arranged into a matrix w_l. Each column corresponds to a neuron in l − 1, and each row corresponds to a neuron in l. The input signal from l − 1 to l is the vector x_l. If ρ_l is a vector of activation functions [σ_1 σ_2 … σ_n] that acts on each row of input and b_l is an arbitrary offset vector (for generalization), then the total output of layer l is given as:

y_l = ρ_l(w_l x_l + b_l)
Two layers of output can be calculated by substituting the output from the first layer into the input of the second layer:

y_2 = ρ_2(w_2 ρ_1(w_1 x_1 + b_1) + b_2)
This method can be continued to calculate the output of a network with an arbitrary number of layers. Notice that as the number of layers increases, so does the complexity of this calculation. Sufficiently large neural networks can quickly become too complex for direct mathematical analysis. | https://en.wikibooks.org/wiki/Artificial_Neural_Networks/Feed-Forward_Networks |
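The layer-by-layer calculation described above can be sketched in a few lines of NumPy. This is a minimal illustration with arbitrary layer sizes and random weights; `sigmoid` stands in for the generic activation functions.

```python
# Minimal forward pass of a feed-forward network: each layer computes
# activation(W @ x + b), and its output becomes the next layer's input.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x, layers):
    """`layers` is a list of (W, b) pairs; returns the network output."""
    for W, b in layers:
        x = sigmoid(W @ x + b)  # output of this layer feeds the next
    return x

rng = np.random.default_rng(0)
layers = [
    (rng.standard_normal((4, 3)), np.zeros(4)),  # 3 inputs -> 4 hidden units
    (rng.standard_normal((2, 4)), np.zeros(2)),  # 4 hidden -> 2 outputs
]
y = forward(np.array([1.0, 0.5, -0.5]), layers)
print(y.shape)  # (2,)
```

Adding layers to the list deepens the network, and the loop makes clear how the cost of the calculation grows with the number of layers.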
4 | The sun flings out solar wind particles in much the same manner as a garden sprinkler throws out water droplets.
The artist's drawing of the solar wind flow was provided courtesy of NASA.
The Spiral of the IMF
The solar wind is formed as the Sun's top layer blows off into space, carrying magnetic fields still attached to the Sun. Gusts form in the solar wind associated with violent events on the Sun.
Particles appear to flow into space as if they are spiraling out from the Sun, as shown in this figure. The figure shows what is referred to as the "spiral angle" of the IMF (interplanetary magnetic field).
For a planet to be affected by a blob of material being ejected by the Sun, the planet must be in the path of the blob.
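The spiral angle mentioned above can be estimated from the Parker spiral relation tan(ψ) = Ωr / v_sw, where Ω is the Sun's rotation rate, r the distance from the Sun, and v_sw the solar wind speed. The 25.4-day rotation period and 400 km/s wind speed used below are assumed typical values, not figures from this page.

```python
# Estimate of the IMF spiral angle via the Parker spiral relation.
# Assumed values: 25.4-day solar rotation, 400 km/s solar wind.
import math

OMEGA = 2 * math.pi / (25.4 * 86400)  # solar rotation rate, rad/s
AU_M = 1.496e11                        # one astronomical unit in metres

def spiral_angle_deg(r_au, wind_speed_km_s):
    """Angle between the IMF and the radial direction, in degrees."""
    tan_psi = OMEGA * (r_au * AU_M) / (wind_speed_km_s * 1e3)
    return math.degrees(math.atan(tan_psi))

# Near Earth (1 AU) the field is wound up to roughly 45 degrees,
# and the winding tightens farther from the Sun.
print(round(spiral_angle_deg(1.0, 400)))  # 47
```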
| https://www.windows2universe.org/glossary/IMF_spiral.html |
4 | |Read below for Marceau's amazing story!|
When I ask students to name someone famous and the first reply I hear is "Kim Kardashian," I die just a little bit inside. Students don't seem to have an understanding of, or appreciation for, the lives of great men and women who changed the course of history.
But biography picture books can help to remedy that.
I finally asked, "And did you include why that quote was so important, considering the person who said it?"
Her reply: "Well, I had heard of him, but I didn't really know who he was."
Regardless of what some might have us believe (the PARCC assessment comes to mind), historical context does, in fact, matter when examining any piece of text, and history is the product of those who made it.
Students therefore need knowledge of heroes of history.
The Tween Tribune article "It's Even Too Cold for Polar Bears!", for example, was summed up as follows:
After some independent practice with longer articles (requiring even greater ability to discern important facts), we were ready to move on to trade books.
You may want to follow along on the assignment guidesheet which you're welcome to download in pdf (or Word) and be sure to grab the blank sheet as well (also available as a Word doc). You'll notice that the instructional steps below differ somewhat from those given to students for their own work.
In their notebooks, students jotted down a list of the 5Ws and 1H (Who, What, Where, When, Why, How) and were asked to listen for those facts as I read the book aloud. I read the majority of the book, stopping to monitor understanding and also to ask if any of our facts had been discovered.
By story's end we had
What: painted pictures that weren't beautiful
Where: New York City
When: early 1900s
Why: to show emotions and power
How: showing scenes of everyday city life
Students knew that this was coming. What textual evidence backed up what we just stated? We found several sentences which might work, and finally settled on just a snippet of one quote, which we placed into a sentence that included both the author and book:
But then I asked, "So what? Why did that matter?" And here's where students begin to see the light. Those people from history who changed the way others think, believe, or act tend to be those worth remembering. In the case of George Bellows, he and other students of Robert Henri went against the traditional belief that the artist's role was to paint what was beautiful.
This led us to construct an opposing viewpoint statement to precede the summary sentence we had already drafted:
Armed with this model, students jotted down the sentence order in their notebooks as a quick reference:
I. Opposing Viewpoint
II. 5Ws and 1H
III. Textual Evidence
I was surprised by students' success with the process. While some, as expected, followed the Bellows model precisely, simply swapping out details as needed, others departed from the model. A couple of students tried switching sentence orders when writing summaries of their second books, while others tried different grammatical structures while maintaining the sentence order we had established.
One student, not thrilled when handed Marcel Marceau: Master of Mime, was amazed to learn that this entertainer played a major role in the French Resistance, and led many Jewish children to safety. His paragraph, which he knew fell far short of paying homage to this unsung hero, reads:
Most surprising to many students was how much they enjoyed reading about people they had never even heard of (many students had already made plans for the next book they wanted to read). The skepticism I witnessed on the first day when distributing books was replaced with enthusiasm by day two of the assignment. And since then, students have been asking to do the assignment again, and many have naturally been begging to read biographies of their own choosing.
In my next post I'll share some possible extension activities, as well as some of the more popular titles which students enjoyed.

http://teachwithpicturebooks.blogspot.com/2014/02/heroes-of-history.html
Voice or voicing is a term used in phonetics and phonology to characterize speech sounds, with sounds described as either voiceless (unvoiced) or voiced.
Voiced and Voiceless Consonants. One problem that many students face in pronunciation is whether a consonant is voiced or voiceless. This guide should help ...
In phonetics, a voiced consonant is a consonant which is pronounced with the vibration of the vocal cords. For example, the sound [z] is a voiced consonant, while the ...
Consonants: voiced and unvoiced. Many consonant sounds come in pairs. For example, P and B are produced in the same place in the mouth with the tongue in the same ...
Consonants. The following table displays and describes the different IPA consonants. Click on a symbol to hear an audio clip. (Note: The audio clips may not ...
This discovery activity can be used to help learners notice the difference between voiced and unvoiced consonants. Begin by asking learners what noise a bee makes.
Voiced definition, having a voice of a specified kind (usually used in combination): ... in English (b) is a voiced consonant Compare voiceless. voice / vɔɪs /
PHONOLOGY: CONSONANTS. All consonants may be classified as either voiced or voiceless. In articulating a voiced consonant, the vocal cords are vibrating.
https://www.search.com/reference/Voiced_consonant
By: Bob Preville
Every electrical technician knows the difference between DC (Direct Current) and AC (Alternating Current). Every electrical technician also realizes the importance of taking accurate current measurements, whether to protect conductors from exceeding their insulation's heat rating or to ensure that devices under power work properly. However, does every electrical technician realize that electrical current measurements aren't always what they appear to be?
Direct Current (DC) is straightforward: when we use a multimeter to measure direct current, it is what it is. The plot thickens, however, when we are dealing with Alternating Current (AC). AC current travels back and forth along a conductor and is best described graphically, most commonly as a sine wave. Because the amplitude of the sine wave changes continuously over the wave period (one complete cycle), a measurement taken at any single instant will differ from one taken a moment later. How, then, do we accurately measure AC current?
One method to measure AC current would be to take current measurements at increments across one complete cycle and average them together. For a perfect sine wave, though, the average over a full cycle is simply zero, because the positive and negative half-cycles cancel. What an average-responding meter actually reports is the average of the rectified (absolute-value) waveform, which for a perfect sine wave is always 0.636 times the peak amplitude.
Another method to measure current is based on the current's ability to perform work when applied to a resistive load. The laws of physics tell us that when current passes through a resistive load, it dissipates energy in the form of heat, mechanical motion, radiation or other forms of energy. If the resistive load is a heating element and the resistive load stays constant, then the laws of physics tell us that the heat produced is directly proportionate to the current passing through the load. Therefore, if we measure the heat, we will know the current.
Mathematically, the relationship between heat and current is such that the heat produced is proportional to the square of the current applied to a resistance.
(Power or Heat) = (Current) ^2 * (Resistance)
If the current is continuously changing, as in AC current, the heat produced is proportional to the average (or mean) of the square of the current applied to a resistance:
(Power or Heat) = Average [ (Current) ^2 * (Resistance) ]
Using algebra, the above formula can be rewritten to read:
Current = Square Root [ (Power or Heat) / (Resistance) ]
AND this is called the Root Mean Square Current or RMS Current.
For AC currents that are graphically represented by a sine wave, the RMS current will always be 0.707 times the peak current. With that said, we can calculate current by multiplying peak measurements by 0.707 if the current is a perfect sine wave. However, perfect sine waves are rare in most commercial and industrial applications. This is because resistive loads in commercial applications are not linear, which results in unpredictable or variable current requirements.
In order to get a True RMS measurement, we can measure the heat dissipated by a constant resistive load and perform the above calculations. The result is a True RMS measurement.
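To see how these approaches compare, here is a minimal Python sketch; the waveforms, sample count, and the 10 A peak value are illustrative assumptions, not figures from the article:

```python
import math

def rectified_average(samples):
    """Mean of the absolute value -- the quantity an average-responding meter senses."""
    return sum(abs(s) for s in samples) / len(samples)

def true_rms(samples):
    """Square root of the mean of the squares -- proportional to the heat
    dissipated in a fixed resistor, per the formulas above."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

n, peak = 10000, 10.0  # one full cycle of a 10 A peak waveform, 10,000 samples
sine = [peak * math.sin(2 * math.pi * k / n) for k in range(n)]
square = [peak if k < n // 2 else -peak for k in range(n)]

print(round(rectified_average(sine) / peak, 4))  # 0.6366, the "0.636" factor
print(round(true_rms(sine) / peak, 4))           # 0.7071, the "0.707" factor

# An average-responding meter scales the rectified average by about 1.11
# (0.707/0.636), a calibration that is only valid for a pure sine wave:
print(round(1.11 * rectified_average(square), 1))  # reads 11.1
print(round(true_rms(square), 1))                  # true value: 10.0
```

For the pure sine wave both methods agree once the 1.11 calibration factor is applied; for the square wave the average-responding reading comes out about 11% high, which is the kind of error that motivates True RMS measurement of distorted waveforms.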
Now that we got all the technical discussion out of the way, which is the best method to calculate current? Should we 1.) measure a current average 2.) multiply current peaks by 0.707 to get an RMS current, or 3.) measure the heat from a resistor and calculate a True RMS current value?
Although Global Test Supply sells multimeters that can calculate current using any of the above methods, the most accurate way to calculate current in my opinion is the True RMS method. Average-derived current values are often as much as 40% less than True RMS values, and that could mean the difference between blown circuit breakers, malfunctioning motors, or, worst case, potential fire hazards. True RMS multimeters only cost about 20-30% more than the alternative. How much is an accurate current reading worth to you?
http://www.streetdirectory.com/travel_guide/15059/gadgets/how_accurate_is_your_multimeter_and_what_is_true_rms.html
Nordic Bronze Age
The Nordic Bronze Age (also Northern Bronze Age) is a period of Scandinavian prehistory from c. 1700–500 BC. The Bronze Age culture of this era succeeded the Late Neolithic Stone Age culture and was followed by the Pre-Roman Iron Age. The archaeological legacy of the Nordic Bronze Age culture is rich, but the ethnic and linguistic affinities of it are unknown, in the absence of written sources. Some scholars also include sites in what is now northern Germany, Pomerania and Estonia in the Baltic region, as part of its cultural sphere.
Even though Scandinavians joined the European Bronze Age cultures fairly late through trade, Scandinavian sites present a rich and well-preserved legacy of bronze and gold objects. These valuable metals were all imported, primarily from Central Europe, but they were often crafted locally, and the craftsmanship and metallurgy of the Nordic Bronze Age were of a high standard. The archaeological legacy also comprises locally crafted wool and wooden objects, and there are many tumuli and rock carving sites from this period, but no written language existed in the Nordic countries during the Bronze Age. The rock carvings have been dated through comparison with depicted artifacts, for example bronze axes and swords. There are also numerous Nordic Stone Age rock carvings; those of northern Scandinavia mostly portray elk.
Thousands of rock carvings from this period depict ships, and the large stone burial monuments known as stone ships suggest that ships and seafaring played an important role in the culture at large. The depicted ships most likely represent sewn plank-built canoes used for warfare, fishing and trade. These ship types may have their origin as far back as the Neolithic period, and they continue into the Pre-Roman Iron Age, as exemplified by the Hjortspring boat.
Oscar Montelius, who coined the term used for the period, divided it into six distinct sub-periods in his piece Om tidsbestämning inom bronsåldern med särskilt avseende på Skandinavien ("On Bronze Age dating with particular focus on Scandinavia") published in 1885, which is still in wide use. His absolute chronology has held up well against radiocarbon dating, with the exception that the period's start is closer to 1700 BC than 1800 BC, as Montelius suggested. For Central Europe a different system developed by Paul Reinecke is commonly used, as each area has its own artifact types and archaeological periods.
A broader subdivision is the Early Bronze Age, between 1700 BC and 1100 BC, and the Late Bronze Age, 1100 BC to 550 BC. These divisions and periods are followed by the Pre-Roman Iron Age.
The Nordic Bronze Age was characterized first by a warm climate that began with a climate change around 2700 BC (comparable to that of present-day central Germany and northern France). The warm climate permitted a relatively dense population and good farming; for example, grapes were grown in Scandinavia at this time. A wetter, colder climate prevailed after a minor change in climate between 850 BC and 760 BC, and a more radical one around 650 BC.
Religion and cult
There is no coherent knowledge about the Nordic Bronze Age religion: its pantheon, world view and how it was practised. Written sources are lacking, but archaeological finds draw a vague and fragmented picture of the religious practices and the nature of the religion of this period. Only some possible sects and certain possible tribes are known. Some of the best clues come from tumuli, elaborate artifacts, votive offerings and rock carvings scattered across Northern Europe.
Many finds indicate a strong sun-worshipping cult in the Nordic Bronze Age, and various animals have been associated with the sun's movement across the sky, including horses, birds, snakes and marine creatures (see also Sól). A female or mother goddess is also believed to have been widely worshipped (see Nerthus). Hieros gamos rites may have been common, and there have been several finds of fertility symbols. A pair of twin gods is believed to have been worshipped, and this is reflected in a duality in all things sacred: where sacrificial artifacts have been buried they are often found in pairs. Sacrifices (animals, weapons, jewellery and humans) often had a strong connection to bodies of water. Boglands, ponds, streams or lakes were often used as ceremonial and holy places for sacrifices, and many artifacts have been found in such locations. Ritual instruments such as bronze lurs have been uncovered, especially in the region of Denmark and western Sweden. Lur horns are also depicted in several rock carvings and are believed to have been used in ceremonies.
Bronze Age rock carvings may contain some of the earliest depictions of well-known gods from the later Norse mythology. A common figure in these rock carvings is that of a male figure carrying what appears to be an axe or hammer. This may have been an early representation of Thor. Other male figures are shown holding a spear. Whether this is a representation of Odin or Týr is not known. It is possible the figure may have been a representation of Tyr, as one example of a Bronze Age rock carving appears to show a figure missing a hand. A figure holding a bow may be an early representation of Ullr. Or it is possible that these figures were not gods at all, but men brandishing the weapons of their culture.
Remnants of the Bronze Age religion and mythology are believed to exist in Germanic mythology and Norse mythology; e.g., Skinfaxi and Hrímfaxi and Nerthus, and it is believed to itself be descended from an older Indo-European prototype.
- Bronze Age Europe
- Bronze Age sword
- Egtved Girl
- The King's Grave
- Stone ships
- Pomeranian culture
- The carvings have been painted in recent times. It is unknown whether they were painted originally. Composite image. Nordic Bronze Age.
- Ling 2008. Elevated Rock Art. GOTARC Serie B. Gothenburg Archaeological Thesis 49. Department of Archaeology and Ancient History, University of Gothenburg, Göteborg, 2008. ISBN 978-91-85245-34-5.
- Dabrowski, J. (1989) Nordischer Kreis und Kulturen polnischer Gebiete. Die Bronzezeit im Ostseegebiet. Ein Rapport der Kgl. Schwedischen Akademie der Literatur, Geschichte und Altertumsforschung über das Julita-Symposium 1986. Ed. Ambrosiani, B. Kungl. Vitterhets Historie och Antikvitets Akademien. Konferenser 22. Stockholm.
- Davidson, H. R. Ellis and Gelling, Peter: The Chariot of the Sun and other Rites and Symbols of the Northern European Bronze Age.
- K. Demakopoulou (ed.), Gods and Heroes of the European Bronze Age, published on the occasion of the exhibition "Gods and Heroes of the Bronze Age. Europe at the Time of Ulysses", from December 19, 1998, to April 5, 1999, at the National Museum of Denmark, Copenhagen, London (1999), ISBN 0-500-01915-0.
- Demougeot, E. La formation de l'Europe et les invasions barbares, Paris: Editions Montaigne, 1969-1974.
- Kaliff, Anders. 2001. Gothic Connections. Contacts between eastern Scandinavia and the southern Baltic coast 1000 BC – 500 AD.
- Montelius, Oscar, 1885. Om tidsbestämning inom bronsåldern med särskilt avseende på Skandinavien.
- Musset, L. Les invasions: les vagues germaniques, Paris: Presses universitaires de France, 1965.

https://en.wikipedia.org/wiki/Nordic_Bronze_Age
Trig without Tears Part 4:
Summary: The six trig functions were originally defined for acute angles in triangles, but now we define them for any angle (or any number). If you want any of the six function values for an angle that’s not between 0 and 90° (π/2), you just find the function value for the reference angle that is within that interval, and then possibly apply a minus sign.
So far we have defined the six trig functions as ratios of sides of a right triangle. In a right triangle, the other two angles must be less than 90°, as suggested by the picture at left.
Suppose you draw the triangle in a circle this way, with angle A at the origin and the circle’s radius equal to the hypotenuse of the triangle. The hypotenuse ends at the point on the circle with coordinates (x,y), where x and y are the lengths of the two legs of the triangle. Then using the standard definitions of the trig functions, you have
sin A = opposite/hypotenuse = y/r
cos A = adjacent/hypotenuse = x/r
This is the key to extending the trig functions to any angle.
The trig functions had their roots in measuring sides of triangles, and chords of a circle (which is practically the same thing). If we think about an angle in a circle, we can extend the trig functions to work for any angle.
In the diagram, the general angle A is drawn in standard position, just as we did above for an acute angle. Just as before, its vertex is at the origin and its initial side lies along the positive x axis. The point where the terminal side of the angle cuts the circle is labeled (x,y).
(This particular angle happens to be between 90° and 180° (π/2 and π), and we say it lies in Quadrant II. But you could draw a similar diagram for any angle, even a negative angle or one >360°.)
Now let’s define sine and cosine of angle A, in terms of the coordinates (x,y) and the radius r of the circle:
(21) sin A = y/r, cos A = x/r
This is nothing new. As you saw above when A was in Quadrant I, this is exactly the definition you already know from equation 1: sin A = opposite/hypotenuse, cos A = adjacent/hypotenuse. We’re just extending it to work for any angle.
The other function definitions don’t change at all. From equation 3 we still have
tan A = sin A / cos A
which means that
tan A = y/x
and the other three functions are still defined as reciprocals (equation 5).
Once again, there’s nothing new here: we’ve just extended the original definitions to a larger domain.
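The extended definitions are easy to check numerically; in this sketch (the helper name is mine, not from the text) the functions are computed straight from a point's coordinates and compared against the standard library:

```python
import math

def trig_from_point(x, y):
    """sin, cos, tan of the angle whose terminal side passes through (x, y),
    straight from the extended definitions: sin A = y/r, cos A = x/r, tan A = y/x."""
    r = math.hypot(x, y)  # the radius is always taken as positive
    return y / r, x / r, y / x

# A point on the terminal side of a 150-degree angle (Quadrant II):
s, c, t = trig_from_point(-math.sqrt(3) / 2, 1 / 2)
A = math.radians(150)
print(round(s, 4), round(c, 4), round(t, 4))  # 0.5 -0.866 -0.5774
assert math.isclose(s, math.sin(A)) and math.isclose(c, math.cos(A))
assert math.isclose(t, math.tan(A))
```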
So why go through this? Well, for openers, not every triangle is an acute triangle. Some have an angle greater than 90°. Even in down-to-earth physical triangles, you’ll have to be concerned with functions of angles greater than 90°.
Beyond that, it turns out that all kinds of physical processes vary in terms of sines and cosines as functions of time: height of the tide; length of the day over the course of a year; vibrations of a spring, or of atoms, or of electrons in atoms; voltage and current in an AC circuit; pressure of sound waves. Nearly every periodic process can be described in terms of sines and cosines.
And that leads to a subtle shift of emphasis. You started out thinking of trig functions of angles, but really the domain of trig functions is all real numbers, just like most other functions. How can this be? Well, when you think of an “angle” of so-and-so many radians, actually that’s just a pure number. For instance, 30°=π/6. We customarily say “radians” just to distinguish from degrees, but really π/6 is a pure number. When you take sin(π/6), you’re actually evaluating the function sin(x) at x = π/6 (about 0.52), even though traditionally you’re taught to think of π/6 as an angle.
We won’t get too far into that in these pages, but here’s an example. If the average water depth is 8 ft in a certain harbor, and the tide varies by ±3 ft, then the height at time t is given by a function that resembles y = 8 + 3 cos(0.52t). (It’s actually more complicated, because high tides don’t come at the same time every day, but that’s the idea.)
Coming back from philosophy to the nitty-gritty of computation, how do we find the value of a function when the angle (or number) is outside the range [0;90°] (which is 0 to π/2)? The key is to define a reference angle.
Here’s the same picture of angle A again, but with its reference angle added. With angle A in standard position, the reference angle is the acute angle between the terminal side of A and the positive or negative x axis. In this case, angle A is in Q II, the reference angle is 180°−A (π−A). Why? Because the two angles together equal 180° (π).
What good does the reference angle do you? Simply this: the six function values for any angle equal the function values for its reference angle, give or take a minus sign.
That’s an incredibly powerful statement, if you think about it. In the drawing, A is about 150° and the reference angle is therefore about 30°. Let’s say they’re exactly 150° and 30°, just for discussion. Then sine, cosine, tangent, cotangent, secant, and cosecant of 150° are equal to those same functions of 30°, give or take a minus sign.
What’s this “give or take” business? That’s what the next section is about.
Remember the extended definitions from equation 21:
sin A = y/r, cos A = x/r
The radius r is always taken as positive, and therefore the signs of sine and cosine are the same as the signs of y and x. But you know which quadrants have positive or negative y and x, so you know for which angles (or numbers) the sine and cosine are positive or negative. And since the other functions are defined in terms of the sine and cosine, you also know where they are positive or negative.
Spend a few minutes thinking about it, and draw some sketches. For instance, is cos 300° positive or negative? Answer: 300° is in Q IV, which is in the right-hand half of the circle. Therefore x is positive, and the cosine must be positive as well. The reference angle is 60° (draw it!), so cos 300° equals cos 60° and not −cos 60°.
You can check your thinking against the chart that follows. Whatever you do, don’t memorize the chart! Its purpose is to show you how to reason out the signs of the function values whenever you need them, not to make you waste storage space in your brain.
|Signs of Function Values|
|Quadrant|Angles|sin|cos|tan|
|Q I|0 to 90° (0 to π/2)|+|+|+|
|Q II|90 to 180° (π/2 to π)|+|−|−|
|Q III|180 to 270° (π to 3π/2)|−|−|+|
|Q IV|270 to 360° (3π/2 to 2π)|−|+|+|
What about other angles? Well, 420° = 360°+60°, and therefore 420° ends in the same position in the circle as 60°—it’s just going once around the circle and then an additional 60°. So 420° is in Q I, just like 60°.
You can analyze negative angles the same way. Take −45°. That occupies the same place on the circle as +315° (360°−45°). −45° is in Q IV.
As you’ve seen, for any function you get the numeric value by considering the reference angle and the positive or negative sign by looking where the angle is.
Example: What’s cos 240°? Solution: Draw the angle and see that the reference angle is 60°; remember that the reference angle always goes to the x axis, even if the y axis is closer. cos 60° = ½, and therefore cos 240° will be ½, give or take a minus sign. The angle is in Q III, where x is negative, and therefore cos 240° is negative. Answer: cos 240° = −½.
Example: What’s tan(−225°)? Solution: Draw the angle and find the reference angle of 45°. tan 45° = 1. But −225° is in Q II, where x is negative and y is positive; therefore y/x is negative. Answer: tan(−225°) = −1.
The techniques we worked out above can be generalized into a set of identities. For instance, if two angles are supplements then you can write one as A and the other as 180°−A. You know that one will be in Q I and the other in Q II, and you also know that one will be the reference angle of the other. Therefore you know at once that the sines of the two angles will be equal, and the cosines of the two will be numerically equal but have opposite signs.
This diagram may help:
Here you see a unit circle (r = 1) with four identical triangles. Their angles A are at the origin, arranged so that they’re mirror images of each other, and their hypotenuses form radii of the unit circle. Look at the triangle in Quadrant I. Since its hypotenuse is 1, its other two sides are cos A and sin A.
The other three triangles are the same size as the first so their sides must be the same length as the sides of the first triangle. But you can also look at the other three radii as belonging to angles 180°−A in Quadrant II, 180°+A in Quadrant III, and −A or 360°−A in Quadrant IV. All the others have a reference angle equal to A. From the symmetry, you can immediately see things like sin(180°+A) = −sin A and cos(−A) = cos A.
The relations are summarized below. Don’t memorize them! Just draw a diagram whenever you need them—it’s easiest if you use a hypotenuse of 1. Soon you’ll find that you can quickly visualize the triangles in your mind and you won’t even need to draw a diagram. The identities for tangent are easy to derive: just divide sine by cosine as usual.
(22)
|sin(180°−A) = sin A|cos(180°−A) = −cos A|tan(180°−A) = −tan A|
|sin(π−A) = sin A|cos(π−A) = −cos A|tan(π−A) = −tan A|
|sin(180°+A) = −sin A|cos(180°+A) = −cos A|tan(180°+A) = tan A|
|sin(π+A) = −sin A|cos(π+A) = −cos A|tan(π+A) = tan A|
|sin(−A) = −sin A|cos(−A) = cos A|tan(−A) = −tan A|
The formulas for negative angles of the other functions drop right out of the definitions equation 3 and equation 5, since you already know the formulas equation 22 for sine and cosine of negative angles. For instance, csc(−A) = 1 / sin(−A) = 1 / −sin A = −1 / sin A = −csc A.
(23) cot(−A) = −cot A
sec(−A) = sec A
csc(−A) = −csc A
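Equations 22 and 23 can be spot-checked numerically; in this sketch the angle choices are arbitrary, and the reciprocal definitions stand in for cot, sec, and csc:

```python
import math

def close(a, b):
    return math.isclose(a, b, rel_tol=1e-9, abs_tol=1e-9)

# Spot-check equations 22 and 23 at several arbitrary angles,
# avoiding multiples of 90 degrees where tan/cot/sec/csc blow up.
for deg in (10, 37, 75, 123, 202, 341):
    A = math.radians(deg)
    assert close(math.sin(math.pi - A),  math.sin(A))    # sin(180°−A) = sin A
    assert close(math.cos(math.pi - A), -math.cos(A))    # cos(180°−A) = −cos A
    assert close(math.tan(math.pi + A),  math.tan(A))    # tan(180°+A) = tan A
    assert close(math.sin(-A), -math.sin(A))
    assert close(math.cos(-A),  math.cos(A))
    # equation 23, via the reciprocal definitions:
    assert close(1 / math.tan(-A), -1 / math.tan(A))     # cot(−A) = −cot A
    assert close(1 / math.cos(-A),  1 / math.cos(A))     # sec(−A) = sec A
    assert close(1 / math.sin(-A), -1 / math.sin(A))     # csc(−A) = −csc A
print("all identities verified")
```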
You can reason out things like whether sec(180°−A) equals sec A or −sec A: just apply the definition and use what you already know about cos(180°−A).
You should be able to see that 360° brings you all the way around the circle. That means that an angle of 360°+A or 2π+A is the same as angle A. Therefore the function values are unchanged when you add 360° or a multiple of 360° (or 2π or a multiple) to the angle. Also, if you move in the opposite direction for angle A, that’s the same angle as 360°−A or 2π−A, so the function values of −A and 360°−A (or 2π−A) are the same.
For this reason we say that sine and cosine are periodic functions with a period of 360° or 2π. Their values repeat over and over again. Of course secant and cosecant, being reciprocals of cosine and sine, must have the same period.
What about tangent and cotangent? They are periodic too, but their period is 180° or π: they repeat twice as fast as the others. You can see this from equation 22: tan(180°+A) = tan A says that the function values repeat every 180°.
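The claimed periods can be confirmed numerically at a few arbitrary sample points:

```python
import math

two_pi = 2 * math.pi
for x in (0.3, 1.7, 4.2):            # arbitrary sample points
    # sine and cosine repeat every 2*pi ...
    assert math.isclose(math.sin(x + two_pi), math.sin(x), abs_tol=1e-9)
    assert math.isclose(math.cos(x + two_pi), math.cos(x), abs_tol=1e-9)
    # ... but tangent already repeats after only pi:
    assert math.isclose(math.tan(x + math.pi), math.tan(x), abs_tol=1e-9)
print("period of sin/cos is 2*pi; period of tan is pi")
```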
next: 5/Solving Triangles
Updates and new info: http://BrownMath.com/twt/

http://brownmath.com/twt/refangle.htm
Arrays Teacher Resources
Find Arrays educational ideas and activities
Make Multiplication and Division Facts From Arrays #1
How can an array be represented by a number sentence? Young mathematicians work to answer this question, writing two division and two multiplication equations for each given array. With answers provided, this is a quick practice...
3rd - 5th Math
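The array-to-fact-family idea that this resource practices can be sketched in code; the `fact_family` helper and the 3×4 example are hypothetical illustrations, not taken from the worksheet:

```python
def fact_family(rows, cols):
    """The two multiplication and two division sentences that a single
    rows-by-cols array represents."""
    total = rows * cols
    return [
        f"{rows} x {cols} = {total}",
        f"{cols} x {rows} = {total}",
        f"{total} / {rows} = {cols}",
        f"{total} / {cols} = {rows}",
    ]

for sentence in fact_family(3, 4):
    print(sentence)  # 3 x 4 = 12, 4 x 3 = 12, 12 / 3 = 4, 12 / 4 = 3
```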
Find Area Using Multiplication in Real World Problems
Learning math provides people with the tools to solve life's everyday problems. Show young learners how to apply their understanding of area in real-world contexts with the second video in this series. Multiple story problems are...
3 mins 2nd - 4th Math CCSS: Designed

http://www.lessonplanet.com/lesson-plans/arrays/4
Definition of Iron
Iron: An essential mineral. Iron is necessary for the transport of oxygen (via hemoglobin in red blood cells) and for oxidation by cells (via cytochrome). Deficiency of iron is a common cause of anemia. Food sources of iron include meat, poultry, eggs, vegetables and cereals (especially those fortified with iron). According to the National Academy of Sciences, the Recommended Dietary Allowances of iron for women ages 19 to 50 is 18 milligrams per day and for men ages 19+, 8 milligrams per day. Iron overload can damage the heart, liver, gonads and other organs. Iron overload is a particular risk in people who may have certain genetic conditions (hemochromatosis), sometimes without knowing it, and also in people receiving recurrent blood transfusions. Iron supplements meant for adults (such as pregnant women) are a major cause of poisoning in children. Source: MedTerms™ Medical Dictionary
Last Editorial Review: 10/30/2013
http://www.emedicinehealth.com/script/main/art.asp?articlekey=4046
The upper class in modern societies is the social class composed of the wealthiest members of society, who also wield the greatest political power. According to this view, the upper class is generally contained within the wealthiest 1-2% of the population, and is distinguished by immense wealth (in the form of estates) which is passed on from generation to generation. The wealthiest one percent of the American population accounts for thirty-four percent of the cumulative national wealth.
Because the upper classes of a society may no longer rule the society in which they live, they are often referred to as the old upper classes, and they are often culturally distinct from the newly rich middle classes that tend to dominate public life in modern social democracies. According to the latter view, held by the traditional upper classes, no amount of individual wealth or fame would make a person from an undistinguished background into a member of the upper class: one must be born into a family of that class and raised in a particular manner so as to understand and share upper-class values, traditions, and cultural norms. The term is often used in conjunction with terms like "upper-middle class," "middle class," and "working class" as part of a model of social stratification.
Historically in some cultures, members of an upper class often did not have to work for a living, as they were supported by earned or inherited investments (often real estate), although members of the upper class may have had less actual money than merchants. Upper- class status commonly derived from the social position of one's family and not from one's own achievements or wealth. Much of the population that composed the upper class consisted of aristocrats, ruling families, titled people, and religious hierarchs. These people were usually born into their status and historically there was not much movement across class boundaries. This is to say that it was much harder for an individual to move up in class simply because of the structure of society.
In many countries the term "upper class" was intimately associated with hereditary land ownership. Political power was often in the hands of the landowners in many pre-industrial societies despite there being no legal barriers to land ownership for other social classes. Upper-class landowners in Europe were often also members of the titled nobility, though not necessarily: the prevalence of titles of nobility varied widely from country to country. Some upper classes were almost entirely untitled, for example, the Szlachta of the Polish-Lithuanian Commonwealth.
In England, Wales, Scotland, and Ireland, the "upper class" traditionally comprised the landed gentry and the aristocracy of noble families with hereditary titles. The vast majority of post-medieval aristocratic families originated in the merchant class and were ennobled between the 14th and 19th centuries while intermarrying with the old nobility and gentry. Since the Second World War, the term has come to encompass rich and powerful members of the managerial and professional classes as well. Members of the English gentry organized the colonization of Virginia and New England and ruled these colonies for generations forming the foundation of the American upper class or East Coast Elite.
See main article: American upper class. In the United States the upper class, as distinguished from the rich, is often considered to consist of those families that have for many generations enjoyed top social status based on their leadership in society and their distinctive culture derived from their upper-class ancestors in the colonial gentry. In this respect the US differs little from countries such as the UK, where membership of the 'upper class' is also dependent on other factors. In the United Kingdom it has been said that class is relative to where you have come from, similar to the United States, where class is defined more by who you are than by how much you have; that is, in the UK and the US people are born into the upper class. The American upper class is estimated to constitute less than 1% of the population. By self-identification, according to this 2001-2012 Gallup Poll data, 98% of Americans identify with the 5 other class terms used, 48-50% identifying as "middle class."
The main distinguishing feature of the upper class is its ability to derive enormous incomes from wealth through techniques such as money management and investing, rather than engaging in wage-labor or salaried employment. Successful entrepreneurs, CEOs, politicians, investment bankers, venture capitalists, stockbrokers, heirs to fortunes, some lawyers, top-flight physicians, and celebrities are considered members of this class by contemporary sociologists, such as James Henslin or Dennis Gilbert. There may be prestige differences between different upper-class households. An A-list actor, for example, might not be accorded as much prestige as a former U.S. President, yet all members of this class are so influential and wealthy as to be considered members of the upper class. At the pinnacle of U.S. wealth, 2004 saw a dramatic increase in the number of billionaires. According to Forbes Magazine, there are now 374 U.S. billionaires. The growth in billionaires took a dramatic leap since the early 1980s, when the average net worth of the individuals on the Forbes 400 list was $400 million. Today, the average net worth is $2.8 billion. The Wal-Mart Walton family now has 771,287 times more wealth than the median U.S. household (Collins and Yeskel 322).
Since the 1970s income inequality in the United States has been increasing, with the top 1% experiencing significantly larger gains in income than the rest of society. Alan Greenspan, former chair of the Federal Reserve, sees it as a problem for society, calling it a "very disturbing trend."
According to the book Who Rules America?, by William Domhoff, the distribution of wealth in America is the primary highlight of the influence of the upper class. The top 1% of Americans own around 34% of the wealth in the U.S. while the bottom 80% own only approximately 16% of the wealth. This large disparity displays the unequal distribution of wealth in America in absolute terms.
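A quick back-of-the-envelope calculation makes this disparity concrete. The Python sketch below uses only the shares quoted above (34% of wealth held by the top 1% of households, 16% held by the bottom 80%) and computes the implied ratio of average wealth per household between the two groups:

```python
def per_capita_ratio(top_share, top_pop, bottom_share, bottom_pop):
    """Ratio of average wealth per household between two groups, given
    each group's share of total wealth and its share of the population."""
    return (top_share / top_pop) / (bottom_share / bottom_pop)

# Top 1% hold ~34% of total wealth; bottom 80% hold ~16%.
ratio = per_capita_ratio(0.34, 0.01, 0.16, 0.80)
print(round(ratio))  # 170
```

In other words, the average top-1% household holds roughly 170 times the wealth of the average bottom-80% household.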
| http://everything.explained.today/Upper_class/ |
THE SEASONS arise because the Earth (white, green & blue striped sphere) is tilted on its axis (yellow pole through Earth) and this tilt is maintained throughout the Earth's orbit (shown in purple) around the Sun (yellow sphere in the centre). Consequently, the northern and southern hemispheres receive different amounts of sunlight throughout the year.

At the start of the animation (which is viewed from slightly north of the plane of the Earth's orbit around the Sun) the Earth is tilted so that the northern hemisphere receives most light (you can see the letter "N" above the North Pole inclined towards the sun). This position corresponds to the northern mid-summer or summer solstice and the southern mid-winter. At this point, the northern hemisphere experiences its longest day and the southern hemisphere its shortest day.

As the animation progresses, the Earth moves in an anti-clockwise direction (as viewed from this vantage) to the equinox at the middle front of the orbit. The equinox is the point of equal day and night (from the Latin for "equal night"). At this point, the tilt of the Earth is directed at right angles to the sun. The Earth continues around to the right of the picture where the tilt is again maximal with respect to the sun. This time, however, the southern hemisphere is maximally pointed towards the sun (you can see the letter "S" below the South Pole is now inclined towards the sun). This corresponds to the southern mid-summer and northern mid-winter (solstice). At this point the southern hemisphere experiences its longest day and the northern hemisphere its shortest day.

As the Earth progresses in its orbit around the sun, it passes through another equinox before completing the circuit at the northern mid-summer. This oscillating level of sunlight is heavily imprinted on nature as temperature and day length vary.
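The oscillating sunlight described above can be approximated with a standard formula for solar declination (the latitude at which the noon sun is directly overhead). This is an illustrative sketch only: the 23.44° figure is Earth's axial tilt, and treating day 81 of the year as the March equinox is a simplifying assumption.

```python
import math

TILT_DEG = 23.44  # Earth's axial tilt in degrees

def solar_declination_deg(day_of_year):
    """Approximate solar declination for a given day of the year.
    Positive values mean the sun favours the northern hemisphere."""
    return TILT_DEG * math.sin(math.radians(360.0 / 365.0 * (day_of_year - 81)))

print(round(solar_declination_deg(81), 1))   #  0.0  -> March equinox: equal day and night
print(round(solar_declination_deg(172), 1))  #  23.4 -> June solstice: northern mid-summer
print(round(solar_declination_deg(355), 1))  # -23.4 -> December solstice: southern mid-summer
```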
In this movie, the Earth is divided into coloured bands: | http://www.rkm.com.au/ANIMATIONS/animation-seasons.html |
A program is like a recipe. It contains a list of ingredients (called variables) and a list of directions (called statements) that tell the computer what to do with the variables. The variables can represent numeric data, text, or graphical images.
There are many programming languages -- C, C++, Pascal, BASIC, FORTRAN, COBOL, and LISP are just a few. These are all high-level languages. One can also write programs in low-level languages called assembly languages, although this is more difficult. Low-level languages are closer to the language used by a computer, while high-level languages are closer to human languages.
When you buy software, you normally buy an executable version of a program. This means that the program is already in machine language -- it has already been compiled and assembled and is ready to execute.
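To make the recipe analogy concrete, here is a minimal sketch in Python, a high-level language. Everything in it (the dish, the quantities, the function name) is invented for the example:

```python
# "Ingredients": variables holding numeric and text data.
servings = 4
dish_name = "pancakes"
flour_per_serving_g = 60

# "Directions": statements that tell the computer what to do with them.
def total_flour(servings, per_serving):
    """Scale an ingredient amount to the number of servings."""
    return servings * per_serving

message = f"{dish_name}: use {total_flour(servings, flour_per_serving_g)} g of flour"
print(message)  # pancakes: use 240 g of flour
```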
(v) To write programs.
| http://www.webopedia.com/TERM/P/program.html |
Prions, abnormally folded proteins associated with several bizarre human diseases, may hold the key to a major mystery in evolution: how survival skills that require multiple genetic changes arise all at once when each genetic change by itself would be unsuccessful and even harmful.
In a study in the September 28, 2000, issue of Nature, researchers at the Howard Hughes Medical Institute at the University of Chicago describe a prion-dependent mechanism that seems perfectly suited to solving this dilemma, at least for yeast. It allows yeast to stockpile an arsenal of genetic variation and then release it to express a host of novel characteristics, including the ability to grow well in altered environments.
"We found that a heritable genetic element based on protein folding, not encoded in DNA or RNA, allows yeast to acquire many silent changes in their genome and suddenly reveal them," said Susan Lindquist, Ph.D., professor of molecular genetics and cell biology at the University of Chicago, Howard Hughes Investigator and principal author of the study.
There are thousands of proteins in every cell and each one has to fold into just the right shape in order to function. In prion diseases, which include mad cow disease and Creutzfeldt-Jakob disease, a normal cell protein, PrP, assumes an abnormal shape.
Mis-folded proteins are usually just degraded, but the prion protein causes other PrP proteins to mis-fold, too, creating a protein-folding chain reaction. Thus, they act as infectious agents. As more and more of the proteins fold into the prion shape, they form inactive aggregates which lead to dysfunction and disease.
A few years ago geneticists made the startling discovery that yeast, the organism found in bread and beer, has prions, too. Yeast prions are unrelated to the mammalian prions, and don’t harm humans or yeast. They do, however, have the unusual property of mis-folding in the same peculiar way and spreading their change in shape from one protein to another. Mother cells pass these proteins to their daughters, so the change, once it occurs, is inherited from generation to generation.
Because yeast prions act much like mammalian prions and are easier to study, scientists hope they will offer clues about how these mis-folding chain reactions get started and how they might be stopped.
But the real puzzle is why these things exist in yeast cells in the first place. University of Chicago researchers appear to have found the answer, and it has broad and unexpected implications: the yeast prion seems to play an adaptive role and may greatly influence evolutionary processes.
The prion protein they studied is called Sup35. It normally ensures that yeast faithfully translate the genetic code. Specifically, Sup35 recognizes special signals that tell the entire protein production machinery to stop when it is supposed to stop.
Sup35 doesn't function in its prion state. As a result, the protein production machinery runs right through the "stop signs." This means that usually silent regions of the genetic code are suddenly expressed. Because these regions are normally not expressed, they don't face selective pressures that prevent mutations from accumulating. The prion therefore uncovers, all at once, a wealth of previously hidden genetic mutations and creates a completely new set of growth properties. Suddenly cells change the kind of food they eat, change their resistance to antibiotics and even grow colonies with completely different shapes.
In some cases the prion may simply cause the protein production machinery to read through the "stop sign" at the end of a normal gene. This would create a protein whose function is altered by the addition of a new tail.
In other cases the cell machinery may produce a completely new protein from a mutated gene that is not ordinarily translated because it contains a stop signal.
The key to its effect is the stable inheritance of the prion state and the normal state. A spontaneous switch between the two states occurs approximately once in a million generations. Because a yeast colony produces a new generation every two hours, in a short time a colony will produce some members that have switched their state.
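A rough calculation shows why the switch, though rare per cell, matters at colony scale. The sketch below uses the one-in-a-million switch rate and the two-hour generation time quoted here; the colony size of 100 million cells is an illustrative assumption, not a figure from the study:

```python
def expected_switchers(colony_size, switch_rate=1e-6):
    """Expected number of cells that switch prion state in one generation."""
    return colony_size * switch_rate

# In an illustrative colony of 100 million cells:
per_generation = expected_switchers(100_000_000)
# At one generation every 2 hours, that is 12 generations per day:
per_day = per_generation * (24 / 2)
print(per_generation, per_day)  # 100.0 1200.0
```

So even a very rare per-cell event produces a steady stream of switched cells in any sizable colony.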
"It’s an ‘all or nothing’ switch, with the changes immediately inherited by all the progeny," said Lindquist. "But because the cell maintains the ability to switch back, the prion switch allows cells to occupy a new niche without losing their capacity to occupy the old."
The researchers exposed seven distinct genetic strains of yeast in their prion and non-prion states to 150 different growth conditions. The prion-positive state had a substantial effect on the growth of the yeast in nearly half of the conditions tested. In more than 25 percent of these cases its effects were positive. The incredible diversity of the advantages conveyed by the prions indicated that each strain had different novel genes turned on in its prion-positive state.
This prion switch is conserved in yeast across very distantly related genetic strains. Though the switch may have evolved as an accidental consequence of a shape change in an unimportant functioning part of the Sup35, its conservation suggests an evolutionary advantage.
"It may be that the prion switch offers yeast a way to respond to commonly fluctuating environments," said Lindquist. "During its evolution S. cerevisiae (brewers’ yeast) must have met with such erratic environments that it needed to maintain a global mechanism for exploiting genome-wide variation."
By providing yeast with a way to respond to fluctuating environments, the prion switch may offer a significant evolutionary advantage.
"Though we haven’t shown it yet, selective pressure should operate to ‘fix’ the advantageous genes, which could then be read and translated at all times," said Lindquist.
Prion mechanisms could be more common than previously suspected and exert an important influence on the rates and mechanisms of evolutionary change.
"We need to expand our understanding of inheritance," said Lindquist. "It involves much more than a certain nucleic acid sequence of DNA."
Susan L. Lindquist is the Albert D. Lasker Professor of Medical Sciences, Department of Molecular Genetics & Cell Biology at the University of Chicago and a Howard Hughes Medical Institute Investigator. Her co-author is Heather L. True, a Fellow in the Department of Molecular Genetics & Cell Biology at the University of Chicago.
The above post is reprinted from materials provided by University Of Chicago Medical Center. Note: Materials may be edited for content and length.
| https://www.sciencedaily.com/releases/2000/09/000928070638.htm |
Graphing linear inequalities
When you are graphing inequalities, you graph the ordinary linear function just as we have done before. The difference is that the solution to the inequality is not the drawn line itself but the area of the coordinate plane that satisfies the inequality.
The linear inequality divides the coordinate plane into two halves by a boundary line (the line that corresponds to the function). One side of the boundary line contains all solutions to the inequality.
The boundary line is dashed for > and < and solid for ≥ and ≤.
y ≤ 2x - 4
Here you can see that one side is colored grey and the other side is colored white. To determine which side represents y ≤ 2x - 4, test a point.
We test the point (3, 0), which is on the grey side.
$$0\leq 2\cdot 3-4$$
The grey side is therefore the side that represents the inequality y ≤ 2x - 4.
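The test-point check used above is mechanical enough to automate. A minimal Python sketch:

```python
def satisfies(x, y):
    """Return True if the point (x, y) is a solution of y <= 2x - 4."""
    return y <= 2 * x - 4

# The test point from the example, on the grey side:
print(satisfies(3, 0))  # True, since 0 <= 2*3 - 4 = 2
# A point on the white side fails the test:
print(satisfies(0, 3))  # False, since 3 > 2*0 - 4 = -4
```

Any point that returns True lies in the shaded solution region (or on the solid boundary line itself).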
Graph the inequality y > 2 - 2x | http://www.mathplanet.com/education/pre-algebra/graphing-and-functions/graphing-linear-inequalities |
Activity 4: Story - We Are All One
Activity time: 10 minutes
Materials for Activity
Preparation for Activity
- Read through the story a few times. If at all possible, consider telling the story rather than reading it. Practice telling it aloud. Try changing your voice when you are speaking as the ant or the centipede.
Description of Activity
Before you begin telling the story, "We Are All One," look around the room and make eye contact with each person.
Including All Participants
There are children for whom it is very difficult to sit still, even when they are paying attention to what is happening around them. This can be frustrating for teachers, as well as for the children who find themselves in situations where they are expected to maintain stillness for prolonged periods of time. If there are children in the group for whom this is the case, consider adopting the use of "fidget objects" as described in the Leader Resources section. Fidget objects can provide a non-disruptive outlet for a child's need to move. | http://www.uua.org/re/tapestry/children/tales/session1/123100.shtml |
Leveled Literacy Intervention: Overview
The Fountas & Pinnell Leveled Literacy Intervention System (LLI) is a small-group, supplementary literacy intervention designed to help teachers provide powerful, daily, small-group instruction for the lowest achieving students at their grade level. Through systematically designed lessons and original, engaging leveled books, LLI supports learning in both reading and writing, helps students expand their knowledge of language and words and how they work. The goal of LLI is to bring students to grade level achievement in reading.
Lessons across the seven systems progress from level A (beginning reading in kindergarten) through level Z (which represents competencies at the middle and secondary school levels) on the F&P Text Level Gradient™.
LLI is designed to be used with small groups of students who need intensive support to achieve grade-level competency.
Each Level of LLI provides:
- Combination of reading, writing, and phonics/word study.
- Emphasis on teaching for comprehending strategies.
- Explicit attention to genre and to the features of nonfiction and fiction texts.
- Special attention to disciplinary reading, literature inquiry, and writing about reading.
- Specific work on sounds, letters, and words in activities designed to help students notice the details of written language and learn how words "work."
- Close reading to deepen and expand comprehension.
- Explicit teaching of effective and efficient strategies for expanding vocabulary.
- Explicit teaching for fluent and phrased reading.
- Use of writing about reading for the purpose of communicating and learning how to express ideas for a particular purpose and audience using a variety of writing strategies.
- Built-in level-by-level descriptions and competencies from The Continuum of Literacy Learning, PreK-8 (2011) to monitor student progress and guide teaching.
- Communication tools for informing parents about what children are learning and how they can support them at home.
- Technology support for assessment, record keeping, lesson instruction, and home and classroom connections.
- Detailed analysis of the characteristics of text difficulty for each book.
Explore the resources below to learn more about Leveled Literacy Intervention and how it turns struggling readers into successful readers!
- System levels and components
- LLI Little Books
- Frequently Asked Questions (FAQs)
- Research & Efficacy Study
- LLI user’s forum discussions
- LLI awareness events
- Product updates
- Request a sampler
- Ordering Information
– Orange System Levels A-C (Kindergarten)
– Orange Booster Packs: Levels D and E
– Green System, Levels A-J (Grade 1)
– Green Booster Packs: Level K
– Blue System, Levels C-N (Grade 2)
– Red System, Levels L-Q (Grade 3)
– Gold System, Grade 4 (Levels O–T)
– Purple System, Levels R-W (Grade 5)
– NEW! Teal System, Grade 6-12 (Levels U–Z)
Irene C. Fountas & Gay Su Pinnell
| http://www.heinemann.com/fountasandpinnell/lli_overview.aspx |
Uh oh. There's a chemistry test coming up and your teacher wants you to memorize the entire periodic table of the elements. Great. But luckily, with a bit of time and dedication, you can make recalling the table like recalling the alphabet. It'll be as easy as A, B, C!
1. Print out a copy of the periodic table. This will be your Bible for the next couple of weeks. Wherever you go, it will go with you. It's advisable to print out more than one copy. You can highlight and code one however you want and use the next to start over or check if your devices have worked.
- Print out a copy. Then, especially if you're a visual or kinesthetic learner, copy it down yourself. It's easier to know the ins and outs of something you've done yourself; the chart will seem less foreign if it's made by you.
2. Break down the table into smaller sections to learn it. Most charts are already divided by color and type of element, but if that's not working for you, find your own way. You could go by row, column, atomic weight, or simply easiest to hardest. Find patterns that stick out to you.
3. Zap into your free time. Try learning the periodic table when not much else can be done, e.g. traveling by public transport or just waiting in line for something. If you don't have the chart handy (which you should), go over it in your head, concentrating on the ones that are eluding your memory.
- Stick with it! Learn a few every day and always review the old ones! If you don't review and quiz yourself, you will forget.
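If no one is around to drill you, a few lines of Python make a serviceable flashcard quiz. This is just a sketch: the six elements in the dictionary are a sample, and you would extend it to whichever symbols you are currently learning.

```python
import random

# Symbol -> name pairs to drill; extend as you learn more of the table.
ELEMENTS = {"H": "Hydrogen", "Au": "Gold", "Ag": "Silver",
            "Fe": "Iron", "Na": "Sodium", "W": "Tungsten"}

def check(symbol, guess):
    """Return True if `guess` names the element with this symbol."""
    return ELEMENTS.get(symbol, "").lower() == guess.strip().lower()

def quiz(n=3):
    """Ask n random symbol -> name questions at the terminal."""
    score = 0
    for symbol in random.sample(list(ELEMENTS), n):
        if check(symbol, input(f"What element is '{symbol}'? ")):
            score += 1
    print(f"{score}/{n} correct")

print(check("Au", " gold "))  # True -- answers are case- and space-insensitive
```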
1. Create associations. For each element, memorize a short slogan, story or fact that is related to the metal you need to memorize the symbol for. For example, Argentina was named after the metal silver (Argentum -- Ag) because when the Spanish landed there, they thought that the country had lots of silver.
- Sometimes, you might make something funny to remember the element -- for example," 'EY! YOU! Give me back my GOLD!" could help as well since the symbol for gold is Au.
2. Go for mnemonic devices. That means you'll be using words to associate with each element. They often come in strings or rhymes. Lilly's NAna Kills RuBbish CreatureS FRanticly is an example of a mnemonic device to help remember the alkali metals.
- Ignore the easy ones. You're probably pretty confident that hydrogen is "H." Concentrate on the ones that are giving you grief. Here's an example: Darmstadtium is "Ds," right? If you want a mnemonic for that one, try "DARN! STATS for my game were all lost on my Nintendo 'DS' because the power went out!".
3. Use pictures. Many people with ridiculously good memories use pictures to associate. Why does everybody know that A is for Apple? Our brains associate words with pictures automatically. Assign each element a picture -- anything that makes sense to you.
- Give the items in your house an element. Label them. Let's say your chair is hydrogen. Label it with a hydrogen bomb, picturing it blowing up. Give your TV a mouth -- it's oxygen and it's breathing. When you go to take your test, close your eyes and walk through your house, recalling all your associations.
4. Memorize in song. If Daniel Radcliffe can do it, so can you. You can either create your own or go on the internet and watch the gems that others have created. If you thought one version is a lot, you'll be pleasantly surprised.
- And just for people like you, there are karaoke versions, too, to help you check your progress. Isn't the internet amazing?
Origins and Patterns, etc.
1. Know the Latin names. All symbols can be regarded as English abbreviations, except for ten that have Latin names and abbreviations and one (Wolfram) whose name can be considered of German origin. Excluding Antimony and Tungsten, these are all important and frequently used elements.
- Knowing the Latin names as well enables you to decipher most Latin names of inorganic chemicals. In most Romance languages (French, Italian, Spanish, etc.), the present-day word is derived from the Latin.
2. Zero in on the differences. Element symbols tend to have two letters. This is the full list of element symbols that have only one letter:
- Except perhaps for V, W and Y, these are all important elements on this table. The symbols D and T (not in this list) are sometimes used for the heavier isotopes of Hydrogen (H). D2O is heavy water.
3. Know which ones come in threes. Elements may have three letters, though. You are probably not required to learn these. These are all highly radioactive, newly discovered (created) elements that are likely to get new names when the discoveries are confirmed. Professional chemists often don't use these names either; they call it "Element 113", for instance. Just for the heck of it, here is the full list:
4. Spot the unique ones. The last elements that got their names are Flerovium and Livermorium, 114 and 116, whose names were changed from Ununquadium and Ununhexium respectively.
- Some websites offer quizzes on the periodic table. If you don't have a friend nearby to help, it's a good alternative.
- The noble gases in their correct downward order are important because of their electron configuration.
- Test yourself with learning which elements are metals, non metals and the groups the elements are in the what a set of elements are known as, e.g. the noble gases and alkaline metals.
- Repeat the elements in your head, over and over, wherever you are.
- You probably won't be asked about the newer, man-made elements. These are newly discovered, man-made, radioactive and possibly dangerous elements. Elements beyond 112, except 114 and 116, have not even been named, and only exist briefly after their creation.
- Actinoids = Three Planets: Uranus, Neptune, and Pluto. Amy Cured Berkeley, California. Einstein and Fermi Made Noble Laws.
- Lanthanoids = Ladies Can't Put Nickels Properly in Slot-machines. Every Girl Tries Daily, However, Every Time You Look.
- Make your own periodic table song. Most of the periodic table songs end at 10. You can find your very own catchy beat and make your own periodic table song that surpasses 10.
- Be careful not to mix up the elements with the wrong symbols! You have to know them together.
- Remember that the first letter of a symbol is a capital letter and the letter/letters after the capital letter are lowercase.
| http://www.wikihow.com/Memorise-the-Periodic-Table |
Electricity is a general term that refers to the presence and flow of electric charge. For example, electricity is present in lightning, electrical outlets, and static electricity. Recognized since ancient Greece, electricity wasn't harnessed by engineers until the late 19th century, when it began to be used to provide power to homes and businesses. Electricity is used today to power anything from light bulbs to computers to cars. Electricity is most commonly generated by electromagnetic induction, by moving a loop of wire or a disc of copper between the poles of a magnet. Electricity can be generated from burning fuels such as natural gas, oil, and coal, from nuclear fission reactions, from windmills or hydroelectric plants, or from solar panels.
| http://www.chegg.com/homework-help/definitions/electricity-2?cp=CHEGGFREESHIP |
A map is oriented when it is made to correspond to the ground it represents. Remember, on a topographic map, north is the top of the map.
There are four ways to orient a map:
I. BY COMPASS
With a protractor, draw a magnetic north line anywhere on your map. The declination diagram in the margin of the map will give you the direction and size of the angle between grid north and magnetic north.
Do not use the margin diagram itself as the angles are often exaggerated by the cartographer so that the numerical values for the angle can be inserted.
Place the compass on the magnetic north line and turn the map and compass together slowly until the needle points to magnetic north on the map.
The adjacent Diagram I shows a compass, oriented to north, placed on top of a line drawn on the map pointing to magnetic north.
II. BY DISTANT OBJECTS
If you know your position on the map and can identify the position of some distant object, turn the map so that it corresponds with the ground.
As shown in Diagram II below, the map reader uses a church, identified by eye and on the map, to orient the map to the ground.
III. BY WATCH AND SUN - (in the Northern Hemisphere)
If Daylight Saving Time is in effect (in summer), first set your watch back to Standard Time. Place the watch flat with the hour hand pointing toward the sun. True South is midway between the hour hand and XII. True North is directly opposite. Note: this method is very approximate.
Diagram III below shows a watch with the hour hand pointing to "3" and pointing to the sun. South is therefore determined to be midway between "12" and "3".
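The "midway between the hour hand and XII" rule is simple enough to compute. The Python sketch below assumes the hour hand is pointed at the sun and returns the angle from the 12 mark to true south; like the method itself, it is only approximate and applies to the Northern Hemisphere on standard time:

```python
def south_offset_deg(hour, minute=0):
    """Angle (degrees) from the 12 mark to true south, measured toward
    the hour hand, when the hour hand is pointed at the sun."""
    hour_hand = ((hour % 12) + minute / 60.0) * 30.0  # hour hand moves 30 deg/hour
    return hour_hand / 2.0

print(south_offset_deg(3))  # 45.0 -- midway between "12" and "3", matching Diagram III
print(south_offset_deg(6))  # 90.0 -- south lies at the "3" mark
```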
IV. BY THE STARS
In latitudes below 60° North, the bearing of Polaris is never more than 2° from True North.
Diagram IV below shows how to locate Polaris using the two stars that form the front of the "dipper" of the Big Dipper. Polaris can be found at the end of a line that joins these two stars that is extended beyond the open end of the dipper at a distance 5 times the distance between these two stars.
| http://www.nrcan.gc.ca/earth-sciences/geography/topographic-information/maps/9797 |
(adj.) Refers to the transmission of data in just one direction at a time. For example, a walkie-talkie is a half-duplex device because only one party can talk at a time. In contrast, a telephone is a full-duplex device because both parties can talk simultaneously. Duplex modes often are used in reference to network data transmissions.
Some modems contain a switch that lets you select between half-duplex and full-duplex modes. The correct choice depends on which program you are using to transmit data through the modem. In half-duplex mode, each character transmitted is immediately displayed on your screen. (For this reason, it is sometimes called local echo -- characters are echoed by the local device). In full-duplex mode, transmitted data is not displayed on your monitor until it has been received and returned (remotely echoed) by the other device. If you are running a communications program and every character appears twice, it probably means that your modem is in half-duplex mode when it should be in full-duplex mode, and every character is being both locally and remotely echoed.
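Conceptually, a half-duplex link behaves like a channel guarded by a single "in use" flag. The following Python sketch (a toy model, not a real modem or serial API) captures the one-talker-at-a-time rule:

```python
class HalfDuplexChannel:
    """Toy model of a half-duplex link: one transmitter at a time."""
    def __init__(self):
        self.busy_by = None  # name of the party currently transmitting

    def start_transmit(self, who):
        """Try to seize the channel; fails while the other party holds it."""
        if self.busy_by is not None and self.busy_by != who:
            return False
        self.busy_by = who
        return True

    def stop_transmit(self, who):
        """Release the channel so the other party may talk."""
        if self.busy_by == who:
            self.busy_by = None

ch = HalfDuplexChannel()
print(ch.start_transmit("A"))  # True  -- A gets the channel
print(ch.start_transmit("B"))  # False -- B must wait: half-duplex
ch.stop_transmit("A")
print(ch.start_transmit("B"))  # True  -- now B can talk
```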
| http://www.webopedia.com/TERM/H/half_duplex.html |
In this 1984 historical photo from the U.S. space agency, the X-29 Flight Research Aircraft features one of the most unusual designs in aviation history. Demonstrating forward-swept-wing technology, this aircraft investigated numerous advanced aviation concepts and technologies.
The fighter-size X-29 explored the use of advanced composites in aircraft construction, variable camber wing surfaces, a unique forward-swept wing with a thin supercritical airfoil, and strake flaps. The X-29 also demonstrated three specific aerodynamic effects: canard effects, active controls, and aeroelastic tailoring. Canard effects use canards (small wings) as an additional control surface to manipulate air flow. Active controls enable an airplane to pull air across the plane in specific directions rather than passively letting the air flow over it. Aeroelastic tailoring allows parts of an aircraft to flex slightly when air hits it in a certain way, allowing for maximum flexibility of air flow.
Although the X-29 was one of the most unstable of the X-series aircraft in its maneuvering capabilities, it was controlled by a computerized fly-by-wire flight control system that overcame the instability, pushing the limits of computer control further than any other aircraft. The first flight was December 14, 1984.
Calculating variance allows you to measure how far a set of numbers is spread out. Variance is one of the descriptors of a probability distribution, and it describes how far numbers lie from the mean. Variance is often used in conjunction with standard deviation, which is the square root of the variance. If you want to know how to calculate the variance of a set of data points, just follow these steps.
Help Calculating Variance
1. Write down the formula for calculating variance. The formula for measuring an unbiased estimate of the population variance from a fixed sample of n observations is the following: s² = Σ(xᵢ − x̅)² / (n − 1). The formula for calculating the variance in an entire population is the same as this one except the denominator is n, not n − 1, but it should not be used any time you are working with a finite sample of observations. Here's what the parts of the formula for calculating variance mean:
- s² = Variance
- Σ = Summation, which means the sum of every term in the equation after the summation sign.
- xᵢ = Sample observation. This represents every term in the set.
- x̅ = The mean. This represents the average of all the numbers in the set.
- n = The sample size. You can think of this as the number of terms in the set.
2. Calculate the sum of the terms. First, create a chart that has a column for observations (terms), the mean (x̅), the mean subtracted from the terms (xᵢ − x̅) and then the square of these terms [(xᵢ − x̅)²]. After you've made the chart and placed all of the terms in the first column, simply add up all of the numbers in the set. Let's say you're working with the following numbers: 17, 15, 23, 7, 9, 13. Just add them up: 17 + 15 + 23 + 7 + 9 + 13 = 84.
3. Calculate the mean of the terms. To find the mean of any set of terms, simply add up the terms and divide the result by the number of terms. In this case, you already know that the sum of the terms is 84. Since there are 6 terms, just divide 84 by 6 to find the mean. 84/6 = 14. Write "14" all the way down the column for the mean.
4. Subtract the mean from each term. To fill the third column, simply take each term from the sample observations and subtract it from 14, the sample mean. You can check your work by adding up all of the results and confirming that they add up to zero. Here's how to subtract each sample observation from the average:
- 17 - 14 = 3
- 15 - 14 = 1
- 23 - 14 = 9
- 7 - 14 = -7
- 9 - 14 = -5
- 13 - 14 = -1
5. Square each result. Now that you've subtracted the average from each sample observation, simply square each result and write the answer in the fourth column. Remember that all of your results will be positive. Here's how to do it:
- 3² = 9
- 1² = 1
- 9² = 81
- (-7)² = 49
- (-5)² = 25
- (-1)² = 1
6. Calculate the sum of the squared terms. Now simply add up all of the new terms. 9 + 1 + 81 + 49 + 25 + 1 = 166
7. Substitute the values into the original equation. Just plug the values into the original equation, remembering that "n" represents the number of data points.
- s² = 166/(6 − 1)
8. Solve. Simply divide 166 by 5. The result is 33.2. If you'd like to find the standard deviation, simply find the square root of 33.2: √33.2 ≈ 5.76. Now you can interpret this data in a larger context. Usually, the variances of two sets of data are compared, and the lower number indicates less variation within that data set.
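The eight steps above can be condensed into a short, self-contained script (the function name sample_variance is illustrative, not part of any required API):

```python
def sample_variance(data):
    """Unbiased sample variance: s^2 = sum((x - mean)^2) / (n - 1)."""
    n = len(data)
    mean = sum(data) / n                                   # step 3: the mean
    squared_deviations = [(x - mean) ** 2 for x in data]   # steps 4-5
    return sum(squared_deviations) / (n - 1)               # steps 6-8

data = [17, 15, 23, 7, 9, 13]
print(sample_variance(data))          # 33.2
print(sample_variance(data) ** 0.5)   # about 5.76, the standard deviation
```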
- Since it is difficult to interpret the variance, this value is usually only calculated as a start in calculating the standard deviation.
A rainbow-like feature known as a ‘glory’ has been seen by ESA’s Venus Express orbiter in the atmosphere of our nearest neighbour – the first time one has been fully imaged on another planet.
Rainbows and glories occur when sunlight shines on cloud droplets – water particles in the case of Earth. While rainbows arch across wide swathes of the sky, glories are typically much smaller and comprise a series of coloured concentric rings centred on a bright core.
Glories are only seen when the observer is situated directly between the Sun and the cloud particles that are reflecting sunlight. On Earth, they are often seen from aeroplanes, surrounding the shadow of the aircraft on the clouds below, or around the shadow of climbers atop misty mountain peaks.
A glory requires two characteristics: the cloud particles are spherical, and therefore most likely liquid droplets, and they are all of a similar size.
The atmosphere of Venus is thought to contain droplets rich in sulphuric acid. By imaging the clouds with the Sun directly behind the Venus Express spacecraft, scientists hoped to spot a glory in order to determine important characteristics of the cloud droplets.
They were successful. The glory in the images here was seen at the Venus cloud tops, 70 km above the planet’s surface, on 24 July 2011. It is 1200 km wide as seen from the spacecraft, 6000 km away.
From these observations, the cloud particles are estimated to be 1.2 micrometres across, roughly a fiftieth of the width of a human hair.
The fact that the glory is 1200 km wide means that the particles at the cloud tops are uniform on this scale at least.
The variation in brightness across the rings of the observed glory is different from that expected from clouds of only sulphuric acid mixed with water, suggesting that other chemistry may be at play.
One idea is that the cause is the “UV-absorber”, an unknown atmospheric component responsible for mysterious dark markings seen in the cloud tops of Venus at ultraviolet wavelengths. More investigation is needed to draw a firm conclusion.
Source: European Space Agency (ESA)
Did you know that you can use an equation to solve a problem? Creating a model for a problem may also include methods such as drawing a diagram or picture or making a table or chart.
Take a look at this dilemma.
The triangles below were constructed using toothpicks. Determine the number of toothpicks needed to construct twenty triangles.
Do you know how to figure this out? Pay attention to this Concept. Then you will know how to solve this dilemma.
Sometimes if you think of a problem in terms of words and parts it will be easier to write an equation and solve it. Writing a verbal model is similar to making a plan for solving a problem. When you write a verbal model, you are paraphrasing the information stated in the problem. After writing a verbal model, insert the values from the problem to write an equation. Then, use mental math or an inverse operation to solve it.
Take a look at this situation.
Monica purchased a pair of tennis shoes on sale for $65.99. The shoes were originally $99.00. Use a verbal model to write and solve an equation to determine the amount of money Monica saved by purchasing the shoes on sale
First write a verbal model to represent the problem.
Let s represent the amount saved: s + 65.99 = 99.00.

Solution: Recall that to solve for s, complete the inverse operation. Since addition is used in the equation, use subtraction to solve.

It makes sense to subtract 65.99 from 99.00: s = 99.00 − 65.99 = 33.01.

Monica saved $33.01 by purchasing the shoes on sale.
Write an equation for each situation and solve it.
Mary had $12.00 and she spent some amount. She has $4.50 left over. How much did she spend?
John spent twice as much as Mary did. How much did he spend?
A number and sixteen is equal to forty-five.
Now let's go back to the dilemma from the beginning of the Concept.
As you can see, three toothpicks were needed to construct one triangle. Two more were needed to construct the second triangle. Therefore, five toothpicks were used to make two triangles. Continue to make more triangles along the row. Each time you construct a new triangle, record the number of toothpicks used on a chart.
|Triangle #:||Toothpick #:|
|1||3|
|2||5|
|3||7|
|4||9|
Looking at the table, you can identify a pattern. You can see that two toothpicks are needed each time a new triangle is constructed. You can write a verbal model to express this amount.
Total Number of Toothpicks Needed = Two Times the Number of Triangles + One Toothpick
Let n = the number of triangles.

Total Number of Toothpicks Needed = 2n + 1

To determine the number of toothpicks needed to construct twenty triangles, substitute twenty for the variable: 2(20) + 1 = 41.
41 toothpicks are needed to construct twenty triangles.
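The verbal model above (two toothpicks per new triangle, plus one) can be checked quickly in code; the function name is illustrative:

```python
def toothpicks(n):
    """Toothpicks for a row of n triangles: two per added triangle, plus one."""
    return 2 * n + 1

print(toothpicks(1))    # 3
print(toothpicks(2))    # 5
print(toothpicks(20))   # 41
```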
- Equation
- a group of numbers, operations and variables where the quantity on one side of the equal sign is the same as the quantity on the other side of the equal sign.
- Inverse Operation
- the opposite operation. Equations can often be solved by using an inverse operation.
- Verbal Model
- using words to decipher the mathematical information in a problem. An equation can often be written from a verbal model.
Here is one for you to try on your own.
The cost to run a thirty second commercial on prime time television is seven hundred fifty-thousand dollars. Use a verbal model to write and solve an equation to determine the cost per second.
Let c represent the unknown cost per second: 30c = 750,000.

Solution: To solve, divide 750,000 by 30: c = 750,000 ÷ 30 = 25,000.
Now remember that we were talking about money in this problem. So our answer needs to be written as a money amount.
The answer is that it costs $25,000 per second for a thirty-second commercial.
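Both worked examples in this Concept are solved with inverse operations, which a few lines of arithmetic can verify (a sketch, with descriptive names standing in for the variables):

```python
# Shoes: amount saved + 65.99 = 99.00  ->  undo the addition by subtracting.
amount_saved = 99.00 - 65.99
print(round(amount_saved, 2))   # 33.01

# Commercial: 30 seconds x (cost per second) = 750,000  ->  undo by dividing.
cost_per_second = 750_000 / 30
print(cost_per_second)          # 25000.0
```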
Directions: Write an equation for each situation and then solve for the variable. Each problem will have two answers to it: the equation and the solution.
1. An unknown number and three is equal to twelve.
2. John had a pile of golf balls. He lost nine on the course. If he returned home with fourteen golf balls, how many did he start with?
3. Some number and six is equal to thirty.
4. Jessie owes her brother some money. She earned nine dollars and paid off some of her debt. If she still owes him five dollars, how much did she owe to begin with?
5. A farmer has chickens. Six of them went missing during a snowstorm. If there are twelve chickens left, how many did he begin with before the storm?
6. Gasoline costs four dollars per gallon. Kerry put many gallons in his car over a long car trip. If he spent a total of $140.00 on gasoline, how many gallons did he need for the trip?
7. Twenty-seven times a number is 162. What is the number?
8. Marsha divided cookies into groups of 12. If she had 6 dozen cookies when she was done, how many cookies did she start with?
9. The coach divided the students into five teams. There were fourteen students on each team. How many students did the coach begin with?
10. A number plus nineteen is equal to forty.
The change in albedo of arid lands is an indicator of changes in their condition and quality, including density of vegetative cover, erosion, deposition, surficial soil moisture, and man-made change. In general, darkening of an arid land surface indicates an increase in land quality while brightening indicates a decrease in quality, primarily owing to changes in vegetation. Landsat multiband images taken on different dates can be converted to black-and-white albedo images. Subtraction of one image from another, pixel by pixel, results in an albedo change map that can be density sliced to show areas that have brightened or darkened by selected percentages. These maps are then checked in the field to determine the reasons for the changes and to evaluate the changes in land condition and quality. The albedo change mapping technique has been successfully used in the arid lands of western Utah and northern Arizona and has recently been used for detection of coal strip mining activities in northern Alabama. © 1983.
Additional publication details
Space platform albedo measurements as indicators of change in arid lands
In computer programming, a null-terminated string is a character string stored as an array containing the characters and terminated with a null character ('\0', called NUL in ASCII). Alternative names are C string, which refers to the C programming language, and ASCIIZ (note that C strings do not imply the use of ASCII).
The length of a C string is found by searching for the (first) NUL byte. This can be slow as it takes O(n) (linear time) with respect to the string length. It also means that a NUL cannot be inside the string, as the only NUL is the one marking the end.
Null-terminated strings were produced by the .ASCIZ directive of the PDP-11 assembly languages and the ASCIZ directive of the MACRO-10 macro assembly language for the PDP-10. These predate the development of the C programming language, but other forms of strings were often used.
At the time C (and the languages that it was derived from) was developed, memory was extremely limited, so using only one byte of overhead to store the length of a string was attractive. The only popular alternative at that time, usually called a "Pascal string" (though also used by early versions of BASIC), used a leading byte to store the length of the string. This allows the string to contain NUL and made finding the length need only one memory access (O(1) (constant) time). However, C designer Dennis Ritchie chose to follow the convention of NUL-termination, already established in BCPL, to avoid the limitation on the length of a string caused by holding the count in an 8- or 9-bit slot, and partly because maintaining the count seemed, in his experience, less convenient than using a terminator.
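To make the trade-off concrete, here is a small sketch using byte strings in place of raw C memory (c_strlen is an illustrative name, not the C library function):

```python
def c_strlen(buf):
    """Length of a NUL-terminated byte string: scan to the first zero byte, O(n)."""
    n = 0
    while buf[n] != 0:
        n += 1
    return n

c_string = b"hello\x00"                # the characters plus a terminating NUL
pascal_string = bytes([5]) + b"hello"  # a leading length byte instead

print(c_strlen(c_string))           # 5, found by scanning
print(pascal_string[0])             # 5, read in a single O(1) access
print(c_strlen(b"foo\x00bar\x00"))  # 3: an embedded NUL ends the string early
```

The last line also shows why a NUL cannot appear inside a C string: scanning stops at the first zero byte.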
This had some influence on CPU instruction set design. Some CPUs in the 1970s and 1980s, such as the Zilog Z80 and the DEC VAX, had dedicated instructions for handling length-prefixed strings. However, as the NUL-terminated string gained traction, CPU designers began to take it into account, as seen for example in IBM's decision to add the "Logical String Assist" instructions to the ES/9000 520 in 1992.
- Determining the length of a string
- Copying one string to another
- Appending (concatenating) one string to another
- Finding the first (or last) occurrence of a character within a string
- Finding within a string the first occurrence of a character in (or not in) a given set
- Finding the first occurrence of a substring within a string
- Comparing two strings lexicographically
- Splitting a string into multiple substrings
- Formatting numeric or string values into a printable output string
- Parsing a printable string into numeric values
- Converting between single-byte and wide character string encodings
- Converting single-byte or wide character strings to and from multi-byte character strings
While simple to implement, this representation has been prone to errors and performance problems.
The NUL termination has historically created security problems. A NUL byte inserted into the middle of a string will truncate it unexpectedly. A common bug was to not allocate the additional space for the NUL, so it was written over adjacent memory. Another was to not write the NUL at all, which was often not detected during testing because a NUL was already there by chance from previous use of the same block of memory. Due to the expense of finding the length, many programs did not bother before copying a string to a fixed-size buffer, causing a buffer overflow if it was too long.
The inability to store a NUL requires that string data and binary data be kept distinct and handled by different functions (with the latter requiring the length of the data to also be supplied). This can lead to code redundancy and errors when the wrong function is used.
The speed problems with finding the length can usually be mitigated by combining it with another operation that is O(n) anyway, such as in strlcpy. However, this does not always result in an intuitive API.
Null-terminated strings require that the encoding does not use the zero code anywhere except as the terminator.
It is not possible to store every possible ASCII or UTF-8 string in a null-terminated string, as the encoding of the NUL character is a zero byte. However, it is common to store the subset of ASCII or UTF-8 -- every character except the NUL character -- in null-terminated strings. Some systems use "modified UTF-8", which encodes the NUL character as two non-zero bytes (0xC0, 0x80) and thus allows all possible strings to be stored. (This is not allowed by the UTF-8 standard, because it is a security risk: a 0xC0, 0x80 NUL might be seen as a string terminator by security validation but as a character when used.)
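A quick check (illustrative; Python's decoder is used here as a stand-in for any strict UTF-8 implementation) shows that standard UTF-8 rejects the overlong two-byte NUL while encoding NUL itself as a single zero byte:

```python
overlong_nul = b"\xc0\x80"   # the "modified UTF-8" two-byte encoding of U+0000

try:
    overlong_nul.decode("utf-8")
    accepted = True
except UnicodeDecodeError:
    accepted = False

print(accepted)                # False: strict UTF-8 rejects overlong forms
print("\x00".encode("utf-8"))  # b'\x00': standard UTF-8 uses one zero byte
```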
UTF-16 uses 2-byte integers, and since either byte may be zero, UTF-16 strings cannot be stored in a null-terminated byte string. However, some languages implement a string of 16-bit UTF-16 characters, terminated by a 16-bit NUL character. (Again, the NUL character, which encodes as a single zero code unit, is the only character that cannot be stored.)
Many attempts have been made to make C string handling less error prone. One strategy is to add safer and more useful functions such as strlcpy, while deprecating the use of unsafe functions such as gets. Another is to add an object-oriented wrapper around C strings so that only safe calls can be done.
On modern systems memory usage is less of a concern, so a multi-byte length is acceptable (if you have so many small strings that the space used by this length is a concern, you will have enough duplicates that a hash table will use even less memory). Most replacements for C strings use a 32-bit or larger length value. Examples include the C++ Standard Template Library std::string, the Qt QString, the MFC CString, and the C-based implementation CFString from Core Foundation as well as its Objective-C sibling NSString from Foundation, both by Apple. More complex structures may also be used to store strings, such as the rope.
Proficient readers ask themselves questions about a text. Asking and answering questions like "what's important here?" and "who's speaking now?" helps readers interact with the text and engage prior knowledge. They are also addressing Common Core State Standards related to key ideas and details and the integration of knowledge and ideas. Teach struggling readers how to engage in self-questioning to increase engagement and comprehension. Use hypertext and collaborative documents to support your students' experimentation with this approach.
Using a self-questioning strategy can encourage struggling learners to monitor their understanding of the text. Your clear explanations can highlight critical features of the self-questioning approach, especially when you integrate a range of technology tools as suggested below.
A broad, 4-kilometer-tall feature on the seafloor about 1500 kilometers east of Japan is the world’s largest volcano, a new analysis suggests. At its tallest point, Tamu Massif (at lower left and center in main image; oblique view in inset) lies more than 2 km below the ocean’s surface. Unlike most volcanic seamounts, which are steep and typically no more than a few tens of kilometers across, the gently sloping Tamu Massif covers 310,000 square kilometers—about the same as the British Isles, or the base of Mars’s Olympus Mons, the solar system’s largest known volcano. (Its base is shown in dark purple at lower right, for comparison.) The massif’s slopes are exceptionally shallow, often less than 1°, thanks to lava that flowed freely before hardening. Researchers think the Tamu Massif is a single volcano because rock samples (labeled dots) have similar chemistry, and seismic surveys show that broad layers of rock emanate from the center of the feature. Today, Tamu Massif sits far from the edge of the Pacific tectonic plate and is presumed dead, but 145 million years ago the caldera plumbed the intersection of three tectonic plates, the researchers note today in Nature Geoscience. They haven’t finished dating rock samples drilled from the peak, but it’s possible that the entire seamount could have been formed in a million years or less.
An amplifier, electronic amplifier, or (informally) amp is an electronic device that can increase the power of a signal. It does this by taking energy from a power supply and controlling the output to match the input signal shape but with a larger amplitude. In this sense, an amplifier modulates the output of the power supply to make the output signal stronger than the input signal. An amplifier is effectively the opposite of an attenuator: while an amplifier provides gain, an attenuator provides loss.
An amplifier can either be a separate piece of equipment or an electrical circuit within another device. The ability to amplify is fundamental to modern electronics, and amplifiers are widely used in almost all electronic equipment. The types of amplifiers can be categorized in different ways. One is by the frequency of the electronic signal being amplified; audio amplifiers amplify signals in the audio (sound) range of less than 20 kHz, RF amplifiers amplify frequencies in the radio frequency range between 20 kHz and 300 GHz. Another is which quantity, voltage or current is being amplified; amplifiers can be divided into voltage amplifiers, current amplifiers, transconductance amplifiers, and transresistance amplifiers. A further distinction is whether the output is a linear or nonlinear representation of the input. Amplifiers can also be categorized by their physical placement in the signal chain.
The first practical electronic device that could amplify was the Audion (triode) vacuum tube, invented in 1906 by Lee De Forest, which led to the first amplifiers. The terms "amplifier" and "amplification" (from the Latin amplificare, 'to enlarge or expand') were first used for this new capability around 1915 when triodes became widespread. For the next 50 years, vacuum tubes were the only devices that could amplify. All amplifiers used them until the 1960s, when transistors appeared. Most amplifiers today use transistors, though tube amplifiers are still produced.
Figures of merit
Amplifier quality is characterized by a list of specifications that include:
- Gain, the ratio between the magnitude of output and input signals
- Bandwidth, the width of the useful frequency range
- Efficiency, the ratio between the power of the output and total power consumption
- Linearity, the degree of proportionality between input and output
- Noise, a measure of undesired noise mixed into the output
- Output dynamic range, the ratio of the largest and the smallest useful output levels
- Slew rate, the maximum rate of change of the output
- Rise time, settling time, ringing and overshoot that characterize the step response
- Stability, the ability to avoid self-oscillation
Amplifiers are described according to their input and output properties. They exhibit the property of gain, or multiplication factor that relates the magnitude of the output signal to the input signal. The gain may be specified as the ratio of output voltage to input voltage (voltage gain), output power to input power (power gain), or some combination of current, voltage, and power. In many cases, with input and output in the same unit, gain is unitless (though often expressed in decibels (dB)).
The four basic types of amplifiers are as follows:
- Voltage amplifier – This is the most common type of amplifier. An input voltage is amplified to a larger output voltage. The amplifier's input impedance is high and the output impedance is low.
- Current amplifier – This amplifier changes an input current to a larger output current. The amplifier's input impedance is low and the output impedance is high.
- Transconductance amplifier – This amplifier responds to a changing input voltage by delivering a related changing output current.
- Transresistance amplifier – This amplifier responds to a changing input current by delivering a related changing output voltage. Other names for the device are transimpedance amplifier and current-to-voltage converter.
In practice, amplifier power gain depends on the source and load impedances, as well as the inherent voltage and current gain. A radio frequency (RF) amplifier design typically optimizes impedances for power transfer, while audio and instrumentation amplifier designs normally optimize input and output impedance for least loading and highest signal integrity. An amplifier that is said to have a gain of 20 dB might have a voltage gain of ten times and an available power gain of much more than 20 dB (power ratio of 100)—yet actually deliver a much lower power gain if, for example, the input is from a 600 ohm microphone and the output connects to a 47 kilohm input socket for a power amplifier.
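The 600 ohm microphone into a 47 kilohm input example can be worked through numerically. This is an illustrative sketch assuming purely resistive source and load:

```python
import math

def db_voltage(ratio):
    """Voltage (amplitude) ratio expressed in decibels."""
    return 20 * math.log10(ratio)

def db_power(ratio):
    """Power ratio expressed in decibels."""
    return 10 * math.log10(ratio)

v_in, v_out = 1.0, 10.0                  # a voltage gain of ten times
print(db_voltage(v_out / v_in))          # 20.0 dB

# Delivered power gain also depends on the impedances at each end: P = V^2 / R.
r_source, r_load = 600.0, 47_000.0       # 600 ohm microphone, 47 kilohm input
p_in = v_in ** 2 / r_source
p_out = v_out ** 2 / r_load
print(round(db_power(p_out / p_in), 2))  # about 1.06 dB, far below 20 dB
```

The 20 dB voltage gain thus corresponds to a much smaller delivered power gain under these (assumed) impedances, which is the point the paragraph makes.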
In most cases, an amplifier is linear. That is, it provides constant gain for any normal input level and output signal. If the gain is not linear (for example, when the signal is clipped), the output signal distorts. There are, however, cases where variable gain is useful. Certain signal processing applications use exponential gain amplifiers.
Many different electronic amplifier types exist that are specific to areas such as: radio and television transmitters and receivers, high-fidelity ("hi-fi") stereo equipment, microcomputers and other digital equipment, and guitar and other instrument amplifiers. Essential components include active devices, such as vacuum tubes or transistors. A brief introduction to the many types of electronic amplifiers follows.
The term power amplifier is a relative term with respect to the amount of power delivered to the load and/or provided by the power supply circuit. In general the power amplifier is the last 'amplifier' or actual circuit in a signal chain (the output stage) and is the amplifier stage that requires attention to power efficiency. Efficiency considerations lead to the various classes of power amplifier based on the biasing of the output transistors or tubes: see power amplifier classes.
Power amplifiers by application
- Audio power amplifiers, typically used to drive loudspeakers
- RF power amplifier—typical in transmitter final stages (see also: Linear amplifier)
- Servo motor controllers amplify a control voltage where linearity is not important
- Piezoelectric audio amplifier—includes a DC-to-DC converter to generate the high voltage output required to drive piezoelectric speakers
Power amplifier circuits
Power amplifier circuits include the following types:
- Vacuum tube/valve, hybrid or transistor power amplifiers
- Push-pull output or single-ended output stages
Vacuum-tube (valve) amplifiers
According to Symons, while semiconductor amplifiers have largely displaced valve amplifiers for low power applications, valve amplifiers are much more cost effective in high power applications such as "radar, countermeasures equipment, or communications equipment" (p. 56). Many microwave amplifiers are specially designed valves, such as the klystron, gyrotron, traveling wave tube, and crossed-field amplifier, and these microwave valves provide much greater single-device power output at microwave frequencies than solid-state devices (p. 59).
Valve/tube amplifiers also have the following uses in other areas:
- electric guitar amplification
- in Russian military aircraft, for their electromagnetic pulse (EMP) tolerance
- niche audio for their sound qualities (recording, and audiophile equipment)
Transistor amplifiers

The essential role of this active element is to magnify an input signal to yield a significantly larger output signal. The amount of magnification (the "forward gain") is determined by the external circuit design as well as the active device.
Many common active devices in transistor amplifiers are bipolar junction transistors (BJTs) and metal oxide semiconductor field-effect transistors (MOSFETs).
Applications are numerous, some common examples are audio amplifiers in a home stereo or PA system, RF high power generation for semiconductor equipment, to RF and Microwave applications such as radio transmitters.
Transistor-based amplifier can be realized using various configurations: for example with a bipolar junction transistor we can realize common base, common collector or common emitter amplifier; using a MOSFET we can realize common gate, common source or common drain amplifier. Each configuration has different characteristic (gain, impedance...).
Magnetic amplifiers

These are devices somewhat similar to a transformer where one winding is used to control the saturation of a magnetic core and hence alter the impedance of the other winding.
They have largely fallen out of use due to developments in semiconductor amplifiers but are still useful in HVDC control, and in nuclear power control circuitry, due to their not being affected by radioactivity.
Operational amplifiers (op-amps)
An operational amplifier is an amplifier circuit with very high open loop gain and differential inputs that employs external feedback to control its transfer function, or gain. Though the term today commonly applies to integrated circuits, the original operational amplifier design used valves, and later designs used discrete transistor circuits.
Fully differential amplifiers
A fully differential amplifier is a solid state integrated circuit amplifier that uses external feedback to control its transfer function or gain. It is similar to the operational amplifier, but also has differential output pins. These are usually constructed using BJTs or FETs.
These deal with video signals and have varying bandwidths depending on whether the video signal is for SDTV, EDTV, HDTV 720p or 1080i/p, etc. The specification of the bandwidth itself depends on what kind of filter is used—and at which point (−1 dB or −3 dB, for example) the bandwidth is measured. Certain requirements for step response and overshoot are necessary for an acceptable TV image.
Oscilloscope vertical amplifiers
These deal with video signals that drive an oscilloscope display tube, and can have bandwidths of about 500 MHz. The specifications on step response, rise time, overshoot, and aberrations can make designing these amplifiers difficult. One of the pioneers in high bandwidth vertical amplifiers was the Tektronix company.
These use transmission lines to temporally split the signal and amplify each portion separately to achieve higher bandwidth than possible from a single amplifier. The outputs of each stage are combined in the output transmission line. This type of amplifier was commonly used on oscilloscopes as the final vertical amplifier. The transmission lines were often housed inside the display tube glass envelope.
Switched mode amplifiers
These nonlinear amplifiers have much higher efficiencies than linear amps, and are used where the power saving justifies the extra complexity.
Negative resistance devices
Travelling wave tube amplifiers
Traveling wave tube amplifiers (TWTAs) are used for high power amplification at low microwave frequencies. They typically can amplify across a broad spectrum of frequencies; however, they are usually not as tunable as klystrons.
Klystrons are specialized linear-beam vacuum-devices, designed to provide high power, widely tunable amplification of millimetre and sub-millimetre waves. Klystrons are designed for large scale operations and despite having a narrower bandwidth than TWTAs, they have the advantage of coherently amplifying a reference signal so its output may be precisely controlled in amplitude, frequency and phase.
Musical instrument amplifiers
An audio power amplifier is usually used to amplify signals such as music or speech. In the mid-1960s, guitar amplifiers began to gain popularity because of their relatively low price (around $50) and because guitars were the most popular instruments of the time. Several factors are especially important in the selection of musical instrument amplifiers (such as guitar amplifiers) and other audio amplifiers (although the whole of the sound system – components from microphones to loudspeakers – affects these parameters):
- Frequency response – not just the frequency range but the requirement that the signal level varies so little across the audible frequency range that the human ear notices no variation. A typical specification for audio amplifiers may be 20 Hz to 20 kHz +/- 0.5 dB.
- Power output – the power level obtainable with little distortion, to obtain a sufficiently loud sound pressure level from the loudspeakers.
- Low distortion – all amplifiers and transducers distort to some extent. They cannot be perfectly linear, but aim to pass signals without affecting the harmonic content of the sound more than the human ear can tolerate. That tolerance of distortion, and indeed the possibility that some "warmth" or second harmonic distortion (Tube sound) improves the "musicality" of the sound, are subjects of great debate.
Before coming onto the music scene, amplifiers were heavily used in cinema. At the premiere of Noah's Ark in 1929, the film's director, Michael Curtiz, used the amplifier for a festival following the premiere.
Classification of amplifier stages and systems
Many alternative classifications address different aspects of amplifier designs, and they all express some particular perspective relating the design parameters to the objectives of the circuit. Amplifier design is always a compromise of numerous factors, such as cost, power consumption, real-world device imperfections, and a multitude of performance specifications. Below are several different approaches to classification:
Input and output variables
Electronic amplifiers use one variable presented as either a current or a voltage. Either current or voltage can be used as the input, and either as the output, leading to four types of amplifiers. In idealized form they are represented by each of the four types of dependent source used in linear analysis, as shown in the figure, namely:
|Input||Output||Dependent source||Amplifier type|
|I||I||Current controlled current source CCCS||Current amplifier|
|I||V||Current controlled voltage source CCVS||Transresistance amplifier|
|V||I||Voltage controlled current source VCCS||Transconductance amplifier|
|V||V||Voltage controlled voltage source VCVS||Voltage amplifier|
Each type of amplifier in its ideal form has an ideal input and output resistance that is the same as that of the corresponding dependent source:
|Amplifier type||Dependent source||Input impedance||Output impedance|
|Current amplifier||CCCS||0||∞|
|Transresistance amplifier||CCVS||0||0|
|Transconductance amplifier||VCCS||∞||∞|
|Voltage amplifier||VCVS||∞||0|
In practice the ideal impedances are only approximated. For any particular circuit, a small-signal analysis is often used to find the impedance actually achieved. A small-signal AC test current Ix is applied to the input or output node, all external sources are set to AC zero, and the corresponding alternating voltage Vx across the test current source determines the impedance seen at that node as R = Vx / Ix.
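The loading effect described above can be made concrete with a short numeric sketch (the gain and resistance values below are illustrative, not taken from any particular device): a non-ideal voltage amplifier's realized gain is its open-circuit gain reduced by a voltage divider at each interface.

```python
def loaded_gain(a_v, r_in, r_out, r_s, r_l):
    """Overall voltage gain of a non-ideal voltage amplifier with
    open-circuit gain a_v, input resistance r_in and output resistance
    r_out, driven from a source of resistance r_s into a load r_l.
    Each interface forms a voltage divider against the ideal case
    (r_in -> infinity, r_out -> 0)."""
    input_divider = r_in / (r_s + r_in)    # loss at the input interface
    output_divider = r_l / (r_out + r_l)   # loss at the output interface
    return a_v * input_divider * output_divider

# A gain-of-100 stage with 1 MOhm input and 1 Ohm output resistance,
# 50 Ohm source, 8 Ohm load: negligible input loss, ~11% output loss.
print(round(loaded_gain(100, 1e6, 1.0, 50, 8), 2))
```

The closer `r_in` and `r_out` approach the ideal values in the table above, the closer the realized gain comes to `a_v`.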
Amplifiers designed to attach to a transmission line at input and/or output, especially RF amplifiers, do not fit into this classification approach. Rather than dealing with voltage or current individually, they ideally couple to an input and/or output impedance matched to the transmission line impedance, that is, they match ratios of voltage to current. Many real RF amplifiers come close to this ideal. Although, for a given appropriate source and load impedance, RF amplifiers can be characterized as amplifying voltage or current, they are fundamentally amplifying power.
One set of classifications for amplifiers is based on which device terminal is common to both the input and the output circuit. In the case of bipolar junction transistors, the three classes are common emitter, common base, and common collector. For field-effect transistors, the corresponding configurations are common source, common gate, and common drain; for triode vacuum devices, common cathode, common grid, and common plate. The common emitter (or common source, common cathode, etc.) is most often configured to provide amplification of a voltage applied between base and emitter, and the output signal taken between collector and emitter is inverted relative to the input. The common collector arrangement applies the input voltage between base and collector, and takes the output voltage between emitter and collector. This causes negative feedback, and the output voltage tends to 'follow' the input voltage. (This arrangement is also used because the input presents a high impedance and does not load the signal source, though the voltage amplification is less than 1.) The common-collector circuit is therefore better known as an emitter follower, source follower, or cathode follower.
Unilateral or bilateral
An amplifier whose output exhibits no feedback to its input side is described as 'unilateral'. The input impedance of a unilateral amplifier is independent of load, and output impedance is independent of signal source impedance.
An amplifier that uses feedback to connect part of the output back to the input is a bilateral amplifier. Bilateral amplifier input impedance depends on the load, and output impedance on the signal source impedance. All amplifiers are bilateral to some degree; however they may often be modeled as unilateral under operating conditions where feedback is small enough to neglect for most purposes, simplifying analysis (see the common base article for an example).
An amplifier design often deliberately applies negative feedback to tailor amplifier behavior. Some feedback, positive or negative, is unavoidable and often undesirable—introduced, for example, by parasitic elements, such as inherent capacitance between input and output of devices such as transistors, and capacitive coupling of external wiring. Excessive frequency-dependent positive feedback can turn an amplifier into an oscillator.
Linear unilateral and bilateral amplifiers can be represented as two-port networks.
Inverting or non-inverting
Another way to classify amplifiers is by the phase relationship of the input signal to the output signal. An 'inverting' amplifier produces an output 180 degrees out of phase with the input signal (that is, a polarity inversion or mirror image of the input as seen on an oscilloscope). A 'non-inverting' amplifier maintains the phase of the input signal waveforms. An emitter follower is a type of non-inverting amplifier, indicating that the signal at the emitter of a transistor is following (that is, matching with unity gain but perhaps an offset) the input signal. A voltage follower is also a non-inverting amplifier with unity gain.
This description can apply to a single stage of an amplifier, or to a complete amplifier system.
Other amplifiers may be classified by their function or output characteristics. These functional descriptions usually apply to complete amplifier systems or sub-systems and rarely to individual stages.
- A servo amplifier indicates an integrated feedback loop to actively control the output at some desired level. A DC servo indicates use at frequencies down to DC levels, where the rapid fluctuations of an audio or RF signal do not occur. These are often used in mechanical actuators, or devices such as DC motors that must maintain a constant speed or torque. An AC servo amp can do this for some ac motors.
- A linear amplifier responds to different frequency components independently, and does not generate harmonic distortion or Intermodulation distortion. No amplifier can provide perfect linearity (even the most linear amplifier has some nonlinearities, since the amplifying devices—transistors or vacuum tubes—follow nonlinear power laws such as square-laws and rely on circuitry techniques to reduce those effects).
- A nonlinear amplifier generates significant distortion and so changes the harmonic content; there are situations where this is useful. Amplifier circuits intentionally providing a non-linear transfer function include:
- a device like a Silicon Controlled Rectifier or a transistor used as a switch may be employed to turn a load such as a lamp either fully ON or fully OFF based on a threshold in a continuously variable input.
- a non-linear amplifier in an analog computer or true RMS converter for example can provide a special transfer function, such as logarithmic or square-law.
- a Class C RF amplifier may be chosen because it can be very efficient—but is non-linear. Following such an amplifier with a "tank" tuned circuit can reduce unwanted harmonics (distortion) sufficiently to make it useful in transmitters, or some desired harmonic may be selected by setting the resonant frequency of the tuned circuit to a higher frequency rather than fundamental frequency in frequency multiplier circuits.
- Automatic gain control circuits require an amplifier's gain be controlled by the time-averaged amplitude so that the output amplitude varies little when weak stations are being received. The non-linearities are assumed arranged so the relatively small signal amplitude suffers from little distortion (cross-channel interference or intermodulation) yet is still modulated by the relatively large gain-control DC voltage.
- AM detector circuits that use amplification such as Anode-bend detectors, Precision rectifiers and Infinite impedance detectors (so excluding unamplified detectors such as Cat's-whisker detectors), as well as peak detector circuits, rely on changes in amplification based on the signal's instantaneous amplitude to derive a direct current from an alternating current input.
- Operational amplifier comparator and detector circuits.
- A wideband amplifier has a precise amplification factor over a wide frequency range, and is often used to boost signals for relay in communications systems. A narrowband amp amplifies a specific narrow range of frequencies, to the exclusion of other frequencies.
- An RF amplifier amplifies signals in the radio frequency range of the electromagnetic spectrum, and is often used to increase the sensitivity of a receiver or the output power of a transmitter.
- An audio amplifier amplifies audio frequencies. This category subdivides into small signal amplification, and power amps that are optimised to driving speakers, sometimes with multiple amps grouped together as separate or bridgeable channels to accommodate different audio reproduction requirements. Frequently used terms within audio amplifiers include:
- Preamplifier (preamp), which may include a phono preamp with RIAA equalization, or tape head preamps with CCIR equalisation filters. They may include filters or tone control circuitry.
- Power amplifier (normally drives loudspeakers), headphone amplifiers, and public address amplifiers.
- Stereo amplifiers imply two channels of output (left and right), though the term simply means "solid" sound (referring to three-dimensional)—so quadraphonic stereo was used for amplifiers with four channels. 5.1 and 7.1 systems refer to Home theatre systems with 5 or 7 normal spacial channels, plus a subwoofer channel.
- Buffer amplifiers, which may include emitter followers, provide a high impedance input for a device (perhaps another amplifier, or perhaps an energy-hungry load such as lights) that would otherwise draw too much current from the source. Line drivers are a type of buffer that feeds long or interference-prone interconnect cables, possibly with differential outputs through twisted pair cables.
- A special type of amplifier - originally used in analog computers - is widely used in measuring instruments for signal processing, and many other uses. These are called operational amplifiers or op-amps. The "operational" name is because this type of amplifier can be used in circuits that perform mathematical algorithmic functions, or "operations" on input signals to obtain specific types of output signals. Modern op-amps are usually provided as integrated circuits, rather than constructed from discrete components. A typical modern op-amp has differential inputs (one "inverting", one "non-inverting") and one output. An idealised op-amp has the following characteristics:
- Infinite input impedance (so it does not load the circuitry at its input)
- Zero output impedance
- Infinite gain
- Zero propagation delay
The performance of an op-amp with these characteristics is entirely defined by the (usually passive) components that form a negative feedback loop around it. The amplifier itself does not affect the output. All real-world op-amps fall short of the idealised specification above—but some modern components have remarkable performance and come close in some respects.
Interstage coupling method
Amplifiers are sometimes classified by the coupling method of the signal at the input, output, or between stages. Different types of these include:
- Resistive-capacitive (RC) coupled amplifier, using a network of resistors and capacitors
- By design these amplifiers cannot amplify DC signals, as the capacitors block the DC component of the input signal. RC-coupled amplifiers were used very often in circuits with vacuum tubes or discrete transistors. In the era of the integrated circuit, a few more transistors on a chip are much cheaper and smaller than a capacitor.
- Inductive-capacitive (LC) coupled amplifier, using a network of inductors and capacitors
- This kind of amplifier is most often used in selective radio-frequency circuits.
- Transformer coupled amplifier, using a transformer to match impedances or to decouple parts of the circuits
- Quite often LC-coupled and transformer-coupled amplifiers cannot be distinguished, as a transformer is a kind of inductor.
- Direct coupled amplifier, using no impedance and bias matching components
- This class of amplifier was very uncommon in the vacuum tube days, when the anode (output) voltage was greater than several hundred volts and the grid (input) voltage a few volts negative. They were therefore used only if the gain was specified down to DC (e.g., in an oscilloscope). In the context of modern electronics, developers are encouraged to use directly coupled amplifiers whenever possible.
Depending on the frequency range and other properties amplifiers are designed according to different principles.
- Frequency ranges down to DC are only used when this property is needed. DC amplification leads to specific complications that are avoided if possible; DC-blocking capacitors can be added to remove DC and sub-sonic frequencies from audio amplifiers.
- Depending on the frequency range specified different design principles must be used. Up to the MHz range only "discrete" properties need be considered; e.g., a terminal has an input impedance.
- As soon as any connection within the circuit gets longer than perhaps 1% of the wavelength of the highest specified frequency (e.g., at 100 MHz the wavelength is 3 m, so the critical connection length is approx. 3 cm) design properties radically change. For example, a specified length and width of a PCB trace can be used as a selective or impedance-matching entity.
- Above a few hundred MHz, it gets difficult to use discrete elements, especially inductors. In most cases, PCB traces of very closely defined shapes are used instead.
The frequency range handled by an amplifier might be specified in terms of bandwidth (normally implying a response that is 3 dB down when the frequency reaches the specified bandwidth), or by specifying a frequency response that is within a certain number of decibels between a lower and an upper frequency (e.g. "20 Hz to 20 kHz plus or minus 1 dB").
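A worked example of the −3 dB convention, assuming a simple first-order roll-off for illustration (real amplifiers may roll off faster or have more complex responses):

```python
import math

def gain_db(f, f_c):
    """Magnitude response, in dB relative to the passband, of a
    first-order low-pass roll-off with corner frequency f_c."""
    return 20 * math.log10(1.0 / math.sqrt(1.0 + (f / f_c) ** 2))

# At the corner frequency the response is 3 dB down -- the usual
# definition of the bandwidth edge.
print(round(gain_db(20_000, 20_000), 2))   # -> -3.01
```

Well inside the passband the deviation is tiny (a few hundredths of a dB at one tenth of the corner frequency), which is what tight specifications like "±1 dB" are constraining.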
Power amplifier classes
Power amplifier circuits (output stages) are classified as A, B, AB and C for analog designs—and class D and E for switching designs based on the proportion of each input cycle (conduction angle), during which an amplifying device passes current. The image of the conduction angle derives from amplifying a sinusoidal signal. If the device is always on, the conducting angle is 360°. If it is on for only half of each cycle, the angle is 180°. The angle of flow is closely related to the amplifier power efficiency. The various classes are introduced below, followed by a more detailed discussion under their individual headings further down.
Conduction angle classes
- Class A
- 100% of the input signal is used (conduction angle Θ = 360°). The active element remains conducting all of the time.
- Class B
- 50% of the input signal is used (Θ = 180°); the active element carries current half of each cycle, and is turned off for the other half.
- Class AB
- Class AB is intermediate between class A and class B: the two active elements each conduct more than half of the time.
- Class C
- Less than 50% of the input signal is used (conduction angle Θ < 180°).
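The conduction-angle distinction can be illustrated with a toy numerical model. The "device" below is simply treated as on whenever a biased sine drive is positive — a simplification for illustration, not a real biasing circuit:

```python
import math

def conduction_fraction(bias, n=100_000):
    """Fraction of a full cycle during which an idealized device conducts,
    modelling the device as 'on' whenever sin(t) + bias > 0.  bias is in
    units of the drive amplitude (an illustrative model only)."""
    on = sum(1 for k in range(n)
             if math.sin(2 * math.pi * k / n) + bias > 0)
    return on / n

# bias >= 1: class A (360 deg); bias = 0: class B (180 deg);
# 0 < bias < 1: class AB; bias < 0: class C (< 180 deg).
for bias in (1.5, 0.3, 0.0, -0.5):
    print(round(360 * conduction_fraction(bias)))
```

Sweeping the bias reproduces the class boundaries listed above: the conduction angle shrinks continuously from 360° through 180° to below 180° as the bias moves from above the drive amplitude down through zero into cutoff.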
A "Class D" amplifier uses some form of pulse-width modulation to control the output devices; the conduction angle of each device is no longer related directly to the input signal but instead varies in pulse width. These are sometimes called "digital" amplifiers because the output device is switched fully on or off, and not carrying current proportional to the signal amplitude.
- Additional classes
- There are several other amplifier classes, although they are mainly variations of the previous classes. For example, class-G and class-H amplifiers are marked by variation of the supply rails (in discrete steps or in a continuous fashion, respectively) following the input signal. Wasted heat on the output devices can be reduced as excess voltage is kept to a minimum. The amplifier that is fed with these rails itself can be of any class. These kinds of amplifiers are more complex, and are mainly used for specialized applications, such as very high-power units. Also, class-E and class-F amplifiers are commonly described in literature for radio-frequency applications where efficiency of the traditional classes is important, yet several aspects deviate substantially from their ideal values. These classes use harmonic tuning of their output networks to achieve higher efficiency and can be considered a subset of class C due to their conduction-angle characteristics.
Amplifying devices operating in class A conduct over the entire range of the input cycle. A class-A amplifier is distinguished by the output stage devices being biased for class A operation. Subclass A2 is sometimes used to refer to vacuum-tube class-A stages that drive the grid slightly positive on signal peaks for slightly more power than normal class A (A1; where the grid is always negative). This, however, incurs higher signal distortion.
Advantages of class-A amplifiers
- Class-A designs are simpler than other classes; for example, class-AB and class-B designs require two connected devices in the circuit (push–pull output), each to handle one half of the waveform, whereas class A can use a single device (single-ended).
- The amplifying element is biased so the device is always conducting, the quiescent (small-signal) collector current (for transistors; drain current for FETs or anode/plate current for vacuum tubes) is close to the most linear portion of its transconductance curve.
- Because the device is never 'off' there is no "turn on" time, no problems with charge storage, and generally better high frequency performance and feedback loop stability (and usually fewer high-order harmonics).
- The point where the device comes closest to being 'off' is not at 'zero signal', so the problems of crossover distortion associated with class-AB and -B designs are avoided.
- Best for low signal levels of radio receivers due to low distortion.
Disadvantage of class-A amplifiers
- Class-A amplifiers are inefficient. A theoretical efficiency of 50% is obtainable in a push-pull topology, and only 25% in a single-ended topology, unless deliberate use of nonlinearities is made (such as in square-law output stages). In a power amplifier, this not only wastes power and limits operation with batteries, but increases operating costs and requires higher-rated output devices. Inefficiency comes from the standing current that must be roughly half the maximum output current, and a large part of the power supply voltage is present across the output device at low signal levels. If high output power is needed from a class-A circuit, the power supply and accompanying heat becomes significant. For every watt delivered to the load, the amplifier itself, at best, uses an extra watt. For high power amplifiers this means very large and expensive power supplies and heat sinks.
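The 25% figure for a single-ended, resistively loaded stage follows directly from the constant quiescent current, as a small sketch shows (the supply and load values are arbitrary; the ratio is what matters):

```python
def class_a_efficiency(swing):
    """Efficiency of an idealized single-ended, resistively loaded class-A
    stage: supply vcc, quiescent point at mid-rail, so a quiescent current
    of vcc/(2*r) flows whether or not a signal is present.  swing is the
    output sine amplitude as a fraction of the maximum (vcc/2)."""
    vcc, r = 10.0, 8.0                 # illustrative values
    amp = swing * vcc / 2
    p_load = amp ** 2 / (2 * r)        # average power delivered to the load
    p_supply = vcc * (vcc / (2 * r))   # constant draw from the supply
    return p_load / p_supply

print(class_a_efficiency(1.0))   # full swing: 0.25 -- the textbook 25%
print(class_a_efficiency(0.5))   # efficiency falls as the square of level
```

Because the supply draw is fixed, efficiency drops quadratically at lower signal levels, which is why the heat problem is worst in high-power class-A designs.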
Class-A power amplifier designs have largely been superseded by more efficient designs, though they remain popular with some hobbyists, mostly for their simplicity. There is a market for expensive high fidelity class-A amps considered a "cult item" among audiophiles mainly for their absence of crossover distortion and reduced odd-harmonic and high-order harmonic distortion.
Single-ended and triode class-A amplifiers
Some hobbyists who prefer class-A amplifiers also prefer the use of thermionic valve (or "tube") designs instead of transistors, for several reasons:
- Single-ended output stages have an asymmetrical transfer function, meaning that even-order harmonics in the created distortion tend to not cancel out (as they do in push–pull output stages). For tubes, or FETs, most distortion is second-order harmonics, from the square law transfer characteristic, which to some produces a "warmer" and more pleasant sound.
- For those who prefer low distortion figures, the use of tubes with class A (generating little odd-harmonic distortion, as mentioned above) together with symmetrical circuits (such as push–pull output stages, or balanced low-level stages) results in the cancellation of most of the even distortion harmonics, hence the removal of most of the distortion.
- Historically, valve amplifiers often used a class-A power amplifier simply because valves are large and expensive; many class-A designs use only a single device.
Transistors are much cheaper, and so more elaborate designs that give greater efficiency but use more parts are still cost-effective. A classic application for a pair of class-A devices is the long-tailed pair, which is exceptionally linear, and forms the basis of many more complex circuits, including many audio amplifiers and almost all op-amps.
Class-A amplifiers are often used in output stages of high quality op-amps (although the accuracy of the bias in low cost op-amps such as the 741 may result in class A or class AB or class B performance, varying from device to device or with temperature). They are sometimes used as medium-power, low-efficiency, and high-cost audio power amplifiers. The power consumption is unrelated to the output power. At idle (no input), the power consumption is essentially the same as at high output volume. The result is low efficiency and high heat dissipation.
Class-B amplifiers only amplify half of the input wave cycle, thus creating a large amount of distortion, but their efficiency is greatly improved and is much better than class A. Class-B amplifiers are also favoured in battery-operated devices, such as transistor radios. Class B has a maximum theoretical efficiency of π/4 (≈ 78.5%). This is because the amplifying element is switched off altogether half of the time, and so cannot dissipate power. A single class-B element is rarely found in practice, though it was used to drive the beeper loudspeaker in early IBM Personal Computers, and it can be used in RF power amplifiers where the distortion levels are less important. However, class C is more commonly used for this.
A practical circuit using class-B elements is the push–pull stage, such as the very simplified complementary pair arrangement shown below. Here, complementary or quasi-complementary devices are each used for amplifying the opposite halves of the input signal, which is then recombined at the output. This arrangement gives excellent efficiency, but can suffer from the drawback that there is a small mismatch in the cross-over region – at the "joins" between the two halves of the signal, as one output device has to take over supplying power exactly as the other finishes. This is called crossover distortion. An improvement is to bias the devices so they are not completely off when they are not in use. This approach is called class AB operation.
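The crossover mechanism and the effect of bias can be sketched numerically. The model below is deliberately crude — each output device is treated as off until its drive exceeds a fixed 0.6 V base–emitter drop, and `vbias` simply offsets the drive — but it shows the distortion trend from class B toward class AB:

```python
import math

def pushpull(x, vbe=0.6, vbias=0.0):
    """Crude complementary push-pull model: each half conducts only once
    its drive exceeds vbe; vbias pre-biases both devices toward conduction."""
    m = max(abs(x) + vbias - vbe, 0.0)
    return m if x >= 0 else -m

def thd(vbias, n=4096):
    """Total harmonic distortion (harmonics 2-9) of the stage's response
    to a unit sine, estimated with a direct DFT."""
    ys = [pushpull(math.sin(2 * math.pi * k / n), vbias=vbias)
          for k in range(n)]
    def amplitude(h):
        re = sum(y * math.cos(2 * math.pi * h * k / n) for k, y in enumerate(ys))
        im = sum(y * math.sin(2 * math.pi * h * k / n) for k, y in enumerate(ys))
        return math.hypot(re, im)
    fund = amplitude(1)
    return math.sqrt(sum(amplitude(h) ** 2 for h in range(2, 10))) / fund

print(round(thd(0.0), 3))   # unbiased class B: heavy crossover distortion
print(round(thd(0.6), 3))   # biased just to vbe: the dead zone disappears
```

In this toy model the distortion vanishes entirely once the bias equals the device drop; in real circuits the bias point is a compromise against quiescent dissipation and thermal runaway.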
Class B amplifiers offer higher efficiency than class A amplifiers using a single active device.
Class AB is widely considered a good compromise for amplifiers, since much of the time the music signal is quiet enough that the signal stays in the "class A" region, where it is amplified with good fidelity, and by definition if passing out of this region, is large enough that the distortion products typical of class B are relatively small. The crossover distortion can be reduced further by using negative feedback.
In class-AB operation, each device operates the same way as in class B over half the waveform, but also conducts a small amount on the other half. As a result, the region where both devices simultaneously are nearly off (the "dead zone") is reduced. The result is that when the waveforms from the two devices are combined, the crossover is greatly minimised or eliminated altogether. The exact choice of quiescent current (the standing current through both devices when there is no signal) makes a large difference to the level of distortion (and to the risk of thermal runaway, that may damage the devices). Often, bias voltage applied to set this quiescent current must be adjusted with the temperature of the output transistors. (For example, in the circuit at the beginning of the article, the diodes would be mounted physically close to the output transistors, and specified to have a matched temperature coefficient.) Another approach (often used with thermally tracking bias voltages) is to include small value resistors in series with the emitters.
Class AB sacrifices some efficiency over class B in favor of linearity, and is thus less efficient (typically below 78.5% for full-amplitude sinewaves in transistor amplifiers; much less is common in class-AB vacuum-tube amplifiers). It is typically much more efficient than class A.
Sometimes a numeral is added for vacuum-tube stages. If grid current is not permitted to flow, the class is AB1. If grid current is allowed to flow (adding more distortion, but giving slightly higher output power) the class is AB2.
Class-C amplifiers conduct less than 50% of the input signal and the distortion at the output is high, but high efficiencies (up to 90%) are possible. The usual application for class-C amplifiers is in RF transmitters operating at a single fixed carrier frequency, where the distortion is controlled by a tuned load on the amplifier. The input signal is used to switch the active device causing pulses of current to flow through a tuned circuit forming part of the load.
The class-C amplifier has two modes of operation: tuned and untuned. The diagram shows a waveform from a simple class-C circuit without the tuned load. This is called untuned operation, and the analysis of the waveforms shows the massive distortion that appears in the signal. When the proper load (e.g., an inductive-capacitive filter plus a load resistor) is used, two things happen. The first is that the output's bias level is clamped with the average output voltage equal to the supply voltage. This is why tuned operation is sometimes called a clamper. This restores the waveform to its proper shape, despite the amplifier having only a one-polarity supply. This is directly related to the second phenomenon: the waveform on the center frequency becomes less distorted. The residual distortion is dependent upon the bandwidth of the tuned load, with the center frequency seeing very little distortion, but greater attenuation the farther from the tuned frequency that the signal gets.
The tuned circuit resonates at one frequency, the fixed carrier frequency, and so the unwanted frequencies are suppressed, and the wanted full signal (sine wave) is extracted by the tuned load. The signal bandwidth of the amplifier is limited by the Q-factor of the tuned circuit but this is not a serious limitation. Any residual harmonics can be removed using a further filter.
In practical class-C amplifiers a tuned load is invariably used. In one common arrangement the resistor shown in the circuit above is replaced with a parallel-tuned circuit consisting of an inductor and capacitor in parallel, whose components are chosen to resonate at the frequency of the input signal. Power can be coupled to a load by transformer action with a secondary coil wound on the inductor. The average voltage at the drain is then equal to the supply voltage, and the signal voltage appearing across the tuned circuit varies from near zero to near twice the supply voltage during the RF cycle. The input circuit is biased so that the active element (e.g., transistor) conducts for only a fraction of the RF cycle, usually one third (120 degrees) or less.
The active element conducts only while the drain voltage is passing through its minimum. By this means, power dissipation in the active device is minimised, and efficiency increased. Ideally, the active element would pass only an instantaneous current pulse while the voltage across it is zero: it then dissipates no power and 100% efficiency is achieved. However practical devices have a limit to the peak current they can pass, and the pulse must therefore be widened, to around 120 degrees, to obtain a reasonable amount of power, and the efficiency is then 60–70%.
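The trade-off between conduction angle and efficiency has a standard closed-form expression for an ideal (lossless) tuned amplifier; the sketch below evaluates it. Note this gives the theoretical upper bound — the 60–70% figure quoted above already accounts for practical losses:

```python
import math

def ideal_efficiency(conduction_angle_deg):
    """Ideal (lossless) efficiency of a tuned amplifier whose device conducts
    for a total angle of `conduction_angle_deg` (= 2*theta) per RF cycle:
        eta = (2*theta - sin(2*theta)) / (4 * (sin(theta) - theta*cos(theta)))
    Gives 50% for class A (360 deg) and pi/4 ~ 78.5% for class B (180 deg)."""
    theta = math.radians(conduction_angle_deg) / 2.0
    num = 2.0 * theta - math.sin(2.0 * theta)
    den = 4.0 * (math.sin(theta) - theta * math.cos(theta))
    return num / den

for angle in (360, 180, 120):
    print(angle, f"{ideal_efficiency(angle):.3f}")
```

Narrowing the pulse toward zero drives the ideal efficiency toward 100%, at the cost of ever higher peak device current for the same output power.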
In the class-D amplifier the active devices (transistors) function as electronic switches instead of linear gain devices; they are either on or off. The analog signal is converted to a stream of pulses that represents the signal by pulse width modulation, pulse density modulation, delta-sigma modulation or a related modulation technique before being applied to the amplifier. The time average power value of the pulses is directly proportional to the analog signal, so after amplification the signal can be converted back to an analog signal by a passive low-pass filter. The purpose of the output filter is to smooth the pulse stream to an analog signal, removing the high frequency spectral components of the pulses. The frequency of the output pulses is typically ten or more times the highest frequency in the input signal to amplify, so that the filter can adequately reduce the unwanted harmonics and accurately reproduce the input.
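The reason the low-pass filter can recover the analog signal is that the time average of the two-level pulse train equals the input level. A minimal sketch (not from the article) demonstrating this for a constant input compared against a triangle carrier:

```python
def pwm(level, n=1000):
    """One carrier period of two-level PWM for a constant input level in [-1, 1].
    The input is compared against a triangle carrier; the output is +1 or -1."""
    out = []
    for i in range(n):
        frac = i / n
        tri = 2.0 * abs(2.0 * frac - 1.0) - 1.0   # triangle carrier in [-1, 1]
        out.append(1.0 if level > tri else -1.0)
    return out

def average(xs):
    return sum(xs) / len(xs)

# The time average of the pulse train tracks the input level (approximately),
# which is exactly what the output low-pass filter extracts.
for level in (-0.5, 0.0, 0.5):
    print(level, round(average(pwm(level)), 3))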
The main advantage of a class-D amplifier is power efficiency. Because the output pulses have a fixed amplitude, the switching elements (usually MOSFETs, but vacuum tubes, and at one time bipolar transistors, were used) are switched either completely on or completely off, rather than operated in linear mode. A MOSFET operates with the lowest resistance when fully on and thus (excluding when fully off) has the lowest power dissipation when in that condition. Compared to an equivalent class-AB device, a class-D amplifier's lower losses permit the use of a smaller heat sink for the MOSFETs while also reducing the amount of input power required, allowing for a lower-capacity power supply design. Therefore, class-D amplifiers are typically smaller than an equivalent class-AB amplifier.
Another advantage of the class-D amplifier is that it can operate from a digital signal source without requiring a digital-to-analog converter (DAC) to convert the signal to analog form first. If the signal source is in digital form, such as in a digital media player or computer sound card, the digital circuitry can convert the binary digital signal directly to a pulse width modulation signal that is applied to the amplifier, simplifying the circuitry considerably.
Class-D amplifiers are widely used to control motors, but are now also used as power amplifiers, with extra circuitry that converts the analogue signal to a much higher frequency pulse width modulated signal. Switching power supplies have even been modified into crude class-D amplifiers (though typically these only reproduce low frequencies with acceptable accuracy).
High quality class-D audio power amplifiers have now appeared on the market. These designs have been said to rival traditional AB amplifiers in terms of quality. An early use of class-D amplifiers was high-power subwoofer amplifiers in cars. Because subwoofers are generally limited to a bandwidth of no higher than 150 Hz, switching speed for the amplifier does not have to be as high as for a full range amplifier, allowing simpler designs. Class-D amplifiers for driving subwoofers are relatively inexpensive in comparison to class-AB amplifiers.
The letter D used to designate this amplifier class is simply the next letter after C and, although occasionally used as such, does not stand for digital. Class-D and class-E amplifiers are sometimes mistakenly described as "digital" because the output waveform superficially resembles a pulse-train of digital symbols, but a class-D amplifier merely converts an input waveform into a continuously pulse-width modulated analog signal. (A digital waveform would be pulse-code modulated.)
The class-E/F amplifier is a highly efficient switching power amplifier, typically used at such high frequencies that the switching time becomes comparable to the duty time. As in the class-D amplifier, the transistor is connected via a serial LC circuit to the load, and connected via a large L (inductor) to the supply voltage. The supply voltage is connected to ground via a large capacitor to prevent any RF signals leaking into the supply. The class-E amplifier adds a C (capacitor) between the transistor and ground and uses a defined L1 to connect to the supply voltage.
The following description ignores DC, which can be added easily afterwards. The above-mentioned C and L are in effect a parallel LC circuit to ground. When the transistor is on, it pushes current through the serial LC circuit into the load, and some current begins to flow to the parallel LC circuit to ground. Then the serial LC circuit swings back and compensates the current into the parallel LC circuit. At this point the current through the transistor is zero and it is switched off. Both LC circuits are now filled with energy in C and L0. The whole circuit performs a damped oscillation. The damping by the load has been adjusted so that some time later the energy from the Ls is gone into the load, but the energy in both C0 peaks at the original value to in turn restore the original voltage so that the voltage across the transistor is zero again and it can be switched on.
With load, frequency, and duty cycle (0.5) as given parameters and the constraint that the voltage is not only restored, but peaks at the original voltage, the four parameters (L, L0, C and C0) are determined. The class-E amplifier takes the finite on resistance into account and tries to make the current touch the bottom at zero. This means that the voltage and the current at the transistor are symmetric with respect to time. The Fourier transform allows an elegant formulation to generate the complicated LC networks and says that the first harmonic is passed into the load, all even harmonics are shorted and all higher odd harmonics are open.
Class E uses a significant amount of second-harmonic voltage. The second harmonic can be used to reduce the overlap with edges with finite sharpness. For this to work, energy on the second harmonic has to flow from the load into the transistor, and no source for this is visible in the circuit diagram. In reality, the impedance is mostly reactive and the only reason for it is that class E is a class F (see below) amplifier with a much simplified load network and thus has to deal with imperfections.
In many amateur simulations of class-E amplifiers, sharp current edges are assumed, nullifying the very motivation for class E, and measurements near the transit frequency of the transistors show very symmetric curves, which look very similar to class-F simulations.
The class-E amplifier was invented in 1972 by Nathan O. Sokal and Alan D. Sokal, and details were first published in 1975. Some earlier reports on this operating class have been published in Russian.
In push–pull amplifiers and in CMOS, the even harmonics of both transistors just cancel. Experiment shows that a square wave can be generated by those amplifiers. Theoretically square waves consist of odd harmonics only. In a class-D amplifier, the output filter blocks all harmonics; i.e., the harmonics see an open load. So even small currents in the harmonics suffice to generate a voltage square wave. The current is in phase with the voltage applied to the filter, but the voltage across the transistors is out of phase. Therefore, there is a minimal overlap between current through the transistors and voltage across the transistors. The sharper the edges, the lower the overlap.
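The claim that a square wave consists of odd harmonics only is easy to verify numerically. The sketch below (not from the article) computes the sine-series Fourier coefficients of an ideal ±1 square wave; odd harmonics come out as 4/(nπ) and even harmonics as zero:

```python
import math

def fourier_sine_coeff(n, samples=4096):
    """Numerical sine-series coefficient b_n of a +/-1 square wave over one period,
    using the midpoint rule: b_n = 2 * integral_0^1 f(t) * sin(2*pi*n*t) dt."""
    acc = 0.0
    for k in range(samples):
        t = (k + 0.5) / samples            # midpoints avoid the discontinuities
        square = 1.0 if t < 0.5 else -1.0
        acc += square * math.sin(2.0 * math.pi * n * t)
    return 2.0 * acc / samples

for n in range(1, 7):
    print(n, round(fourier_sine_coeff(n), 4))
```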
While in class D, transistors and the load exist as two separate modules, class F admits imperfections like the parasitics of the transistor and tries to optimise the global system to have a high impedance at the harmonics. Of course there must be a finite voltage across the transistor to push the current across the on-state resistance. Because the combined current through both transistors is mostly in the first harmonic, it looks like a sine. That means that in the middle of the square the maximum of current has to flow, so it may make sense to have a dip in the square or in other words to allow some overswing of the voltage square wave. A class-F load network by definition has to transmit below a cutoff frequency and reflect above.
Any frequency lying below the cutoff and having its second harmonic above the cutoff can be amplified; that is, an octave bandwidth. On the other hand, an inductive-capacitive series circuit with a large inductance and a tunable capacitance may be simpler to implement. By reducing the duty cycle below 0.5, the output amplitude can be modulated. The voltage square waveform degrades, but any overheating is compensated by the lower overall power flowing. Any load mismatch behind the filter can only act on the first-harmonic current waveform, so clearly only a purely resistive load makes sense; the lower the resistance, the higher the current.
Class F can be driven by a sine or by a square wave; for a sine, the input can be tuned by an inductor to increase gain. If class F is implemented with a single transistor, the filter is complicated because it must short the even harmonics. All previous designs use sharp edges to minimise the overlap.
Classes G and H
There is a variety of amplifier designs that enhance class-AB output stages with more efficient techniques to achieve greater efficiencies with low distortion. These designs are common in large audio amplifiers since the heatsinks and power transformers would be prohibitively large (and costly) without the efficiency increases. The terms "class G" and "class H" are used interchangeably to refer to different designs, varying in definition from one manufacturer or paper to another.
Class-G amplifiers (which use "rail switching" to decrease power consumption and increase efficiency) are more efficient than class-AB amplifiers. These amplifiers provide several power rails at different voltages and switch between them as the signal output approaches each level. Thus, the amplifier increases efficiency by reducing the wasted power at the output transistors. Class-G amplifiers are more efficient than class AB but less efficient than class D; however, they do not have the electromagnetic interference effects of class D.
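A rough sketch (not from the article) of why rail switching helps: the ideal class-B efficiency driving a sine is η = (π/4)·(Vpeak/Vrail), so during quiet passages a lower rail wastes far less power in the output transistors. The rail and signal voltages below are hypothetical:

```python
import math

def class_b_efficiency(v_peak, v_rail):
    """Ideal class-B efficiency for a sine of peak v_peak from supply rail v_rail:
    eta = (pi/4) * (v_peak / v_rail). Valid only for v_peak <= v_rail."""
    assert 0.0 < v_peak <= v_rail
    return (math.pi / 4.0) * (v_peak / v_rail)

# Hypothetical class-G arrangement: a 20 V rail for small signals, 50 V for peaks.
quiet_peak = 10.0  # peak output voltage during a quiet passage
print("single 50 V rail :", round(class_b_efficiency(quiet_peak, 50.0), 3))
print("switched 20 V rail:", round(class_b_efficiency(quiet_peak, 20.0), 3))
```

The efficiency for the same quiet signal more than doubles on the lower rail, which is the whole point of the class-G scheme.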
Class-H amplifiers take the idea of class G one step further, creating an infinitely variable supply rail. This is done by modulating the supply rails so that the rails are only a few volts larger than the output signal at any given time. The output stage operates at its maximum efficiency all the time. Switched-mode power supplies can be used to create the tracking rails. Significant efficiency gains can be achieved, but with the drawback of more complicated supply design and reduced THD performance. In common designs, a voltage drop of about 10 V is maintained over the output transistors. The picture above shows the positive supply voltage of the output stage and the voltage at the speaker output. The boost of the supply voltage is shown for a real music signal.
The voltage signal shown is thus a larger version of the input, but has been changed in sign (inverted) by the amplification. Other arrangements of the amplifying device are possible, but that given (that is, common emitter, common source or common cathode) is the easiest to understand and employ in practice. If the amplifying element is linear, the output is a faithful copy of the input, only larger and inverted. In practice, transistors are not linear, and the output only approximates the input. Nonlinearity from any of several sources is the origin of distortion within an amplifier. The class of amplifier (A, B, AB or C) depends on how the amplifying device is biased. The diagrams omit the bias circuits for clarity.
Any real amplifier is an imperfect realization of an ideal amplifier. An important limitation of a real amplifier is that the output it generates is ultimately limited by the power available from the power supply. An amplifier saturates and clips the output if the input signal becomes too large for the amplifier to reproduce or exceeds operational limits for the device.
The Doherty, a hybrid configuration, is currently receiving renewed attention. It was invented in 1934 by William H. Doherty for Bell Laboratories, whose sister company, Western Electric, manufactured radio transmitters. The Doherty amplifier consists of a class-B primary or carrier stage in parallel with a class-C auxiliary or peak stage. The input signal splits to drive the two amplifiers, and a combining network sums the two output signals. Phase-shifting networks are used on the inputs and outputs. During periods of low signal level, the class-B amplifier efficiently operates on the signal and the class-C amplifier is cut off and consumes little power. During periods of high signal level, the class-B amplifier delivers its maximum power and the class-C amplifier delivers up to its maximum power. The efficiency of previous AM transmitter designs was proportional to modulation but, with average modulation typically around 20%, transmitters were limited to less than 50% efficiency. In Doherty's design, even with zero modulation, a transmitter could achieve at least 60% efficiency.
As a successor to Western Electric for broadcast transmitters, the Doherty concept was considerably refined by Continental Electronics Manufacturing Company of Dallas, TX. Perhaps the ultimate refinement was the screen-grid modulation scheme invented by Joseph B. Sainton. The Sainton amplifier consists of a class-C primary or carrier stage in parallel with a class-C auxiliary or peak stage. The stages are split and combined through 90-degree phase shifting networks as in the Doherty amplifier. The unmodulated radio frequency carrier is applied to the control grids of both tubes. Carrier modulation is applied to the screen grids of both tubes. The bias point of the carrier and peak tubes is different, and is established such that the peak tube is cut off when modulation is absent (and the amplifier is producing rated unmodulated carrier power) whereas both tubes contribute twice the rated carrier power during 100% modulation (as four times the carrier power is required to achieve 100% modulation). As both tubes operate in class C, a significant improvement in efficiency is thereby achieved in the final stage. In addition, as the tetrode carrier and peak tubes require very little drive power, a significant improvement in efficiency within the driver stage is achieved as well (317C, et al.). The released version of the Sainton amplifier employs a cathode-follower modulator, not a push–pull modulator. Previous Continental Electronics designs, by James O. Weldon and others, retained most of the characteristics of the Doherty amplifier but added screen-grid modulation of the driver (317B, et al.).
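The "four times the carrier power" figure follows from basic AM arithmetic: with modulation index m, the envelope peaks at (1+m) times the carrier amplitude, so the peak envelope power is (1+m)² times the carrier power. A quick check using the standard formulas (not taken from the article):

```python
def peak_envelope_power(carrier_power, m):
    """Peak envelope power of an AM signal with modulation index m:
    the envelope peaks at (1 + m) times the carrier amplitude, so
    PEP = Pc * (1 + m)**2."""
    return carrier_power * (1.0 + m) ** 2

def average_power(carrier_power, m):
    """Average AM power (carrier plus both sidebands): Pavg = Pc * (1 + m**2 / 2)."""
    return carrier_power * (1.0 + m * m / 2.0)

print(peak_envelope_power(1.0, 1.0))   # 4x the carrier power at 100% modulation
print(average_power(1.0, 0.2))         # ~20% typical modulation adds very little
```

This also shows why the text says efficiency was "proportional to modulation": at the typical 20% average modulation, the sidebands add only 2% to the carrier power.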
The Doherty amplifier remains in use in very-high-power AM transmitters, but for lower-power AM transmitters, vacuum-tube amplifiers in general were eclipsed in the 1980s by arrays of solid-state amplifiers, which could be switched on and off with much finer granularity in response to the requirements of the input audio. However, interest in the Doherty configuration has been revived by cellular-telephone and wireless-Internet applications where the sum of several constant envelope users creates an aggregate AM result. The main challenge of the Doherty amplifier for digital transmission modes is in aligning the two stages and getting the class-C amplifier to turn on and off very quickly.
Recently, Doherty amplifiers have found widespread use in cellular base station transmitters for GHz frequencies. Implementations for transmitters in mobile devices have also been demonstrated.
Amplifiers are implemented using active elements of different kinds:
- The first active elements were relays. They were used, for example, in transcontinental telegraph lines: a weak current was used to switch the voltage of a battery to the outgoing line.
- For transmitting audio, carbon microphones were used as the active element. This was used to modulate a radio-frequency source in one of the first AM audio transmissions, by Reginald Fessenden on Dec. 24, 1906.
- Power control circuitry used magnetic amplifiers until the latter half of the twentieth century when high power FETs, and their easy interfacing to the newly developed digital circuitry, took over.
- Audio and most low power amplifiers used vacuum tubes exclusively until the 1960s. Today, tubes are used for specialist audio applications such as guitar amplifiers and audiophile amplifiers. Many broadcast transmitters still use vacuum tubes.
- In the 1960s, the transistor started to take over. These days, discrete transistors are still used in high-power amplifiers and in specialist audio devices.
- Beginning in the 1970s, more and more transistors were connected on a single chip therefore creating the integrated circuit. A large number of amplifiers commercially available today are based on integrated circuits.
For special purposes, other active elements have been used. For example, in the early days of satellite communication, parametric amplifiers were used. The core circuit was a diode whose capacitance was changed by an RF signal created locally. Under certain conditions, this RF signal provided energy that was modulated by the extremely weak satellite signal received at the earth station.
The practical amplifier circuit to the right could be the basis for a moderate-power audio amplifier. It features a typical (though substantially simplified) design as found in modern amplifiers, with a class-AB push–pull output stage, and uses some overall negative feedback. Bipolar transistors are shown, but this design would also be realizable with FETs or valves.
The input signal is coupled through capacitor C1 to the base of transistor Q1. The capacitor allows the AC signal to pass, but blocks the DC bias voltage established by resistors R1 and R2 so that any preceding circuit is not affected by it. Q1 and Q2 form a differential amplifier (an amplifier that multiplies the difference between two inputs by some constant), in an arrangement known as a long-tailed pair. This arrangement is used to conveniently allow the use of negative feedback, which is fed from the output to Q2 via R7 and R8.
The negative feedback into the difference amplifier allows the amplifier to compare the input to the actual output. The amplified signal from Q1 is directly fed to the second stage, Q3, which is a common emitter stage that provides further amplification of the signal and the DC bias for the output stages, Q4 and Q5. R6 provides the load for Q3 (a better design would probably use some form of active load here, such as a constant-current sink). So far, all of the amplifier is operating in class A. The output pair are arranged in class-AB push–pull, also called a complementary pair. They provide the majority of the current amplification (while consuming low quiescent current) and directly drive the load, connected via DC-blocking capacitor C2. The diodes D1 and D2 provide a small amount of constant voltage bias for the output pair, just biasing them into the conducting state so that crossover distortion is minimized. That is, the diodes push the output stage firmly into class-AB mode (assuming that the base-emitter drop of the output transistors is reduced by heat dissipation).
This design is simple, but a good basis for a practical design because it automatically stabilises its operating point, since feedback internally operates from DC up through the audio range and beyond. Further circuit elements would probably be found in a real design that would roll-off the frequency response above the needed range to prevent the possibility of unwanted oscillation. Also, the use of fixed diode bias as shown here can cause problems if the diodes are not both electrically and thermally matched to the output transistors – if the output transistors turn on too much, they can easily overheat and destroy themselves, as the full current from the power supply is not limited at this stage.
A common solution to help stabilise the output devices is to include some emitter resistors, typically one ohm or so. Calculating the values of the circuit's resistors and capacitors is done based on the components employed and the intended use of the amp.
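As a toy illustration of such a calculation (the component values below are hypothetical, since the article gives none), the DC operating point established by a two-resistor base divider like R1/R2 can be estimated as follows:

```python
def divider_voltage(vcc, r1, r2):
    """DC voltage at the junction of a two-resistor divider from Vcc to ground,
    ignoring base current: V = Vcc * r2 / (r1 + r2)."""
    return vcc * r2 / (r1 + r2)

# Hypothetical values: a 30 V supply with a 100k / 22k divider.
vcc = 30.0
v_base = divider_voltage(vcc, 100e3, 22e3)
v_emitter = v_base - 0.65   # subtract one silicon base-emitter drop (~0.65 V)
print(round(v_base, 2), round(v_emitter, 2))
```

A real design would then check that the divider current is much larger than the base current, and pick emitter resistors to set the quiescent collector current.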
Two of the most common circuits:
- A Cascode amplifier is a two-stage circuit consisting of a transconductance amplifier followed by a buffer amplifier.
- A log amplifier is a linear circuit in which the output voltage is a constant times the natural logarithm of the input.
For the basics of radio frequency amplifiers using valves, see Valved RF amplifiers.
Notes on implementation
Real world amplifiers are imperfect.
- The power supply may influence the output, so must be considered in the design.
- A power amplifier is effectively an input signal controlled power regulator. It regulates the power sourced from the power supply or mains to the amplifier's load. The power output from a power amplifier cannot exceed the power input to it.
- The amplifier circuit has an "open loop" performance. This is described by various parameters (gain, slew rate, output impedance, distortion, bandwidth, signal to noise ratio, etc.).
- Many modern amplifiers use negative feedback techniques to hold the gain at the desired value and reduce distortion. Negative loop feedback has the intended effect of electrically damping loudspeaker motion, thereby damping the mechanical dynamic performance of the loudspeaker.
- When assessing rated amplifier power output, it is useful to consider the applied load, the signal type (e.g., speech or music), required power output duration (i.e., short-time or continuous), and required dynamic range (e.g., recorded or live audio).
- In high-powered audio applications that require long cables to the load (e.g., cinemas and shopping centres) it may be more efficient to connect to the load at line output voltage, with matching transformers at source and loads. This avoids long runs of heavy speaker cables.
- Preventing instability or overheating requires care to ensure solid state amplifiers are adequately loaded. Most have a rated minimum load impedance.
- A summing circuit is the typical way to combine many inputs or channels into a composite output.
- All amplifiers generate heat through electrical losses. The amplifier must dissipate this heat via convection or forced air cooling. Heat can damage or reduce electronic component service life. Designers and installers must also consider heating effects on adjacent equipment.
Different power supply types result in many different methods of bias. Bias is a technique by which active devices are set to operate in a particular region, or by which the DC component of the output signal is set to the midpoint between the maximum voltages available from the power supply. Most amplifiers use several devices at each stage; they are typically matched in specifications except for polarity. Matched inverted polarity devices are called complementary pairs. Class-A amplifiers generally use only one device, unless the power supply is set to provide both positive and negative voltages, in which case a dual device symmetrical design may be used. Class-C amplifiers, by definition, use a single polarity supply.
Amplifiers often have multiple stages in cascade to increase gain. Each stage of these designs may be a different type of amp to suit the needs of that stage. For instance, the first stage might be a class-A stage, feeding a class-AB push–pull second stage, which then drives a class-G final output stage, taking advantage of the strengths of each type, while minimizing their weaknesses.
- Charge transfer amplifier
- Distributed amplifier
- Faithful amplification
- Guitar amplifier
- Instrument amplifier
- Instrumentation amplifier
- Low noise amplifier
- Magnetic amplifier
- Negative feedback amplifier
- Operational amplifier
- Optical amplifier
- Power added efficiency
- Programmable gain amplifier
- RF power amplifier
- Valve audio amplifier
- Rane audio's guide to amplifier classes
- Design and analysis of a basic class D amplifier
- Conversion: distortion factor to distortion attenuation and THD
- An alternate topology called the grounded bridge amplifier - pdf
- Contains an explanation of different amplifier classes - pdf
- Reinventing the power amplifier - pdf
- Anatomy of the power amplifier, including information about classes
- Tons of Tones - Site explaining non linear distortion stages in Amplifier Models
- Class D audio amplifiers, white paper - pdf
- Class E Radio Transmitters - Tutorials, Schematics, Examples, and Construction Details
When you look at an image of Mercury, it looks like a dry, airless world. But you might be surprised to know that Mercury does have an atmosphere. Not the kind of atmosphere that we have here on Earth, or even the thin atmosphere that surrounds Mars. But Mercury's atmosphere is currently being studied by scientists, and the newly arrived MESSENGER spacecraft.
Mercury’s original atmosphere dissipated shortly after the planet formed 4.6 billion years ago with the rest of the Solar System. This was because of Mercury’s lower gravity, and because it’s so close to the Sun and receives the constant buffeting from its solar wind. Its current atmosphere is almost negligible.
What is Mercury’s atmosphere made of? It has a tenuous atmosphere made up of hydrogen, helium, oxygen, sodium, calcium, potassium and water vapor. Astronomers think this current atmosphere is constantly being replenished by a variety of sources: particles of the Sun’s solar wind, volcanic outgassing, radioactive decay of elements on Mercury’s surface and the dust and debris kicked up by micrometeorites constantly buffeting its surface. Without these sources of replenishment, Mercury’s atmosphere would be carried away by the the solar wind relatively quickly.
In 2008, NASA’s MESSENGER spacecraft discovered water vapor in Mercury’s atmosphere. It’s thought that this water is created when hydrogen and oxygen atoms meet in the atmosphere.
Two of those components are possible indicators of life as we know it: methane and water vapor (indirectly). Water or water ice is believed to be a necessary component for life. The presence of water vapor in the atmosphere of Mercury indicates that there is water or water ice somewhere on the planet. Evidence of water ice has been found at the poles, where the bottoms of craters are never exposed to light. Sometimes, methane is a byproduct of waste from living organisms. The methane in Mercury's atmosphere is believed to come from volcanism, geothermal processes, and hydrothermal activity. Methane is an unstable gas and requires a constant and very active source, because studies have shown that it is destroyed in less than one Earth year. It is thought that it originates from peroxides and perchlorates in the soil or that it condenses and evaporates seasonally from clathrates.
Despite how small the Mercurian atmosphere is, it has been broken down into four components by NASA scientists: the lower, middle, and upper atmosphere, and the exosphere. The lower atmosphere is a warm region (around 210 K). It is warmed by the combination of airborne dust (1.5 micrometers in diameter) and heat radiated from the surface. This airborne dust gives the planet its ruddy brown appearance. The middle atmosphere contains a jetstream like Earth's. The upper atmosphere is heated by the solar wind and the temperatures are much higher than at the surface. The higher temperatures separate the gases. The exosphere starts at about 200 km and has no clear end; it just tapers off into space. While that may sound like a lot of atmosphere separating the planet from the solar wind and ultraviolet radiation, it is not.
Helping Mercury hold on to its atmosphere is its magnetic field. While gravity helps hold the gases to the surface, the magnetic field helps to deflect the solar wind around the planet, much like it does here on Earth. This deflection allows a smaller gravitational pull to hold some form of an atmosphere.
The atmosphere of Mercury is one of the most tenuous in the Solar System. The solar wind still blows much of it away, so sources on the planet are constantly replenishing it. Hopefully, the MESSENGER spacecraft will help to discover those sources and increase our knowledge of the innermost planet.
We have written many articles about Mercury’s atmosphere for Universe Today. Here’s an article about how magnetic tornadoes might regenerate Mercury’s atmosphere, and here’s an article about the climate of Mercury.
We have also recorded an entire episode of Astronomy Cast all about atmospheres. Listen here, Episode 151: Atmospheres. | http://www.universetoday.com/22088/atmosphere-of-mercury/ |
In an ellipse the sum of the focal distances is constant; and in an hyperbola the difference of the focal distances is constant.
An oval is never mistaken for a circle, nor an hyperbola for an ellipsis.
But after you have demonstrated to him the properties of the hyperbola and its asymptote, the apparent absurdity vanishes.
The curve is in this case called an hyperbola (see fig. 20).
In the hyperbola we have the mathematical demonstration of the error of an axiom.
Two of the sides of the triangle in this proposition constitute a special form of the hyperbola.
These curves—the ellipse, the parabola, hyperbola—play a large part in the subsequent history of astronomy and mechanics.
The axes of an hyperbola bisect the angles between the asymptotes.
If the cone is cut off vertically on the dotted line, A, the curve is a hyperbola.
With a certain speed it will assume the parabola, and with a greater the hyperbola.
1660s, from Latinized form of Greek hyperbole "extravagance," literally "a throwing beyond" (see hyperbole). Perhaps so called because the inclination of the plane to the base of the cone exceeds that of the side of the cone. | http://dictionary.reference.com/browse/hyperbola |
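The focal-distance and asymptote properties mentioned in the quotations above can be written out compactly; the following summary in standard notation is added for reference and is not taken from the dictionary entry:

```latex
% Hyperbola in standard position, with semi-axes a and b:
\[
  \frac{x^{2}}{a^{2}} - \frac{y^{2}}{b^{2}} = 1 ,
  \qquad
  \bigl|\, |PF_{1}| - |PF_{2}| \,\bigr| = 2a ,
  \qquad
  \text{asymptotes: } y = \pm \frac{b}{a}\, x .
\]
```

The middle relation is the constant difference of focal distances quoted in the first example sentence, and the asymptote equations show why the axes of a hyperbola bisect the angles between its asymptotes.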
A hydrothermal vent is a fissure in a planet's surface from which geothermally heated water issues. Hydrothermal vents are commonly found near volcanically active places, areas where tectonic plates are moving apart, ocean basins, and hotspots. Hydrothermal vents exist because the earth is both geologically active and has large amounts of water on its surface and within its crust. Common land types include hot springs, fumaroles and geysers. Under the sea, hydrothermal vents may form features called black smokers. Relative to the majority of the deep sea, the areas around submarine hydrothermal vents are biologically more productive, often hosting complex communities fueled by the chemicals dissolved in the vent fluids. Chemosynthetic bacteria and archaea form the base of the food chain, supporting diverse organisms, including giant tube worms, clams, limpets and shrimp. Active hydrothermal vents are believed to exist on Jupiter's moon Europa, and Saturn's moon Enceladus, and ancient hydrothermal vents have been speculated to exist on Mars.
Hydrothermal vents in the deep ocean typically form along the mid-ocean ridges, such as the East Pacific Rise and the Mid-Atlantic Ridge. These are locations where two tectonic plates are diverging and new crust is being formed.
The water that issues from seafloor hydrothermal vents consists mostly of sea water drawn into the hydrothermal system close to the volcanic edifice through faults and porous sediments or volcanic strata, plus some magmatic water released by the upwelling magma. In terrestrial hydrothermal systems, the majority of water circulated within the fumarole and geyser systems is meteoric water plus ground water that has percolated down into the thermal system from the surface, but it also commonly contains some portion of metamorphic water, magmatic water, and sedimentary formational brine that is released by the magma. The proportion of each varies from location to location.
In contrast to the approximately 2 °C ambient water temperature at these depths, water emerges from these vents at temperatures ranging from 60 to as high as 464 °C. Due to the high hydrostatic pressure at these depths, water may exist in either its liquid form or as a supercritical fluid at such temperatures. The critical point of (pure) water is 375 °C at a pressure of 218 atmospheres. However, introducing salinity into the fluid raises the critical point to higher temperatures and pressures. The critical point of seawater (3.2 wt. % NaCl) is 407 °C and 298.5 bars, corresponding to a depth of ~2960 m below sea level. Accordingly, if a hydrothermal fluid with a salinity of 3.2 wt. % NaCl vents above 407 °C and 298.5 bars, it is supercritical. Furthermore, the salinity of vent fluids has been shown to vary widely due to phase separation in the crust. The critical point for lower salinity fluids is at lower temperature and pressure conditions than that for seawater, but higher than that for pure water. For example, a vent fluid with a 2.24 wt. % NaCl salinity has the critical point at 400 °C and 280.5 bars. Thus, water emerging from the hottest parts of some hydrothermal vents can be a supercritical fluid, possessing physical properties between those of a gas and those of a liquid.
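The pressure-to-depth figures quoted above follow from the hydrostatic relation P = ρgh. A rough check is sketched below; the density constant and function name are illustrative assumptions, not values from the source:

```python
# Hydrostatic estimate of seawater depth for a given pressure:
#   P = rho * g * h   =>   h = P / (rho * g)
# RHO_SEAWATER is an assumed mean density; real ocean profiles vary.

RHO_SEAWATER = 1030.0  # kg/m^3 (assumption)
G = 9.81               # m/s^2

def depth_for_pressure(p_bar):
    """Approximate depth in metres for a pressure given in bars."""
    p_pa = p_bar * 1.0e5  # 1 bar = 100,000 Pa
    return p_pa / (RHO_SEAWATER * G)

# The 298.5 bar critical pressure of seawater corresponds to:
print(round(depth_for_pressure(298.5)))  # 2954 m with these constants
```

With these constants, 298.5 bars works out to roughly 2950 m, close to the ~2960 m figure in the text; the small difference comes from the assumed density and from neglecting compressibility.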
Examples of supercritical venting are found at several sites. Sister Peak (Comfortless Cove Hydrothermal Field, elevation -2996 m) vents low-salinity, phase-separated, vapor-type fluids. Sustained venting there was not found to be supercritical, but a brief injection at 464 °C was well above supercritical conditions. A nearby site, Turtle Pits, was found to vent low-salinity fluid at 407 °C, which is above the critical point of the fluid at that salinity. A vent site in the Cayman Trough named Beebe, which is the world's deepest known hydrothermal site at ~5000 m below sea level, has shown sustained supercritical venting at 401 °C and 2.3 wt% NaCl.
Although supercritical conditions have been observed at several sites, it is not yet known what significance, if any, supercritical venting has in terms of hydrothermal circulation, mineral deposit formation, geochemical fluxes or biological activity.
The initial stages of a vent chimney begin with the deposition of the mineral anhydrite. Sulfides of copper, iron, and zinc then precipitate in the chimney gaps, making it less porous over the course of time. Vent growths on the order of 30 cm per day have been recorded. An April 2007 exploration of the deep-sea vents off the coast of Fiji found those vents to be a significant source of dissolved iron.
Black smokers and white smokers
Some hydrothermal vents form roughly cylindrical chimney structures. These form from minerals that are dissolved in the vent fluid. When the superheated water contacts the near-freezing sea water, the minerals precipitate out to form particles which add to the height of the stacks. Some of these chimney structures can reach heights of 60 m. An example of such a towering vent was "Godzilla", a structure in the Pacific Ocean near Oregon that rose to 40 m before it fell over in 1996.
A black smoker or sea vent is a type of hydrothermal vent found on the seabed, typically in the abyssal and hadal zones. They appear as black, chimney-like structures that emit a cloud of black material. Black smokers typically emit particles with high levels of sulfur-bearing minerals, or sulfides. Black smokers are formed in fields hundreds of meters wide when superheated water from below Earth's crust comes through the ocean floor. This water is rich in dissolved minerals from the crust, most notably sulfides. When it comes in contact with cold ocean water, many minerals precipitate, forming a black, chimney-like structure around each vent. The deposited metal sulfides can become massive sulfide ore deposits in time.
Black smokers were first discovered in 1977 on the East Pacific Rise by scientists from Scripps Institution of Oceanography. They were observed using a deep submergence vehicle called ALVIN belonging to the Woods Hole Oceanographic Institution. Now, black smokers are known to exist in the Atlantic and Pacific Oceans, at an average depth of 2100 metres. The most northerly black smokers are a cluster of five named Loki's Castle, discovered in 2008 by scientists from the University of Bergen at 73°N, on the Mid-Atlantic Ridge between Greenland and Norway. These black smokers are of interest as they are in a more stable area of the Earth's crust, where tectonic forces are less and consequently fields of hydrothermal vents are less common. The world's deepest known black smokers are located in the Cayman Trough, 5,000 m (3.1 miles) below the ocean's surface.
White smoker vents emit lighter-hued minerals, such as those containing barium, calcium and silicon. These vents also tend to have lower temperature plumes.
Life has traditionally been seen as driven by energy from the sun, but deep-sea organisms have no access to sunlight, so they must depend on nutrients found in the dusty chemical deposits and hydrothermal fluids in which they live. Previously, benthic oceanographers assumed that vent organisms were dependent on marine snow, as deep-sea organisms are. This would leave them dependent on plant life and thus the sun. Some hydrothermal vent organisms do consume this "rain", but with only such a system, life forms would be very sparse. Compared to the surrounding sea floor, however, hydrothermal vent zones have a density of organisms 10,000 to 100,000 times greater.
Hydrothermal vent communities are able to sustain such vast amounts of life because vent organisms depend on chemosynthetic bacteria for food. The water from the hydrothermal vent is rich in dissolved minerals and supports a large population of chemoautotrophic bacteria. These bacteria use sulfur compounds, particularly hydrogen sulfide, a chemical highly toxic to most known organisms, to produce organic material through the process of chemosynthesis.
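The hydrogen sulfide–driven chemosynthesis described above is often summarized by a standard textbook net reaction; this is a common simplification added for illustration, not an equation taken from the article:

```latex
\[
  12\,\mathrm{H_2S} + 6\,\mathrm{CO_2}
  \;\longrightarrow\;
  \mathrm{C_6H_{12}O_6} + 6\,\mathrm{H_2O} + 12\,\mathrm{S}
\]
```

Here the oxidation of sulfide supplies the energy that, in photosynthesis, would come from sunlight, with glucose as the resulting organic product.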
The ecosystem so formed is reliant upon the continued existence of the hydrothermal vent field as the primary source of energy, which differs from most surface life on Earth, which is based on solar energy. However, although it is often said that these communities exist independently of the sun, some of the organisms are actually dependent upon oxygen produced by photosynthetic organisms, while others are anaerobic.
The chemosynthetic bacteria grow into a thick mat which attracts other organisms, such as amphipods and copepods, which graze upon the bacteria directly. Larger organisms, such as snails, shrimp, crabs, tube worms, fish (especially eelpout, cutthroat eel, ophidiiforms and Symphurus thermophilus), and octopi (notably Vulcanoctopus hydrothermalis), form a food chain of predator and prey relationships above the primary consumers. The main families of organisms found around seafloor vents are annelids, pogonophorans, gastropods, and crustaceans, with large bivalves, vestimentiferan worms, and "eyeless" shrimp making up the bulk of nonmicrobial organisms.
Siboglinid tube worms, which may grow to over 2 m (6.6 ft) tall in the largest species, often form an important part of the community around a hydrothermal vent. They have no mouth or digestive tract, and like parasitic worms, absorb nutrients produced by the bacteria in their tissues. About 285 billion bacteria are found per ounce of tubeworm tissue. Tubeworms have red plumes which contain hemoglobin. Hemoglobin combines with hydrogen sulfide and transfers it to the bacteria living inside the worm. In return, the bacteria nourish the worm with carbon compounds. Two of the species that inhabit a hydrothermal vent are Tevnia jerichonana and Riftia pachyptila. One discovered community, dubbed "Eel City", consists predominantly of the eel Dysommina rugosa. Though eels are not uncommon, invertebrates typically dominate hydrothermal vents. Eel City is located near Nafanua volcanic cone, American Samoa.
Other examples of the unique fauna which inhabit this ecosystem are the scaly-foot gastropod Crysomallon squamiferum, a species of snail with a foot reinforced by scales made of iron and organic materials, and the Pompeii worm Alvinella pompejana, which is capable of withstanding temperatures up to 80 °C (176 °F).
By 1993, more than 100 gastropod species were known to occur in hydrothermal vents. Over 300 new species have been discovered at hydrothermal vents, many of them "sister species" to others found in geographically separated vent areas. It has been proposed that before the North American plate overrode the mid-ocean ridge, there was a single biogeographic vent region found in the eastern Pacific. The subsequent barrier to travel began the evolutionary divergence of species in different locations. The examples of convergent evolution seen between distinct hydrothermal vents are seen as major support for the theory of natural selection and of evolution as a whole.
Although life is very sparse at these depths, black smokers are the centers of entire ecosystems. Sunlight is nonexistent, so many organisms – such as archaea and extremophiles – convert the heat, methane, and sulfur compounds provided by black smokers into energy through a process called chemosynthesis. More complex life forms, such as clams and tubeworms, feed on these organisms. The organisms at the base of the food chain also deposit minerals into the base of the black smoker, therefore completing the life cycle.
A species of phototrophic bacterium has been found living near a black smoker off the coast of Mexico at a depth of 2,500 m (8,200 ft). No sunlight penetrates that far into the waters. Instead, the bacteria, part of the Chlorobiaceae family, use the faint glow from the black smoker for photosynthesis. This is the first organism discovered in nature to exclusively use a light other than sunlight for photosynthesis.
New and unusual species are constantly being discovered in the neighborhood of black smokers. The Pompeii worm was found in the 1980s, and a scaly-foot gastropod in 2001 during an expedition to the Indian Ocean's Kairei hydrothermal vent field. The latter uses iron sulfides (pyrite and greigite) for the structure of its dermal sclerites (hardened body parts), instead of calcium carbonate. The extreme pressure of 2500 m of water (approximately 25 megapascals or 250 atmospheres) is thought to play a role in stabilizing iron sulfide for biological purposes. This armor plating probably serves as a defense against the venomous radula (teeth) of predatory snails in that community.
Although the discovery of hydrothermal vents is a relatively recent event in the history of science, the importance of this discovery has given rise to, and supported, new biological and bio-atmospheric theories.
The deep hot biosphere
At the beginning of his 1992 paper The Deep Hot Biosphere, Thomas Gold referred to ocean vents in support of his theory that the lower levels of the earth are rich in living biological material that finds its way to the surface. He further expanded his ideas in the book The Deep Hot Biosphere.
An article on abiogenic hydrocarbon production in the February 2008 issue of Science journal used data from experiments at the Lost City hydrothermal field to report how the abiotic synthesis of low molecular mass hydrocarbons from mantle derived carbon dioxide may occur in the presence of ultramafic rocks, water, and moderate amounts of heat.
Hydrothermal origin of life
Günter Wächtershäuser proposed the iron-sulfur world theory and suggested that life might have originated at hydrothermal vents. Wächtershäuser proposed that an early form of metabolism predated genetics. By metabolism he meant a cycle of chemical reactions that release energy in a form that can be harnessed by other processes.
It has been proposed that amino acid synthesis could have occurred deep in the Earth's crust and that these amino acids were subsequently shot up along with hydrothermal fluids into cooler waters, where lower temperatures and the presence of clay minerals would have fostered the formation of peptides and protocells. This is an attractive hypothesis because of the abundance of CH4 (methane) and NH3 (ammonia) present in hydrothermal vent regions, a condition that was not provided by the Earth's primitive atmosphere. A major limitation to this hypothesis is the lack of stability of organic molecules at high temperatures, but some have suggested that life would have originated outside of the zones of highest temperature. There are numerous species of extremophiles and other organisms currently living immediately around deep-sea vents, suggesting that this is indeed a possible scenario.
Experimental research and computing modeling indicate that the surfaces of mineral particles inside hydrothermal vents have similar catalytic properties to enzymes and are able to create simple organic molecules, such as methanol (CH3OH) and formic acid (HCO2H), out of the dissolved CO2 in the water.
In 1949, a deep water survey reported anomalously hot brines in the central portion of the Red Sea. Later work in the 1960s confirmed the presence of hot, 60 °C (140 °F), saline brines and associated metalliferous muds. The hot solutions were emanating from an active subseafloor rift. The highly saline character of the waters was not hospitable to living organisms. The brines and associated muds are currently under investigation as a source of mineable precious and base metals.
The chemosynthetic ecosystems surrounding submarine hydrothermal vents were discovered along the Galapagos Rift, a spur of the East Pacific Rise, in 1977 by a group of marine geologists led by Richard Von Herzen and Robert Ballard of Woods Hole Oceanographic Institution (WHOI) using the DSV Alvin, an ONR research submersible from WHOI. In 1979, a team of biologists led by J. Frederick Grassle, at the time at WHOI, returned to the same location to investigate the biological communities discovered two years earlier. In that same year, Peter Lonsdale published the first scientific paper on hydrothermal vent life.
In 2005, Neptune Resources NL, a mineral exploration company, applied for and was granted 35,000 km² of exploration rights over the Kermadec Arc in New Zealand's Exclusive Economic Zone to explore for seafloor massive sulfide deposits, a potential new source of lead-zinc-copper sulfides formed from modern hydrothermal vent fields. The discovery of a vent in the Pacific Ocean offshore of Costa Rica, named the Medusa hydrothermal vent field (after the serpent-haired Medusa of Greek mythology), was announced in April 2007. The Ashadze hydrothermal field (13°N on the Mid-Atlantic Ridge, elevation -4200 m) was the deepest known high-temperature hydrothermal field until 2010, when a hydrothermal plume emanating from the Beebe site (elevation -5000 m) was detected by a group of scientists from NASA Jet Propulsion Laboratory and Woods Hole Oceanographic Institute. This site is located on the 110 km long, ultraslow spreading Mid-Cayman Rise within the Cayman Trough. On February 21, 2013, the deepest known hydrothermal vents were discovered in the Caribbean at a depth of almost 5,000 metres (16,000 ft).
Hydrothermal vents tend to be distributed along the Earth's plate boundaries, although they may also be found at intra-plate locations such as hotspot volcanoes. As of 2009 there were approximately 500 known active submarine hydrothermal vent fields, with about half visually observed at the seafloor and the other half suspected from water column indicators and/or seafloor deposits. The InterRidge program office hosts a global database for the locations of known active submarine hydrothermal vent fields.
Hydrothermal vents, in some instances, have led to the formation of exploitable mineral resources via deposition of seafloor massive sulfide deposits. The Mount Isa orebody located in Queensland, Australia, is an excellent example. Many hydrothermal vents are rich in cobalt, gold, copper, and rare earth metals essential for electronic components.
Recently, mineral exploration companies, driven by the elevated price activity in the base metals sector during the mid-2000s, have turned their attention to extraction of mineral resources from hydrothermal fields on the seafloor. Significant cost reductions are, in theory, possible.
Two companies are currently engaged in the late stages of commencing to mine seafloor massive sulfides. Nautilus Minerals is in the advanced stages of commencing extraction from its Solwarra deposit, in the Bismarck Archipelago, and Neptune Minerals is at an earlier stage with its Rumble II West deposit, located on the Kermadec Arc, near the Kermadec Islands. Both companies are proposing using modified existing technology. Nautilus Minerals, in partnership with Placer Dome (now part of Barrick Gold), succeeded in 2006 in returning over 10 metric tons of mined SMS to the surface using modified drum cutters mounted on an ROV, a world first. Neptune Minerals in 2007 succeeded in recovering SMS sediment samples using a modified oil industry suction pump mounted on an ROV, also a world first.
Potential seafloor mining has environmental impacts including dust plumes from mining machinery affecting filter feeding organisms, collapsing or reopening vents, methane clathrate release, or even sub-oceanic land slides. A large amount of work is currently being engaged in by both the above-mentioned companies to ensure that potential environmental impacts of seafloor mining are well understood and control measures are implemented, before exploitation commences.
Attempts have been made in the past to exploit minerals from the seafloor. The 1960s and 70s saw a great deal of activity (and expenditure) in the recovery of manganese nodules from the abyssal plains, with varying degrees of success. This does demonstrate however that recovery of minerals from the seafloor is possible, and has been possible for some time. Interestingly, mining of manganese nodules served as a cover story for the elaborate attempt by the CIA to raise the sunken Soviet submarine K-129, using the Glomar Explorer, a ship purpose built for the task by Howard Hughes. The operation was known as Project Azorian, and the cover story of seafloor mining of manganese nodules may have served as the impetus to propel other companies to make the attempt.
The conservation of hydrothermal vents has been the subject of sometimes heated discussion in the Oceanographic Community for the last 20 years. It has been pointed out that it may be that those causing the most damage to these fairly rare habitats are scientists. There have been attempts to forge agreements over the behaviour of scientists investigating vent sites but although there is an agreed code of practice there is as yet no formal international and legally binding agreement.
- "Spacecraft Data Suggest Saturn Moon's Ocean May Harbor Hydrothermal Activity". NASA. 11 March 2015. Retrieved 12 March 2015.
- Paine, M. (15 May 2001). "Mars Explorers to Benefit from Australian Research". Space.com.
- Haase, K. M.; et al. (2007). "Young volcanism and related hydrothermal activity at 5°S on the slow-spreading southern Mid-Atlantic Ridge". Geochemistry Geophysics Geosystems 8 (11): Q11002. Bibcode:2007GGG.....811002H. doi:10.1029/2006GC001509.
- Haase, K. M.; et al. (2009). "Fluid compositions and mineralogy of precipitates from Mid Atlantic Ridge hydrothermal vents at 4°48'S". PANGAEA. doi:10.1594/PANGAEA.727454.
- Bischoff, James L; Rosenbauer, Robert J. "Liquid-vapor relations in the critical region of the system NaCl-H2O from 380 to 415°C: A refined determination of the critical point and two-phase boundary of seawater". Geochimica et Cosmochimica Acta 52 (8): 2121–2126. doi:10.1016/0016-7037(88)90192-5.
- Von Damm, K L. "Seafloor Hydrothermal Activity: Black Smoker Chemistry and Chimneys". Annual Review of Earth and Planetary Sciences 18 (1): 173–204. doi:10.1146/annurev.ea.18.050190.001133.
- Webber, A.P.; Murton, B.; Roberts, S.; Hodgkinson, M. "Supercritical Venting and VMS Formation at the Beebe Hydrothermal Field, Cayman Spreading Centre". Goldschmidt Conference Abstracts 2014. Geochemical Society. Retrieved 29 July 2014.
- Tivey, M. K. (1 December 1998). "How to Build a Black Smoker Chimney: The Formation of Mineral Deposits At Mid-Ocean Ridges". Woods Hole Oceanographic Institution. Retrieved 2006-07-07.
- "Tracking Ocean Iron". Chemical & Engineering News 86 (35): 62. 2008. doi:10.1021/cen-v086n003.p062.
- Perkins, S. (2001). "New type of hydrothermal vent looms large". Science News 160 (2): 21. doi:10.2307/4012715. JSTOR 4012715.
- Deborah S. Kelley. Black Smokers: Incubators on the Seafloor. p. 2. http://www.amnh.org/learn/pd/earth/pdf/black_smokers_incubators.pdf
- "Boiling Hot Water Found in Frigid Arctic Sea". LiveScience. 24 July 2008. Retrieved 2008-07-25.
- "Scientists Break Record By Finding Northernmost Hydrothermal Vent Field". Science Daily. 24 July 2008. Retrieved 2008-07-25.
- Cross, A. (12 April 2010). "World's deepest undersea vents discovered in Caribbean". BBC News. Retrieved 2010-04-13.
- "Extremes of Eel City". Astrobiology Magazine. 28 May 2008. Retrieved 2007-08-30.
- Sysoev, A. V.; Kantor, Yu. I. (1995). "Two new species of Phymorhynchus (Gastropoda, Conoidea, Conidae) from the hydrothermal vents" (PDF). Ruthenica 5: 17–26.
- Botos, S. "Life on a hydrothermal vent". Hydrothermal Vent Communities.
- Van Dover, C. L. "Hot Topics: Biogeography of deep-sea hydrothermal vent faunas". Woods Hole Oceanographic Institution.
- Beatty, J.T.; et al. (2005). "An obligately photosynthetic bacterial anaerobe from a deep-sea hydrothermal vent". Proceedings of the National Academy of Sciences 102 (26): 9306–10. Bibcode:2005PNAS..102.9306B. doi:10.1073/pnas.0503674102. PMC 1166624. PMID 15967984.
- Gold, T. (1992). "The Deep Hot Biosphere". Proceedings of National Academy of Sciences 89 (13): 6045–9. Bibcode:1992PNAS...89.6045G. doi:10.1073/pnas.89.13.6045. PMC 49434. PMID 1631089.
- Gold, T. (1999). The Deep Hot Biosphere. Springer. ISBN 0-387-95253-5.
- Proskurowski, G.; et al. (2008). "Abiogenic Hydrocarbon Production at Lost City Hydrothermal Field". Science 319 (5863): 604–7. doi:10.1126/science.1151194. PMID 18239121.
- Wächtershäuser, G. (1990). "Evolution of the First Metabolic Cycles" (PDF). Proceedings of National Academy of Sciences 87 (1): 200–4. Bibcode:1990PNAS...87..200W. doi:10.1073/pnas.87.1.200. PMC 53229. PMID 2296579.
- Tunnicliffe, V. (1991). "The Biology of Hydrothermal Vents: Ecology and Evolution". Oceanography and Marine Biology an Annual Review 29: 319–408.
- Chemistry of seabed’s hot vents could explain emergence of life. Astrobiology Magazine 27 April 2015.
- de Leeuw, Nora H.; et al. (2015). "Bio-inspired CO2 conversion by iron sulfide catalysts under sustainable conditions" (PDF). Chemical Communications 51: 7501–7504. doi:10.1039/C5CC02078F. 24 March 2015.
- Degens, E. T. (1969). Hot Brines and Recent Heavy Metal Deposits in the Red Sea. Springer-Verlag.
- "Dive and Discover: Expeditions to the Seafloor". www.divediscover.whoi.edu. Retrieved 2016-01-04.
- Lonsdale, P. (1977). "Clustering of suspension-feeding macrobenthos near abyssal hydrothermal vents at oceanic spreading centers". Deep Sea Research 24 (9): 857. Bibcode:1977DSR....24..857L. doi:10.1016/0146-6291(77)90478-7.
- "New undersea vent suggests snake-headed mythology" (Press release). EurekAlert!. 18 April 2007. Retrieved 2007-04-18.
- "Beebe". Interridge Vents Database.
- German, C. R.; et al. (2010). "Diverse styles of submarine venting on the ultraslow spreading Mid-Cayman Rise" (PDF). Proceedings of the National Academy of Sciences 107 (32): 14020–5. Bibcode:2010PNAS..10714020G. doi:10.1073/pnas.1009205107. PMC 2922602. PMID 20660317. Retrieved 2010-12-31. Lay summary – SciGuru (11 October 2010).
- "Deepest undersea vents discovered by UK team". BBC. 21 February 2013. Retrieved 21 February 2013.
- Broad, William J. (2016-01-12). "The 40,000-Mile Volcano". The New York Times. ISSN 0362-4331. Retrieved 2016-01-17.
- Beaulieu, S. E.; Baker, E. T.; German, C. R.; Maffei, A. R. (2013). "An authoritative global database for active submarine hydrothermal vent fields". Geochemistry Geophysics Geosystems 14: 4892–4905. doi:10.1002/2013GC004998.
- Perkins, W. G. (1984). "Mount Isa silica dolomite and copper orebodies; the result of a syntectonic hydrothermal alteration system". Economic Geology 79 (4): 601. doi:10.2113/gsecongeo.79.4.601.
- We Are About to Start Mining Hydrothermal Vents on the Ocean Floor. Nautilus; Brandon Keim. 12 September 2015.
- "The dawn of deep ocean mining". The All I Need. 2006.
- "Nautilus Outlines High Grade Au - Cu Seabed Sulphide Zone" (Press release). Nautilus Minerals. 25 May 2006.
- "Neptune Minerals". Retrieved August 2, 2012.
- Birney, K.; et al. "Potential Deep-Sea Mining of Seafloor Massive Sulfides: A case study in Papua New Guinea" (PDF). University of California, Santa Barbara, B.
- "Treasures from the deep". Chemistry World (Royal Society of Chemistry). January 2007.
- Devey, C.W.; Fisher, C.R.; Scott, S. (2007). "Responsible Science at Hydrothermal Vents" (PDF). Oceanography 20 (1): 162–72. doi:10.5670/oceanog.2007.90.
- Johnson, M. (2005). "Oceans need protection from scientists too". Nature 433 (7022): 105. Bibcode:2005Natur.433..105J. doi:10.1038/433105a. PMID 15650716.
- Johnson, M. (2005). "Deepsea vents should be world heritage sites". MPA News 6: 10.
- Tyler, P.; German, C.; Tunnicliff, V. (2005). "Biologists do not pose a threat to deep-sea vents". Nature 434 (7029): 18. Bibcode:2005Natur.434...18T. doi:10.1038/434018b. PMID 15744272.
- Van Dover CL, Humphris SE, Fornari D, Cavanaugh CM, Collier R, Goffredi SK, Hashimoto J, Lilley MD, Reysenbach AL, Shank TM, Von Damm KL, Banta A, Gallant RM, Gotz D, Green D, Hall J, Harmer TL, Hurtado LA, Johnson P, McKiness ZP, Meredith C, Olson E, Pan IL, Turnipseed M, Won Y, Young CR 3rd, Vrijenhoek RC (2001). "Biogeography and ecological setting of Indian Ocean hydrothermal vents". Science 294 (5543): 818–23. Bibcode:2001Sci...294..818V. doi:10.1126/science.1064574. PMID 11557843.
- Van Dover, Cindy Lee (2000). The Ecology of Deep-Sea Hydrothermal Vents. Princeton University Press. ISBN 0-691-04929-7.
- Beatty JT, Overmann J, Lince MT, Manske AK, Lang AS, Blankenship RE, Van Dover CL, Martinson TA, Plumley FG (2005). "An obligately photosynthetic bacterial anaerobe from a deep-sea hydrothermal vent". Proceedings of the National Academy of Sciences 102 (26): 9306–10. Bibcode:2005PNAS..102.9306B. doi:10.1073/pnas.0503674102. PMC 1166624. PMID 15967984.
- Glyn Ford and Jonathan Simnett, Silver from the Sea, September/October 1982, Volume 33, Number 5, Saudi Aramco World. Accessed 17 October 2005.
- Ballard, Robert D., 2000, The Eternal Darkness, Princeton University Press.
- Anaerobic respiration on tellurate and other metalloids in bacteria from hydrothermal vent fields in the eastern pacific ocean
- Andrea Koschinsky, Dieter Garbe-Schönberg, Sylvia Sander, Katja Schmidt, Hans-Hermann Gennerich and Harald Strauss (August 2008). "Hydrothermal venting at pressure-temperature conditions above the critical point of seawater, 5°S on the Mid-Atlantic Ridge". Geology 36 (8): 615–618. doi:10.1130/G24726A.1. Retrieved 18 June 2010.
- Catherine Brahic (4 August 2008). "Found: The hottest water on Earth". New Scientist. Retrieved 18 June 2010.
- Josh Hill (5 August 2008). "'Extreme Water' Found at Atlantic Ocean Abyss". The Daily Galaxy. Retrieved 18 June 2010.
- Ocean Explorer (www.oceanexplorer.noaa.gov) - Public outreach site for explorations sponsored by the Office of Ocean Exploration.
- Hydrothermal Vents Video - The Smithsonian Institution's Ocean Portal
- Vent geochemistry
- a good overview of hydrothermal vent biology, published in 2006 (PDF)
- Images of Hydrothermal Vents in Indian Ocean- Released by National Science Foundation
- How to Build a Hydrothermal Vent Chimney
- NOAA, Ocean Explorer YouTube Channel | https://en.wikipedia.org/wiki/Black_smoker |
How to identify parallel lines, a line parallel to a plane, and two parallel planes.
How to find the angle between planes, and how to determine if two planes are parallel or perpendicular.
How to compute the sum of two vectors or the product of a scalar and a vector.
How to write an equation for the coordinate planes or any plane that is parallel to one.
How to find a vector normal (perpendicular) to a plane given an equation for the plane.
Understanding the differences between vectors and scalar quantities.
How to form sentences with parallel structure.
How resistors in parallel affect current flow.
How capacitors in parallel affect current flow.
How to plot complex numbers on the complex plane.
How to determine whether two lines in space are parallel or perpendicular.
How to take the converse of the parallel lines theorem.
How to mark parallel lines, how to show lines are parallel, and how to compare skew and parallel lines.
How to find additive and multiplicative inverses.
How to describe and label point, line, and plane. How to define coplanar and collinear.
Vocabulary of multiples and least common multiples | https://www.brightstorm.com/tag/scalar-multiple-parallel-planes/ |
4.21875 | States of matter
Have you ever baked—or purchased—a loaf of bread, muffins or cupcakes and admired the fluffy final product? If so, you have appreciated the work of expanding gases! They are everywhere—from the kitchen to the cosmos. You’ve sampled their pleasures every time you’ve eaten a slice of bread, bitten into a cookie or sipped a soda. In this science activity you’ll capture a gas in a stretchy container you’re probably pretty familiar with—a balloon! This will let you observe how gases expand and contract as the temperature changes.
Everything in the world around you is made up of matter, including an inflated balloon and what’s inside of it. Matter comes in four different forms, known as states, which go (generally) from lowest to highest energy. They are solids, liquids, gases and plasmas. Gases, such as the air or helium inside a balloon, take the shape of the containers they’re in. They spread out so that the space is filled up evenly with gas molecules. The gas molecules are not connected. They move in a straight line until they bounce into another gas molecule or hit the container’s wall, and then they rebound and continue in another direction until they hit something else. The average motion energy of the gas molecules in a container is called their average kinetic energy.
This average kinetic (motional) energy changes in response to temperature. When gas molecules are warmed, their average kinetic energy also increases. This means they move faster and have more frequent and harder collisions inside of the balloon. When cooled, the kinetic energy of the gas molecules decreases, meaning they move more slowly and have less frequent and weaker collisions.
- Freezer with some empty space
- Two latex balloons that will inflate to approximately nine to 12 inches
- Piece of string, at least 20 inches long
- Permanent marker
- Cloth tape measure. (A regular tape measure or ruler can also work, but a cloth tape measure is preferable.)
- Scrap piece of paper and a pen or pencil
- Clock or timer
- A helper
- Make sure your freezer has enough space to easily fit an inflated balloon inside. The balloon should not be smushed or squeezed at all. If you need to move food to make space, be sure to get permission from anybody who stores food in the freezer. Also make sure to avoid any pointy objects or parts of the freezer.
- Blow up a balloon until it is mostly—but not completely—full. Then carefully tie it off with a knot. With your helper assisting you, measure the circumference of the widest part of the balloon using a cloth tape measure or a piece of string (and then measure the string against a tape measure). What is the balloon’s circumference?
- Inflate another balloon so it looks about the same size as the first balloon, but don’t tie it off yet. Pinch the opening closed between your thumb and finger so the air cannot escape. Have your helper measure the circumference of the balloon, then adjust the amount of air inside until it is within about half an inch or less (plus or minus) of the first balloon’s circumference (by blowing in more air, or letting a little escape). Then tie off the second balloon.
- Turn one of the balloons so you can look at the top of it. At the very top it should have a slightly darker spot. Using the permanent marker, carefully make a small spot in the center of the darker spot.
- Then take a cloth tape measure (or use a piece of string and a regular tape measure or ruler) and carefully make two small lines with the permanent marker at the top of the balloon that are two and one half inches away from one another, with the darker spot as the midpoint. To do this you can center the tape measure so that its one-and-one-quarter-inch mark is on the small spot you made and then make a line at the zero and two-and-one-half-inch points.
- Repeat this with the other balloon so that it also has lines that are two and one half inches apart on its top.
- Somewhere on one balloon write the number “1” and on the other balloon write the number “2.”
- Because it can be difficult to draw exact lines on a balloon with a thick permanent marker, now measure the exact distance between the two lines you drew on each balloon, measuring from the outside of both lines. (For example, the distance might be two and three eighths inches or two and five eighths inches.) Write this down for each balloon (with the balloon’s number) on a scrap piece of paper. Why do you think it’s important to be so exact when measuring the distances?
- Put balloon number 1 in the freezer in the area you cleared out for it. Leave it in the freezer for 45 minutes. Do not disturb it or open the freezer during this time. How do you think the size of the balloon will change from being in the freezer?
- During this time, leave balloon number 2 somewhere out at room temperature (not in direct sunlight or near a hot lamp).
- After balloon number 1 has been in the freezer for 45 minutes, bring your cloth tape measure (or piece of string and regular tape measure) to the freezer and, with the balloon still in the freezer (but with the freezer door open to let you access the balloon), quickly measure the distance between the two lines as you did before. Did the distance between the two lines change? If so, how did it change? What does this tell you about whether the size of the balloon changed? Why do you think this is?
- Then measure the distance between the two lines on balloon number 2, which stayed at room temperature. Did the distance between the two lines change? If so, how did it change? How did the balloon’s size change? Why do you think this is?
- Overall, how did the balloon change size when placed in the freezer? What do your results tell you about how gases expand and contract as temperature changes?
- Extra: After taking balloon number 1 out of the freezer leave it at room temperature for at least 45 minutes to let it warm up. Then remeasure the distance between the lines. How has the balloon changed size after warming up, if it changed at all?
- Extra: Try this activity again but instead of putting balloon number 1 in the freezer, put it in a hot place for 45 minutes, such as outdoors on a hot day or inside a car on a warm day. (Just make sure the balloon is not in direct sunlight or near a hot lamp, as this can deflate the balloon by letting the gas escape.) Does the balloon change size when put in a hot place? If so, how?
- Extra: In this activity you used air from your lungs but other gases might behave differently. You could try this activity again but this time fill the balloons with helium. How does using helium affect how the balloon changes size when placed in a freezer?
Observations and results
Did balloon number 1, which was placed in the freezer, shrink a little compared with balloon number 2, which stayed at room temperature?
You should have seen that when you put the balloon in the freezer, the distance between the lines decreased a little, from about two and a half inches to two and a quarter (or by a quarter inch, about 10 percent). The balloon shrank! The distance between the lines on the balloon kept at room temperature should have pretty much stayed the same (or decreased very slightly), meaning that the balloon shouldn’t have really changed size. The frozen balloon shrank because the average kinetic energy of the gas molecules in a balloon decreases when the temperature decreases. This makes the molecules move more slowly and have less frequent and weaker collisions with the inside wall of the balloon, which causes the balloon to shrink a little. But if you let the frozen balloon warm up, you would find that it gets bigger again, as big as the balloon that you left at room temperature the whole time. This is because the average kinetic energy would increase due to the warmer temperature, making the molecules move faster and hit the inside of the balloon harder and more frequently again.
More to explore
Looking for a Gas, from Rader’s Chem4Kids.com
Gases around Us, from BBC
Balloon Morphing: How Gases Contract and Expand, from Science Buddies
Racing to Win That Checkered Flag: How Do Gases Help?, from Science Buddies
This activity brought to you in partnership with Science Buddies | http://www.scientificamerican.com/article/size-changing-science-how-gases-contract-and-expand/?mobileFormat=true |
4.25 | What will be the fate of our moon? Will it remain in a stable orbit, crash back into Earth or drift off into space?
The Moon is gradually receding from the Earth, at a rate of about 4 cm per year. This is caused by a transfer of Earth's rotational momentum to the Moon's orbital momentum as tidal friction slows the Earth's rotation. That increasing distance means a longer orbital period, or month, as well.
To picture what is happening, imagine yourself riding a bicycle on a track built around a merry-go-round. You are riding in the same direction that it is turning. If you had a lasso and roped one of the horses, you would gain speed and the merry-go-round would lose some. In this analogy, you and your bike represent the Moon, the merry-go-round is the rotating Earth, and your lasso is gravity. In orbital mechanics, a gain in speed results in a higher orbit.
The slowing rotation of the Earth results in a longer day as well as a longer month. Once the length of a day equals the length of a month, the tidal friction mechanism will cease. (That is, once your speed on the track matches the speed of the horses, you can't gain any more speed with your lasso trick.) That's been projected to happen once the day and month both equal about 47 (current) days, billions of years in the future. If the Earth and Moon still exist, the distance will have increased to about 135% of its current value.
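These figures can be sanity-checked with a short back-of-envelope calculation. The sketch below naively holds today's 4 cm/yr recession rate constant, which is not physically accurate (the rate itself changes as the Moon recedes and tidal dissipation evolves), so the results are order-of-magnitude only. The 384,400 km mean distance and 27.3-day sidereal month used here are standard values, not taken from the answer above:

```python
# Back-of-envelope check on the Moon's recession, naively assuming the
# present 4 cm/yr rate stays constant forever (it won't).

a_now_km = 384_400            # mean Earth-Moon distance today (standard value)
rate_km_per_yr = 4e-5         # 4 cm/yr converted to km/yr

a_final_km = 1.35 * a_now_km  # ~135% of the current distance, as quoted above
years = (a_final_km - a_now_km) / rate_km_per_yr
print(f"time to reach 135%: {years:.2e} yr")  # prints 3.36e+09 yr

# Kepler's third law: orbital period scales as distance**1.5
month_now_days = 27.3                          # sidereal month (standard value)
month_final_days = month_now_days * 1.35 ** 1.5
print(f"month at 135%: {month_final_days:.0f} days")  # prints 43 days
```

Billions of years, and a month of a few tens of days, both agree with the answer; the quoted ~47-day figure comes from detailed tidal models that also track Earth's slowing spin, so the simple Kepler scaling landing near 43 days is only a consistency check.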
Paul Walorski, B.A., Part-time Physics Instructor
'All of us are truly and literally a little bit of stardust.'
4.25 | In a vast disc of gas and dust particles circling a young star, scientists have found evidence of a hypothesized but never-seen dust trap that may solve the mystery of how planets form.
We know planets that orbit stars are abundant throughout our galaxy, and likely throughout the universe as well, but until recently, scientists weren't exactly sure how those planets came to be.
The working theory is that they grew over time as tiny bits of dust collided and stuck together -- eventually forming comets, rocky planets and the cores of gaseous planets over millions of years.
But there is a problem with that theory: Once these tiny bits of dust grow to the size of pebbles or boulders, they are likely to either smash into one another and break apart, or spiral toward their central star where their growth is inhibited.
Theoretical astronomers had hypothesized that the flat discs of dust and gas that often surround young stars might occasionally contain dust traps -- an area in the disc where the gas is more dense and can create a barrier that keeps more substantial bits of dust from falling toward the star.
And then, quite by accident, a team of scientists found evidence of one in a disc around the massive young star Oph IRS 48, about 400 light-years from Earth.
The team, led by Nienke van der Marel, a doctoral student at Leiden Observatory in the Netherlands, was using ALMA, an array of radio telescopes in Chile, to observe just the gas in the disc. But ALMA also gave them data about the dust in the ring, "for free," Van Der Marel said.
When the researchers looked at the dust data, they were confused: They had expected to see dust particles distributed evenly throughout the disc, but the images they saw showed the dust clumped on about one-third of the disc in the shape of a cashew.
"The first time we saw the image of the dust, we thought there must be something wrong with the data," Van Der Marel told the Los Angeles Times. "But we had a really high, clear signal so it was clear it wasn't a mistake. Then we started looking into possibilities that could explain the separation between the gas and the dust."
It turns out that they had just made the first observation of a dust trap.
This dust trap was caused by a large gaseous planet or perhaps a small star that is also circling the central star, Van Der Marel said. She and her colleagues had observed that there was a hole in the gas disc, likely filled by one of these two types of bodies (but they are not sure which one).
The gas around this hole is more dense than the rest of the disc, and can keep the dust particles that get stuck behind it from falling toward the central star.
Van Der Marel said this particular dust trap is not likely to create planets because of its location in the disc, but it could create comets as large as 0.6 miles across. Van Der Marel describes it as a "comet factory."
Next, Van Der Marel said she planned to use ALMA to look for dust traps that are closer to their stars, where planet formation is more likely. A study describing the dust trap was published in Science this week.
See below for video of how scientists envision the dust trap works. | http://www.latimes.com/science/sciencenow/la-sci-sn-comet-factory-dust-trap-20130606-story.html |
4.4375 | About 13,000 years ago, the Earth was plunged into what is called the 'Big Freeze' — or more formally known as the 'Younger Dryas stadial' — where the planet's climate cooled significantly, ushering in a new glacial period that lasted for about 1,300 years.
It has been well-established that the Big Freeze was caused when a lake of meltwater sitting on the Laurentide Ice Sheet — a 2-3 km thick sheet of ice that covered most of the land-mass of what is now Canada and parts of the U.S. Midwest — broke through an ice dam and rushed into the north Atlantic. This massive influx of frigid fresh water into the ocean disrupted the global circulation of heat and salt content in the oceans, and quickly altered the Earth's climate.
"This episode was the last time the Earth underwent a major cooling, so understanding exactly what caused it is very important for understanding how our modern-day climate might change in the future," says Alan Condron, a physical oceanographer with the University of Massachusetts Amherst's Climate System Research Center, according to Science Daily.
There has been some debate over the years as to the path of this fresh water, though. The most commonly used hypothesis, first proposed by Wallace Broecker of Columbia University in 1989, was that the water flowed down the St. Lawrence River into the north Atlantic. Others suggested that it took a route down the Mackenzie River basin and into the Arctic Ocean. Now, Condron and research partner Peter Winsor, from the University of Alaska Fairbanks, have developed a new high-resolution ocean-ice circulation computer model that shows strong support for the latter idea.
This computer model, the most powerful so far created, runs on a supercomputer at the National Energy Research Scientific Computing Center in Berkeley, California. "With this higher resolution modeling, our ability to capture narrow ocean currents dramatically improves our understanding of where the fresh water may be going," said Condron and Winsor.
"The results we obtain are only possible by using a much higher computational power available with faster computers. Older models weren't powerful enough to model the different pathways because they contained too few data points to capture smaller-scale, faster-moving coastal currents." added Condron.
Condron and Winsor's simulations showed that were the waters from the Laurentide ice sheet to have flowed down the St. Lawrence, they would have entered the Atlantic Ocean waters around 3000 kilometres too far south to have disrupted the ocean circulation enough to cause the Big Freeze. However, simulations showing the water flowing into the Arctic Ocean via the Mackenzie River basin showed that the currents in the Arctic Ocean would have transported the cold, fresh waters to exactly where they were needed to cause the event — the sub-polar Atlantic Ocean, off the coast of Greenland.
"Dumping water in the Arctic is a very efficient way to … cool the Northern Hemisphere," says W. Richard Peltier, according to Science News. Peltier is a professor of physics at the University of Toronto, and director of UofT's Centre for Global Change Science.
Although Broecker's hypothesis had wide support, there was a lack of physical evidence along the St. Lawrence River, however there is evidence in boulders and gravel along the Mackenzie River basin that supports the idea of a massive flood around the time of the Big Freeze.
"This whole thing now hangs together beautifully," said Peltier.
"Our results are particularly relevant for how we model the melting of the Greenland and Antarctic Ice sheets now and in the future," said Condron. "It is apparent from our results that climate scientists are artificially introducing fresh water into their models over large parts of the ocean that freshwater would never have reached. In addition, our work points to the Arctic as a primary trigger for climate change. This is especially relevant considering the rapid changes that have been occurring in this region in the last 10 years." | https://ca.news.yahoo.com/blogs/geekquinox/blame-canada-ancient-massive-1-300-big-freeze-191925701.html |
4.03125 | Earth’s average temperature has remained more or less steady since 2001, despite rising levels of atmospheric carbon dioxide and other greenhouse gases—a trend that has perplexed most climate scientists. A new study suggests that the missing heat has been temporarily stirred into the relatively shallow waters in the western Pacific by stronger-than-normal trade winds. Over the past 20 years or so, trade winds near the equator—which generally blow from east to west—have driven warm waters of the Pacific ahead of them, causing larger-than-normal volumes of cool, deep waters to rise to the surface along the western coasts of Central America and South America. (Cooler-than-average surface waters are depicted in shades of blue, image from late July and early August 2007.) Climate simulations suggest that that upwelling has generally cooled Earth’s climate, stifling about 0.1°C to 0.2°C in warming that would have occurred by 2012 if winds hadn’t been inordinately strong, the researchers reported online yesterday in Nature Climate Change. Both real-world observations and the team’s simulations reveal that the abnormally strong winds—driven by natural variation in a long-term climate cycle called the Interdecadal Pacific Oscillation—have, for the time being, carried the “missing” heat to intermediate depths of the western Pacific Ocean. Eventually, possibly by the end of this decade, the inevitable slackening of the trade winds will bring the energy back to the ocean’s surface to be released to the atmosphere, fueling rapid warming, the scientists contend. | http://www.sciencemag.org/news/2014/02/scienceshot-pacific-ocean-keeping-earth-cool-now?mobile_switch=mobile |
4.5625 | A classic rhyme, Simple Simon and the Pie-Man, introduces students to the concepts of consumer and producer. Students learn that consumers are the people who buy and use goods and services. Producers make the goods and provide the services. When producers are working, they often use goods and services provided by other producers. These goods and services are called resources. An interactive activity helps students distinguish between consumers and producers. In a second activity, the students match producers with the resources needed to provide goods and services. A suggested follow-up lesson is We are Consumers and Producers which examines how students and their families function as consumers and producers in their homes and communities.
Students will be able to distinguish between people who produce goods and people who provide services to a community.
This lesson will help students become good consumers and producers by taking turns buying and selling things in a classroom-created market. Students will establish prices for items and observe what happens during the sale of those items.
The following lessons come from the Council for Economic Education's library of publications. Clicking the publication title or image will take you to the Council for Economic Education Store for more detailed information.
This publication contains 16 stories that complement the K-2 Student Storybook. Specific to grades K-2 are a variety of activities, including making coins out of salt dough or cookie dough; a song that teaches students about opportunity cost and decisions; and a game in which students learn the importance of savings.
9 out of 18 lessons from this publication relate to this EconEdLink lesson.
Designed primarily for elementary and middle school students, each of the 15 lessons in this guide introduces an economics concept through activities with modeling clay.
1 out of 17 lessons from this publication relate to this EconEdLink lesson.
This interdisciplinary curriculum guide helps teachers introduce their students to economics using popular children's stories.
1 out of 13 lessons from this publication relate to this EconEdLink lesson. | http://www.econedlink.org/economic-standards/EconEdLink-related-publications.php?lid=457 |
4.125 | A double standard is the application of different sets of principles for similar situations. A double standard may take the form of an instance in which certain concepts (often, for example, a word, phrase, social norm, or rule) are perceived as acceptable to be applied by one group of people, but are considered unacceptable—taboo—when applied by another group.
The concept of a double standard has long been applied (as early as 1872) to the fact that different moral structures are often applied to men and women in society. For example, a man who goes out to bars and picks up a different woman to have sex with every night for two weeks will probably be considered "macho", a "stud", or a "ladies' man", all positive terms; a woman who went home with 14 different men in the same two-week period typically would be labeled a "slut" or a "whore", both pejorative. Conversely, if a man cries, he is commonly seen as "weak" or "pathetic", a negative connotation, but if a woman cries, she is commonly seen as "innocent" and "sensitive", a compassionate connotation.
A double standard can therefore be described as a biased or morally unfair application of the principle that all are equal in their freedoms. Such double standards are seen as unjustified because they violate a basic maxim of modern legal jurisprudence: that all parties should stand equal before the law. Double standards also violate the principle of justice known as impartiality, which is based on the assumption that the same standards should be applied to all people, without regard to subjective bias or favoritism based on social class, rank, ethnicity, gender, religion, sexual orientation, age, or other distinctions. A double standard violates this principle by holding different people accountable according to different standards.
Policy of double standards
A "policy of double standards" describes a situation in international relations in which the assessment of the same phenomenon, process, or event depends on the relationship between the party making the judgment and the party being judged: substantively identical actions by one country receive support and justification, while those of another are condemned and punished.
A classic illustration of the policy of double standards is the phrase "One man's terrorist is another man's freedom fighter," brought into use by the British writer Gerald Seymour in his 1975 novel Harry's Game.
- "Double standard" Dictionary.com
- "Unjust Judgments on Subjects of Morality". The Ecclesiastical Observer (London: Arthur Hall and Co.) XXV: 167–170. April 1, 1872.
- Josephine E. Butler (Nov 27, 1886). "The Double Standard of Morality". Friends' Intelligencer and Journal (Philadelphia: Friends' Intelligencer Association). XLIII (48): 757–758.
- Satish Chandra Pandey. International Terrorism and the Contemporary World. Sarup & Sons, 2006. С. 17.
- Who said one man’s terrorist is another man’s revolutionary? | https://en.wikipedia.org/wiki/Double_standard |
4.0625 | Where did plants come from?
Plants' Adaptations for Life on Land
The first photosynthetic organisms were bacteria that lived in the water. So, where did plants come from? Evidence shows that plants evolved from freshwater green algae, a protist (Figure below). The similarity between green algae and plants is one piece of evidence. They both have cellulose in their cell walls, and they share many of the same chemicals that give them color. So what separates green algae from green plants?
The ancestor of plants is green algae. This picture shows a close up of algae on the beach.
There are four main ways that plants adapted to life on land and, as a result, became different from algae:
- In plants, the embryo develops inside of the female plant after fertilization. Algae do not keep the embryo inside of themselves but release it into water. This was the first feature to evolve that separated plants from green algae. This is also the only adaptation shared by all plants.
- Over time, plants had to evolve from living in water to living on land. In early plants, a waxy layer called a cuticle evolved to help seal water in the plant and prevent water loss. However, the cuticle also prevents gases from entering and leaving the plant easily. Recall that the exchange of gasses—taking in carbon dioxide and releasing oxygen—occurs during photosynthesis.
- To allow the plant to retain water and exchange gases, small pores (holes) in the leaves called stomata also evolved (Figure below). The stomata can open and close depending on weather conditions. When it's hot and dry, the stomata close to keep water inside of the plant. When the weather cools down, the stomata can open again to let carbon dioxide in and oxygen out.
- A later adaption for life on land was the evolution of vascular tissue. Vascular tissue is specialized tissue that transports water, nutrients, and food in plants. In algae, vascular tissue is not necessary since the entire body is in contact with the water, and the water simply enters the algae. But on land, water may only be found deep in the ground. Vascular tissues take water and nutrients from the ground up into the plant, while also taking food down from the leaves into the rest of the plant. The two vascular tissues are xylem and phloem. Xylem is responsible for the transport of water and nutrients from the roots to the rest of the plant. Phloem carries the sugars made in the leaves to the parts of the plant where they are needed.
Stomata are pores in leaves that allow gasses to pass through, but they can be closed to conserve water.
- Plants evolved from freshwater green algae.
- Plants have evolved several adaptations to life on land, including embryo retention, a cuticle, stomata, and vascular tissue.
Use the resources below to answer the questions that follow.
- The Role of Xylem Tissue and Stomata at http://www.youtube.com/watch?v=QBMkiLIyETc (3:34)
- The Phloem at http://www.youtube.com/watch?v=M4onP3_4ERU (3:03)
- In what groups of plants do you find xylem and phloem? Hint: refer to previous lesson if necessary.
- What are the main components of sap?
- Compare and contrast xylem and phloem.
- What does each transport?
- How are their structures similar?
- What is "transpirational pull"? How is it key to the functioning of xylem?
- How are plants different from green algae? How are they the same?
- What is the purpose of vascular tissue?
- How do plants prevent excess water loss?
- Compare xylem to phloem.
- What is the role of stomata? | http://www.ck12.org/life-science/plants-adaptations-for-life-on-land-in-life-science/lesson/Plants-Adaptations-for-Life-on-Land/ |
4.0625 | The deep-water canyons, seamounts, and underwater mountain ranges in the coastal waters of New England are gaining recognition for their importance to the health of fish populations like the struggling Atlantic cod. But these unique geological formations are also critical for the marine mammals that call the North Atlantic home.
Hail the Whales
The Atlantic coast is a veritable highway for migrating whales, which travel from breeding grounds in the south to feeding grounds in the north each year. But with many species facing reduced habitat, diminished populations, and increased boat traffic, this annual journey has become more and more difficult. These growing threats make areas of food abundance and shelter, such as Cashes Ledge and the New England Canyons and Seamounts, ever more critical to the success of migrating whales’ journeys.
Cashes Ledge and the canyons and seamounts are unique in the Atlantic because their topography creates ideal conditions for plankton, zooplankton, and copepods – the main food for migrating minke, right, and humpback whales – to thrive. They also serve as spawning grounds for larger food sources – including many squid, fish, and crustaceans. Altogether, this rich abundance of species adds up to a bountiful buffet for whales and other marine mammals.
Sperm whales have often been spotted in the waters of seamounts, taking advantage of the reliable food, and Cashes Ledge serves as an oasis for hungry whales on their journey north.
The healthy kelp domino effect
These areas are not only crucial to whales; other marine mammals depend on them as well. Cashes Ledge boasts the largest coldwater kelp forest on the Atlantic seaboard, a habitat that creates ideal spawning grounds for cod, herring, and hake. The abundance of fish in turn feeds seals and porpoises, as well as whales.
Scientists have noted a positive correlation between the size of an undersea kelp forest and populations of marine mammals, suggesting that more, healthy kelp means more marine mammals. That makes protecting areas with large kelp forests such as Cashes Ledge even more important.
Even marine mammals that don’t visit Cashes Ledge itself still benefit from the protection of the area’s kelp forest, thanks to the “spillover effect:” Fish spawned in the shelter of the rocky crevasses and havens of the kelp forests disperse beyond Cashes Ledge and feed sea animals throughout the Gulf of Maine.
Across the globe, underwater mountain and canyon habitats have proved to be important areas where marine mammals congregate to feed – and the canyons, seamounts, and ledges off the coast of New England are no different. Unfortunately, these important ecosystems are delicate and facing threats from harmful fishing gear and climate change.
With so much at stake, it is vital to protect these places – not only for their inherent ecological value, but also so that they may sustain the mammals that depend on them.
Imagine it’s 20 years from now, and your grandchild is about to head to bed – but first, she wants to hear a favorite bedtime story, “the one about the fish.” You pull it off the shelf – Mark Kurlansky’s The Cod’s Tale – and begin reading. Unbidden, her eyes widen at the vivid illustrations of the fish with a single chin whisker, at how it has millions of babies, and at how it gave birth to this country.
Every time you read her the story, she asks the same question: “Can we go catch a cod tomorrow?” Every time, you have to tell her there aren’t any more cod in New England. And, every time she asks: “Why?” But you never really have a good answer for her.
No Happy Ending in Sight for Cod

The crisis in New England’s cod fishery was once again on the agenda at the New England Fishery Management Council’s December meeting in Portland, Maine. And once again, managers failed to take the basic actions needed for a concerted effort to restore this iconic fish.
In addition to the collapse of the cod stock in the Gulf of Maine, New England is facing even greater declines of cod on Georges Bank, the historically important fishing area east of Cape Cod.
The outlook for cod keeps getting worse, and the “actions” taken by the Council are so unlikely to make a difference that we must continue our call to save cod.
The Worst of the Worst

Some recent analyses have concluded that the cod population on Georges Bank is the lowest ever recorded – roughly 1 percent of what scientists would consider a healthy population. Other estimates put the population at only about 3 to 5 percent of the healthy target. The cod stock in the Gulf of Maine is hovering for the second year in a row at roughly 3 percent of the targeted healthy population.
At its meeting last week, the Council did set new, lower catch limits for the severely depleted Georges Bank cod, but, true to form, those limits don’t go far enough. The Council is clearly in denial about the state of this fishery. If there is even a chance the number is 1 percent, this should be cause for major distress among Council members and fishermen alike.
The Council’s actions (or, really, lack of action) leave me wondering, again, whether any science would ever be “enough” to compel them to halt the fishing of cod entirely.
Habitat Loss Adds Fuel to the Fire

Astoundingly, the Council also decided earlier this year to strip protection for important cod habitat on Georges Bank – amounting to a loss of some 81 percent of the formerly protected cod habitat.
To recover, depleted fish populations need large areas protected from fishing and fishing gears; they need protected habitat where they can find food and shelter and reproduce; and they need large areas where female cod can grow old and reproduce prolifically. However, our fisheries managers – who are entrusted with safeguarding these precious resources for future generations as well as for current fishermen – ignore this science and continue to stubbornly deny the potential scope of this problem.
This is an especially irresponsible stance in light of climate change. Not only are New England’s cod struggling to recover from decades of overfishing and habitat degradation, now the rapid rise in the region’s sea temperatures is further stressing their productivity. Protected habitats help marine species survive ecological stresses like warming waters.
If a Cod Fish Dies But No One Records It, Did It Ever Really Exist?

As if matters couldn’t get worse, the Council also voted to cut back significantly on the number of observers that groundfishing boats would have to have on board to record what fish are actually coming up in their nets. This is little more than the Council’s blessing of unreported discards of cod and flounder and other depleted fish.
We should be protecting more of these areas, not fewer; we should be doing more for these iconic fish, not less. So why is the Council making it so much harder for cod to recover? Perhaps it is simply contrary to human nature to expect the Council’s fishermen members to impose harsh measures on themselves when the benefits may only be seen by future generations. Perhaps federal fishery councils comprising active fishermen only work well with healthy fisheries.
Federal officials at NOAA Fisheries will have the final say on these Council decisions to strip habitat protections, cut back on monitoring, and continue fishing on cod. We can only hope those officials will start taking the tough but necessary actions, giving New Englanders at least a semblance of hope that our grandchildren will be able to catch a codfish, not just read about one in a book.
The Gulf of Maine is warming fast — faster than almost any other ocean area in the world. To say this is alarming is an understatement, and action is needed today to permanently protect large areas of the ocean, which scientists say is one of the best buffers against the disastrous effects of climate change.
To that end, a diverse group of marine-oriented businesses, hundreds of marine scientists, aquaria, conservation organizations and members of the public are calling on the Obama administration to designate the Cashes Ledge Closed Area and the New England Coral Canyons and Seamounts as the first Marine National Monuments in the Atlantic.
Conservation Law Foundation has worked for years to permanently protect the remarkable Cashes Ledge area. This biodiversity hotspot provides refuge for a stunning array of ocean wildlife — from cod to endangered right whales, bluefin tuna to Atlantic wolffish — and a rare lush kelp forest. The New England canyons and seamounts similarly shelter an incredible breadth of sea life, including spectacular ancient coral formations. Public support is widespread and growing. In September, more than 600 people attended a sold-out event hosted by the New England Aquarium and National Geographic Society where scientists discussed why these places are unique natural treasures. More than 160,000 people have electronically petitioned the president for monument protection.
America has a long tradition of protecting our remarkable natural heritage and biological bounty. In contrast to our public lands and the Pacific Ocean, there are no areas in the Atlantic that are fully protected as national monuments. But why monument protection?
Unlike fishery management closed areas or national marine sanctuaries, national monument designation protects against all types of commercial extraction that are harmful and can damage critical habitat: fishing, oil and gas exploration, sand and gravel mining, and more.
Scientists say large-scale marine habitat protection is necessary to increase ocean resiliency in the face of climate change. Undisturbed underwater “laboratories” in places with relatively pristine habitats, like the Cashes Ledge area and the canyons and seamounts area, will be key in studying how — and how well — we are managing these already changing ocean ecosystems. These irreplaceable habitats can only play that role when protected in their entirety.
Current protections by the New England Fishery Management Council are critical but not sufficient, as they are temporary, only limited to commercial fish species, and any coral protections are only discretionary. A monument designation protects all sea life and makes that protection permanent. It would be managed by scientists and others with ecological expertise (including but not limited to fisheries expertise). Fishery management councils were not designed and are not in the business of protecting scientifically unique and ecologically critical areas in the ocean.
Permanent closure will also benefit collapsed fish populations like Atlantic cod, which would be able to rebuild and sustain themselves at healthy levels. Research is beginning to show that refuges could help struggling species like cod produce larger, older and significantly more productive females that could help recovery when their offspring eventually spill out to restock fishing in surrounding waters. The fishing industry is poised to benefit in the long term when commercially important fish are able to rebound.
Protecting the few unique marine places we have left is good for the fishermen and communities that rely on a healthy and abundant ocean for their livelihoods and is our obligation to future generations.
In recent weeks, we learned more sobering news for New England’s cod population. A paper published in Science detailed how rapidly increasing ocean temperatures are reducing cod’s productivity and impacting – negatively – the long-term rebuilding potential of New England’s iconic groundfish. The paper confirmed both the theoretical predictions associated with climate change and the recent scientific federal, state, and Canadian trawl surveys that reported a record-low number of cod caught in recent months.
To be clear, the Science authors do not conclude that ocean temperature changes associated with climate change have caused the collapse of cod. We have management-approved overfishing of cod to thank for that.
What rising ocean temperatures do seem to be doing, according to the Science paper, is dramatically changing the productivity of the remaining cod stocks. This makes it more difficult for cod to recover from overfishing today than at any other time in history, and perhaps reduces the ultimate recovery potential even if all fishing were halted. Stock assessments conducted without taking these productivity reductions into full account will dramatically overestimate cod populations and, in turn, fishing quotas.
The Science paper is potentially very important, with major implications for fishing limits on cod for decades to come. But stock assessment scientists have warned for years that their recent models were likely overestimating the amount of cod actually in the water – and the corresponding fishing pressure the stock could withstand. Unfortunately, those warnings have fallen on deaf ears at the New England Fishery Management Council.
In fact, the managers at the Council, dominated by fishermen and state fisheries directors with short-term economic agendas, could hardly have done more than they already have to jeopardize Atlantic cod’s future—climate change or not.
Overfishing, a Weakened Gene Pool, and the Loss of Productive Female Fish
As a result of chronic overfishing, New England’s cod population is likely facing what geneticists call a “population bottleneck,” meaning that the diversity of the remaining cod gene pool is now so greatly reduced that the fish that are left are less resilient to environmental stresses like increasing sea temperatures.
Overfishing has also caused the collapse of the age structure of the cod populations by removing almost all of the larger, more reproductive females (also known as the Big, Old, Fat, Fecund Females, or BOFFFs). Scientists have previously warned that losing these old spawners is a problem for cod productivity, but this new research suggests that the potential damage from their elimination may be significantly greater than imagined as a result of poor, climate change–related ecological conditions.
The Science paper hypothesizes that an underlying factor in the productivity decline of cod this past decade was the correlation between extremely warm spikes in ocean temperatures and the drop in zooplankton species that are critical to the survival of larval cod. With fewer zooplankton, fewer cod larvae make it to their first birthday.
The impacts of this zooplankton decline on cod productivity, however, could be exacerbated by the loss of the BOFFFs. Here’s why:
Cod start to spawn at three to four years old, but young females produce significantly fewer and weaker eggs and cod larvae than their older counterparts. Those elder female fish, on the other hand, produce larger, more viable eggs – sometimes exponentially more healthy eggs – over longer periods of time. If the older female cod population had still been plentiful, they might have produced larvae more capable of surviving variations in zooplankton abundance.
Perhaps the continued presence of larger, older, spawning females to the south of New England (where there is no commercial cod fishery) is one of the reasons that the cod fishery in the nearby warm waters off New Jersey is healthier now than it has been in recent history.
The Cod Aren’t Completely Cooked Yet: Four Potential Solutions
Cod have been in trouble since the 1990s, and now climate change is magnifying these troubles. This new reality, however, is not cause for us to throw in the towel. There are actions that our fishery managers can take now that will make a difference.
First, large cod habitat areas have to be closed to fishing – permanently. This is the only way to protect the large females and increase their number. Designating cod refuges such as the Cashes Ledge Closed Area as a marine national monument will remove the temptation for fishery councils – always under pressure to provide access to fish – to reopen them in the future.
Such monuments would also sustain a critical marine laboratory where more of these complex interactions between cod and our changing ocean environment can be studied and understood.
Second, managers need to gain a better understanding of the cod populations south of Cape Cod. While it is well and good to land “monster” female cod on recreational boat trips, those fish may be the key to re-populating Georges Bank. Caution, rather than a free-for-all, is the best course of action until the patterns of movement of those cod populations, as related to ocean temperature increases, are better understood.
Third, as observed in the Science paper, stock assessment models as well as guidance from the Council’s Science and Statistical Committee must start incorporating more ecosystem variables and reflecting a more appropriate level of scientific precaution in the face of the reality of climate change shifts. Enough talk about scientific uncertainty and ecosystem-based fisheries management; action is needed, and science should have the lead in guiding that action.
Finally, the importance of funding data collection and fishery science is evident from this important Science paper, which was supported by private, philanthropic dollars. NOAA should be undertaking this sort of work – but it is not in a position to even provide adequate and timely stock assessments, because limited funding forces the agency to use the existing outdated models.
NOAA’s funding limitations are constraining both collection of the essential field data needed to understand our changing world as well as the analysis of that data into meaningful and appropriate management advice. If Congress can find $33 million to give fishermen for the most recent “groundfish disaster,” it ought to be able to find money to prevent such avoidable disasters in the future.
Ultimately, the Science paper shines some much needed light on our climate change–related fishery issues in New England, but we can’t let it overshadow decades of mismanagement or justify a fatalistic attitude toward cod rebuilding. Steps can and must be taken, and fishery managers are still on the hook for the success or failure of our current and future cod stocks.
One of the North Atlantic’s smallest ocean critters is making big waves in New England.
Over the last decade, we’ve seen the collapse of our iconic Atlantic cod fishery due to extreme overfishing. Now, a new study is showing a potentially disastrous link between the effects of climate change and the ailing species’ chance of recovery.
Warming waters are bound to be bad news for a cold water fish, but the problem goes much deeper than that, affecting the entire life cycle of the species. Some of this is due to tiny, microscopic creatures called zooplankton. So what are these little guys, and why are they so important?
Zooplankton is a broad category of small ocean organisms that includes many species, among them Pseudocalanus spp. and Centropages typicus. These two species happen to be the major food source of larval cod in the Gulf of Maine.
Zooplankton, which are usually smaller than 1/10 of an inch, play a major role in the Atlantic’s food web. When there are lots of them, things are pretty good. Young fish prey on them and grow to be healthy, adult fish.
But when there aren’t enough plankton to go around, species like Atlantic cod can suffer. When cod larvae aren’t easily able to find the food they need to grow, fewer of them make it to their first birthday.
And without lots of cod that survive to be at least 4 years old (the age at which females begin spawning), the recovery of the entire stock can stall. The stock needs larger, older, more productive females to thrive in order to have any hope for recovery.
Warming and shifting
But why would the plankton be in such short supply? This is where climate change comes in. According to NOAA, temperature changes can cause the redistribution of plankton communities. In the Gulf of Maine, scientists have found fewer plankton in the same areas where cod populations have been found to be struggling. The shifts in temperature lead to the displacement of a critical food source, making it difficult for young cod to survive.
With the Gulf of Maine warming faster than 99% of ocean areas, this is an enormously alarming problem. More temperature changes and the shifting of plankton populations could make it even harder for New England cod populations to return to healthy, sustainable levels.
The cod crisis is the result of many factors – but the loss of tiny zooplankton is a big one. When considering how best to help cod stocks recover, fishery managers must take into account the effects of climate change, or else risk the total collapse of the species.
In honor of Halloween, we’ve decided to highlight one of the more creepy looking fish that can be found in the waters off of New England. The monkfish (Lophius americanus), also known as goose-fish, anglerfish, and sea-devil, is considered a delicacy abroad, but until recently has been overlooked in America, perhaps due to its obtrusive appearance.
The monkfish is highly recognizable, with its brown, tadpole-shaped body, and its gaping, fang-filled mouth. These eerie-looking fish can be found from Newfoundland to Georges Bank, and all the way down to North Carolina. They prefer to dwell on the sandy or muddy ocean-floor, where they feed on a variety of small lobsters, fish, and eels. Monkfish are typically found at depths of 230-330 feet, but have been caught in waters as deep as 2,700 feet; they have also been known to occasionally rise to the surface and consume small, unsuspecting birds. Females can grow up to forty inches and males up to thirty-five inches, and both can weigh up to seventy pounds. The average market size fish is around seventeen to twenty inches long.
Before the 1960s, monkfish were considered to be undesirable bycatch. However, in the wake of the collapse of the New England Atlantic Cod fishery, the monkfish has slowly started to become a more common alternative, in part due to awareness campaigns about “underutilized species” in New England. Now, monkfish is caught to supply both international and domestic demand – the tail is prized for its firm texture and sweet taste, perfect for baking and poaching, and the liver is used in Japanese sushi.
In fact, in the last two decades, fishing has increased so dramatically that monkfish stocks started to decline. Landings peaked in 1997 at sixty million pounds. However, thanks to the quick action of both the United States and Canada, a management plan was put in place and the stock population started to increase and stabilize. Landings now average around thirty-five million pounds annually. Monkfish are caught using trawls, gillnets, and dredges. The fishery is managed by the National Oceanic and Atmospheric Administration (NOAA), the New England Regional Fishery Management Council, and the Mid-Atlantic Fishery Management Council. These organizations do not impose annual catch limits, but do limit daily catches and restrict access to the fishery. Nevertheless, the catch still exceeds target catch levels in certain locations.
Current threats to monkfish are common among New England marine species: warming temperatures, ocean acidification, and habitat loss.
NOAA Fishwatch considers monkfish to be well managed and a “smart seafood choice” – however, it is still vulnerable, and the fishery should continue to be closely monitored, or it could suffer the same fate as other groundfish fisheries.
So, if you are looking for a spooky-themed seafood dish for this weekend’s festivities, it might be time to give monkfish a try… It would also make one unique Halloween costume!
October is National Seafood Month! To celebrate, I spoke with Andrea Tomlinson, General Manager of New Hampshire Community Seafood, an organization committed to supporting the state’s fishing industry and ensuring community access to fresh, locally caught seafood.
We hear a lot about sustainable seafood in New England, but what does it really mean, and how can we, as consumers (and seafood lovers), impact the future of the fishing industry – all the while eating more healthy fish?
AY: What is “sustainable seafood”?
AT: I think few people understand what it really means – as more people use the term, it seems to have lost meaning. For me, sustainable seafood simply means that our fishermen are only taking an amount of a particular population that does not prevent parent fish from reproducing at the same level the following year. If fishermen leave the pregnant and older fish alone, and take just the younger fish, it’s more likely to be sustainable. The fish population must be able to sustain itself while also being fished for commercial purposes.
AY: Do you think most of the industry fishes this way?
AT: No. In the past, it was a free for all. Fishermen took whatever they wanted — cod was our fish, there was lots of it, so we took lots of it. Today, our small New England fishermen are still fishing the same amount (and taking the parent fish), but there are other, bigger players in the game. Once cod was shown to be a successful industry, the number of fishermen increased – and now the populations are suffering because of it.
Our local fishermen never had to be conscious about [the amount they could catch] before. In order to stay in business, you want to take the biggest and most fish you can. When you take this traditional way of fishing and compound it with new catch regulations (and a perceived lack of communication from those enforcing the regulations), and more and bigger players fishing in the area, that’s how we ended up where we are today, with the fishing industry in crisis.
AY: What are “underutilized fish” (formerly called “trash fish”) and how could they help the industry and/or economy?
AT: In New England, there are certain types of fish that we have a lot of, but that just aren’t as popular as cod or haddock. There’s the dogfish shark, which is a shark but they are small – about three-and-a-half to four feet in length. In Europe, they are commonly used in fish and chips. Here in New England, we have lots of it. So much so, that they are almost considered overpopulated, making it a great alternative for consumers, especially since whatever you can do with cod, you can do with dogfish.
AY: But it doesn’t have quite the same ring to it.
AT: Right. When people hear “shark” and “dogfish,” they don’t like that. But as soon as you tell them how to prepare it, and that it holds up well in the freezer, and it seasons well, and is cheap – that makes a difference.
AY: Are there other underutilized fish in New England?
AT: There’s the King Whiting, a type of Silver Hake. It’s a delectable, thick, firm white fish that’s high in protein and omega-3s. It’s good for grilling or sautéing, and the fillet is just as large as one from a cod or haddock. And there’s also the Monkfish, which is an incredibly scary-looking fish on the outside – and delicious on the inside. We hear it called the “poor man’s lobster.” It tastes just like lobster, but for a fraction of the price.
AY: How does a Community Supported Fishery work? Is this model feasible in other places?
AT: The way fishing in New England works now, most fishermen sell everything they catch all at once at an auction, rather than selling directly “off the boat.” So, as a Community Supported Fishery, or CSF, New Hampshire Community Seafood gives the fishermen an incentive – we’d give them, say, an extra $0.25 per pound for a certain fish, above what they would receive at auction. For dogfish, it’s actually a $1.10 per pound incentive! A CSF is really the only way to buy off the boat now. We buy a small portion of what the local fishermen catch, but it’s something.
There are about 50 CSFs in the United States. On land, we’ve seen a growing popularity in supporting the local farmer, and this fits in well with that model. You pay up front, and get what’s ripe each week – it works the same way with fish. Community members can support local fishermen and the local economy in this way. So, the challenge is to get people to realize that underutilized fish are just as delicious as cod and haddock.
In New England in particular, when people hear that the fishing industry is in crisis, that affects them. Many who grew up here are enamored by our iconic fishing traditions – maybe they have good memories of fishing, or they feel that it’s a big part of the culture. When you add in the “locavore” mentality, as well as those who are trying to eat healthier, we see a real opportunity to appeal to a lot of people.
AY: So consumers can have a real impact here.
AT: Yes. The fish are there – all we need is more consumers and more buyers, and it can make a greater impact. We are also working with restaurants and chefs; they will buy underutilized fish and put it on the menu, creating more exposure and making it easier for consumers to try something new. Right now we are in 10 restaurants and a hospital cafeteria, and are continuing to expand.
AY: How can people get involved?
AT: We are mostly based in Portsmouth, NH, but our CSF has 17 pickup locations in New Hampshire, one in Northern Massachusetts, and we’re partnering with Monadnock food cooperative in Keene, NH. (All of these are listed on the New Hampshire Community Seafood website). We also have a newsletter that informs locals about what’s new, how to cook underutilized fish, recipes, and more.
AY: Anything else you would like to add?
AT: Three years ago, there were 26 local fishermen in New Hampshire, and now there are only 9 left. We buy fish from all of them. The industry is in desperate need of support, both from communities and from the NMFS [regulators].
In addition to community-supported fishing organizations like NHCS, the Gulf of Maine Research Institute’s Out of the Blue series aims to educate the public about abundant fish that are well-managed and are not harvested primarily due to low market demand.
And NOAA recently announced the public availability of fishwatch.gov, a resource that provides up-to-date information about fish, including the ability to look up a certain fish to see where it’s available, whether it’s a smart and sustainable option, nutrition information, and more.
Would you (or have you) tried dogfish, whiting, or monkfish? Leave a comment below!
If there’s one thing we can be sure of, it’s that New Englanders love lobster. It’s woven into our culture and history, and it’s unimaginable to think of New England without this famed summer seafood.
Few know that lobsters were once so plentiful in New England that Native Americans used them as fertilizer for their fields, and as bait for fishing. And before trapping was common, “catching” a lobster meant picking one up along the shoreline!
During World War II, lobster was viewed as a delicacy, so it wasn’t rationed like other food sources. Lobster meat filled a demand for protein-rich sources, and continued to increase in popularity in post-war years, which encouraged more people to join the industry.
Popular ever since, now when most people are asked what comes to mind when they think of New England, seafood – especially lobster – is typically at the top of the list.
An industry under threat
We love our New England lobster, but there’s evidence suggesting they’re in danger of moving away from their longtime home. That’s because lobster is under threat from climate change, the effects of which can already be seen on this particular species.
The Gulf of Maine is warming faster than 99% of ocean areas. Until last winter’s uncharacteristically cold temperatures, the prior few years saw an increase in catchable lobster, as warmer temperatures caused lobsters to molt early and move toward inshore waters after molting. However, continued warming will ultimately push lobsters north in search of the colder waters where they spend the majority of their time.
This is already happening in southern New England, where the industry is suffering as lobsters migrate northward.
And we’re still learning about the potential for damage caused by ocean acidification, as well as how lobsters may be affected by an increase in colder than usual New England winters.
As we celebrate one of New England’s iconic species on National Lobster Day, let’s remember that slowing down climate change is an important priority for ensuring that future generations can enjoy not Canadian or Icelandic lobster, but New England lobster. Click here to support Conservation Law Foundation’s efforts on fighting climate change. | http://newenglandoceanodyssey.org/category/talking-fish/ |
4.125 | Constitution of Denmark
The Constitutional Act of Denmark (Danish: Danmarks Riges Grundlov) is the main part of the constitution of the Kingdom of Denmark. First written in 1849, it establishes a sovereign state in the form of a constitutional monarchy, with a representative parliamentary system. The later sections of the Constitution guarantee fundamental human rights and lay out the duties of citizens. The current Constitution was signed on 5 June 1953 as "the existing law, for all unswervingly to comply with, the Constitutional Act of Denmark".
Idea and structure
The main principle of the Constitution was to limit the monarch's power (section 2). The Constitution of 1849 established a bicameral parliament, the Rigsdag, consisting of the Landsting and the Folketing. It also secured civil rights, which remain in the current constitution, such as habeas corpus (section 71), private property rights (section 72) and freedom of speech (section 77).
The Constitution is based on the separation of powers into the three branches of government, the legislative, the executive and the judicial branches. The Constitution is heavily influenced by the French philosopher Montesquieu, whose separation of powers was aimed at achieving mutual monitoring of each of the branches of government. This is achieved through the Constitution's section 3, although the division between legislative and executive power is not as sharp as in the United States.
The original constitution of Denmark was signed on 5 June 1849 by King Frederick VII. The event marked the country's transition to constitutional monarchy, putting an end to the absolute monarchy that had been introduced in Denmark in 1660. The Constitution has been rewritten 4 times since 1849.
Before the first constitutions, the power of the king was tempered by a håndfæstning, a charter each king had to sign before being accepted as king by the land things. This tradition was abandoned in 1665 when Denmark got its first constitution, the Lex Regia (The Law of the King; Danish: Kongeloven), establishing absolute power for King Frederick III of Denmark and replacing the old feudal system. This is Europe's only formal absolutist constitution. Absolute power was passed along with a succession of Danish monarchs until Frederick VII, who agreed to sign the new constitution into law on 5 June 1849, which has since been a Danish national holiday.
Frederick VII's immediate predecessor, his father Christian VIII, ruled Denmark from 1839 to 1848, and had been king of Norway until the political turmoil of 1814 forced him to abdicate after a constitutional convention. Those who supported similar constitutional reforms in Denmark were disappointed by his refusal to acknowledge any limitations to his inherited absolute power, and had to wait for his successor to put through the reforms.
Ditlev Gothard Monrad, who became Secretary in 1848, drafted the first copy of the Constitution, based on a collection of the constitutions of the time, sketching out 80 paragraphs, whose basic principles and structure resembles the current constitution. The language of the draft was later revised by Secretary Orla Lehmann among others, and treated in the Constitutional Assembly of 1848 (Danish: Grundlovsudvalget af 1848). Sources of inspiration for the Constitution include the Constitution of Norway of 1814 and the Constitution of Belgium. The constitution's civil rights are based on the Constitution of the United States of 1787, especially the Bill of Rights.
The government's draft was laid before the Constitutional Assembly of the Realm (Danish: Den Grundlovgivende Rigsforsamling), part of which had been elected on 5 October 1848, the remainder having been appointed by the King. The 152 members were mostly interested in the political aspects, the laws governing elections and the composition of the two chambers of Parliament. The Constitution was adopted during a period of strong national unity, namely the First Schleswig War, which lasted from 1848–1851.
The Danish constitution has been written five times, in 1849, 1866, 1915, 1920 and 1953. No Danish constitution has ever been amended; each time, a new constitution replaced the existing constitution.
According to section 88 of the 1953 Constitution, changes require a majority in two consecutive Parliaments: before and after a general election. In addition, the Constitution must pass a popular vote, with the additional demand that at least 40% of voting age population must vote in favour.
The Constitution sets out only the basic principles, with more detailed regulation left over to the legislative branch of government, currently the Danish parliament Folketinget.
The four changes can be summed up as follows:
- In 1866, the defeat in the Second Schleswig War, and the loss of Schleswig-Holstein led to tightened election rules for the Upper Chamber, which paralyzed legislative work, leading to provisional laws.
The conservative Højre had pressed for a new constitution giving the upper chamber of parliament more power, making it more exclusive and shifting power to the conservatives from the National Liberals, whose long-standing dominance ended and whose party was later disbanded. This long period of Højre dominance under the leadership of Jacob Brønnum Scavenius Estrup, with the backing of King Christian IX of Denmark, was named the provisorietid (provisional period) because the government was based on provisional laws instead of parliamentary decisions. It also gave rise to a conflict with the Liberals (farm owners), then and now known as Venstre (Left). The constitutional battle concluded in 1901 with the so-called systemskifte (change of system), with the Liberals as victors. At this point the king and Højre finally accepted parliamentarism as the ruling principle of Danish political life, though the principle was not codified until the 1953 constitution.
- In 1915, the tightening from 1866 was reversed, and women were given the right to vote. Also, a new requirement for changing the constitution was introduced. Not only must the new constitution be passed by two consecutive parliaments, it must also pass a referendum, where 45% of the electorate must vote yes. This meant that Prime Minister Thorvald Stauning's attempt to change the Constitution in 1939 failed.
- In 1920, a new referendum was held to change the Constitution again, allowing for the reunification of Denmark following the defeat of Germany in World War I. This followed a referendum held in the former Danish territories of Schleswig-Holstein regarding how the new border should be placed. This resulted in upper Schleswig becoming Danish, today known as Southern Jutland, and the rest remained German.
- In 1953, the fourth constitution abolished the Upper Chamber (the Landsting), giving Denmark a unicameral parliament. It also enabled females to inherit the throne (see Succession), but the change still favored boys over girls (this was changed by a referendum in 2009 so the first-born inherits the throne regardless of sex). Finally, the required number of votes in favor of a change of the Constitution was decreased to the current value of 40% of the electorate.
The Constitution of Denmark outlines certain human rights in sections 71–80. Several of these are of only limited scope and thus serve as a sort of lower bar. The European Convention on Human Rights was introduced in Denmark by law on 29 April 1992 and supplements the mentioned paragraphs.
Symbolic status of the king
When reading the Danish Constitution, it is important to bear in mind that the King is meant to be read as the government because of the monarch's symbolic status. This is a consequence of sections 12 and 13, by which the King executes his power through his ministers, who are responsible for governing. An implication of these sections is that the monarch cannot act alone in disregard of the ministers, so the Danish monarch does not interfere in politics.
Section 4 establishes that the Evangelical Lutheran Church is "the people's church" (folkekirken), and as such is supported by the state. Freedom of religion is granted in section 67, and official discrimination based on faith is forbidden in section 70. Christianity, in the form of the Evangelical Lutheran Church, is the majority religion in Denmark.
Section 20 of the current constitution establishes that specified parts of national sovereignty can be delegated to international authorities if the Parliament or the electorate votes for it. This section has been debated heavily in connection with Denmark's membership of the European Union, as critics hold that changing governments have violated the Constitution by surrendering too much power.
In 1996, Prime Minister Poul Nyrup Rasmussen was sued by 12 euroskeptics for violating this section. The Danish Supreme Court (Danish: Højesteret) acquitted Rasmussen (and thereby earlier governments dating back to 1972) but reaffirmed that there are limits to how much sovereignty can be surrendered.
Other constitutional laws of Denmark
The Danish constitution contains these additional parts:
- The parts of Kongeloven, the former absolute monarchist constitution from 1665, that were not superseded.
- The Act of Succession to the Danish Throne of 27 March 1953 also has status as a constitutional law, as it is directly referred to in Article 2 of the Constitutional Act. Therefore, amendments to the Act of Succession require adherence to the constitutional amendment procedure as provided for in Article 88 of the Danish Constitution Act. An amendment to abolish male preference to the throne (bill no. 1, Folketing session of 2005–06) was passed by a referendum in 2009.
- To an extent the laws granting self government to the Faroe Islands and Greenland can be considered constitutional.
- Certain particular customs, not explicitly referred to in the Constitutional Act itself, have been recognised as carrying constitutional legal weight (such as the right of the Finance Committee to authorise public expenditure outside of the national budget), also form part of Danish Constitutional law.
Activity 1: Forgiveness in History
Activity time: 15 minutes
Materials for Activity
- Leader Resource 1, Truth and Reconciliation Match Ups
- Leader Resource 2, Histories
- Basket or box
Preparation for Activity
- Cut apart the names in Leader Resource 1, Truth and Reconciliation Match Ups. Put the slips of paper into a basket or box.
- Cut apart the historical data in Leader Resource 2, Histories. Be prepared to distribute the individual case histories to volunteers for reading.
Description of Activity
Youth look at forgiveness on a large scale: nations or organizations seeking forgiveness for oppression of a group of people.
Ask for a volunteer to look up the word "forgive" in the dictionary and read the definition to the group.
Say, in your own words:
We generally offer an apology to someone when we are seeking forgiveness or a pardon. This can be done by individuals. However, sometimes groups, even nations, issue apologies for wrongs committed against an entire group of people. Sometimes the apology is a long time coming. Sometimes, it includes reparations, which are payments for an injury or a wrong.
Show the group the basket with the names from Leader Resource 1, Truth and Reconciliation Match Ups. Tell them that they are going to play a matching game. Everyone should take a slip of paper that has the name of one party of an apology. They need to find their counterpart. They will do this by asking other youth "yes and no questions" until they believe they have found their match.
Assist youth as needed. After everyone has correctly found a match, ask for volunteers to read the case histories from Leader Resource 2, Histories.
Lead a group discussion with questions such as:
- Do you think every member of the oppressed group accepts the apologies? Why or why not?
- How would you feel if a nation or organization issued an apology, but no reparations or other efforts to try to repair the damage?
- How would you feel if the nation or organization offered reparation, but did not accept wrongdoing or offer an apology?
- Why do you think the responsible parties are hesitant to accept responsibility or offer reparations?
- Can you think of other cases where a government has addressed its previous wrongdoing?
Including All Participants
Be aware that youth who identify as a member of an oppressed group covered in the histories might be in the room. If you think this youth might find the activity difficult, delete that case history. However, do not assume that will be the case. Use your judgment, based on the experiences you have shared with the youth so far. You might also ask the youth beforehand.
The Treaty of Paris (see Paris, Treaty of) formally recognized the new nation in 1783, although many questions were left unsettled. The United States was floundering through a postwar depression and seeking not too successfully to meet its administrative problems under the Articles of Confederation (see Confederation, Articles of).
The leaders in the new country were those prominent either in the council halls or on the fields of the Revolution, and the first three Presidents after the Constitution of the United States was adopted were Washington, Adams, and Jefferson. Some of the more radical Revolutionary leaders were disappointed in the turn toward conservatism when the Revolution was over, but liberty and democracy had been fixed as the highest ideals of the United States.
The American Revolution had a great influence on liberal thought throughout Europe. The struggles and successes of the youthful democracy were much in the minds of those who brought about the French Revolution, and most assuredly later helped to inspire revolutionists in Spain's American colonies.
The Columbia Electronic Encyclopedia, 6th ed. Copyright © 2012, Columbia University Press. All rights reserved.
Forests play an important role in climate change. The destruction and degradation of forests contributes to the problem through the release of CO2, but the planting of new forests can help mitigate climate change by removing CO2 from the atmosphere. Combined with the sun's energy, the captured carbon is converted into trunks, branches, roots and leaves via the process of photosynthesis. It is stored in this "biomass" until being returned into the atmosphere, whether through natural processes or human interference, thus completing the carbon cycle.
Tree planting and plantation forestry are well established both in the private and public sectors. The most recent data released by the UN's Food and Agriculture Organisation suggest that plantation forests comprised an estimated 7% of global forest area in 2010. Most of these forests were established in areas that were previously not under forest cover, at least in recent years. Trees are also planted as part of efforts to restore natural forests as well as in agroforestry, which involves increasing tree cover on agricultural land and pastures.
Under certain conditions plantations can grow relatively fast, thus absorbing CO2 at higher rates than natural forests. In the absence of major disturbances, newly planted or regenerating forests can continue to absorb carbon for 20–50 years or more. In comparison to preventing the loss of natural forests, however, tree planting has the potential to make only a limited contribution to reducing CO2 levels in the atmosphere. In 2000, the IPCC gathered the available evidence for a special report which concluded that tree-planting could sequester (remove from the atmosphere) around 1.1–1.6 GT of CO2 per year. That compares to total global greenhouse gas emissions equivalent to 50 GT of CO2 in 2004.
Unlike measures to reduce deforestation, tree planting and reforestation were included as activities eligible for finance under the Kyoto protocol. Kyoto's rules and procedures, however, restricted the scale and scope of these activities. As a result, projects have struggled to get off the ground and the carbon sequestered has been almost negligible. Outside of Kyoto, some tree-planting projects established to absorb CO2 have turned out to be nonviable due to the cost of acquiring inputs or protecting young trees from fire, drought, pests or diseases. The cost of land is another barrier to widespread tree-planting, especially where there is competition with other land uses such as food or biofuel production.
As negotiations over the future of Kyoto continue, the extent of the possible role of tree planting in a future climate change framework remains unclear. Tree planting is, however, unlikely to be implemented on a scale to reach even the relatively modest potential contribution outlined by the IPPC – especially in the absence of a high carbon price.
• This article was written by Dr Charles Palmer of the Grantham Research Institute on Climate Change and the Environment at LSE in collaboration with the Guardian
Swedish is descended from Old Norse. Compared to its progenitor, Swedish grammar is much less characterized by inflection. Modern Swedish has two genders and no longer conjugates verbs based on person or number. Its nouns have lost the morphological distinction between nominative and accusative cases that denoted grammatical subject and object in Old Norse in favor of marking by word order. Swedish uses some inflection with nouns, adjectives, and verbs. It is generally a subject–verb–object (SVO) language with V2 word order.
- 1 Nouns
- 2 Pronouns
- 3 Adjectives
- 4 Comparison
- 5 Numerals
- 6 Verbs
- 7 Adverbs
- 8 Prepositions
- 9 Syntax
- 10 Notes
- 11 References
- 12 External links
Nouns have two grammatical genders: common (utrum) and neuter (neutrum), which determine their definite forms as well as the form of any adjectives used to describe them. Noun gender is largely arbitrary and must be memorized; however, around three quarters of all Swedish nouns are common gender. Living beings are often common nouns, as in en katt (a cat), en häst (a horse), en fluga (a fly).
Swedish once had three genders—masculine, feminine and neuter. Though traces of the three-gender system still exist in archaic expressions and certain dialects, masculine and feminine nouns have today merged into the common gender. A remnant of the masculine gender can still be expressed in the singular definite form of adjectives according to natural gender (male humans), in the same way as personal pronouns, han/hon, are chosen for representing nouns in Contemporary Swedish (male/female humans and optionally animals).
There are traces of the former four-case system for nouns evidenced in that pronouns still have a subject, object (based on the old accusative and dative form) and genitive forms. Nouns make no distinction between subject and object forms, and the genitive is formed by adding -s to the end of a word. This -s genitive functions more like a clitic than a proper case and is nearly identical to the possessive suffix used in English. Note, however, that in Swedish this genitive s is appended directly to the word and is not preceded by an apostrophe.
Swedish nouns are inflected for number and definiteness and can take a genitive suffix. They exhibit the following morpheme order:
|Noun stem||(Plural)||(Definite article)||(Genitive -s)|
Nouns form the plural in a variety of ways. It is customary to classify Swedish nouns into five declensions based on their plural indefinite endings: -or, -ar, -er, -n, and unchanging nouns.
- Nouns of the first declension are all of the common gender. The majority of these nouns end in -a in the singular and replace it with -or in the plural. For example: en flicka (a girl), flickor (girls). A few nouns of the first declension end in a consonant, such as: en våg (a wave), vågor (waves); en ros (a rose), rosor (roses).
- Nouns of the second declension are also of the common gender, with the exception of finger (finger). They all have the plural ending -ar. Examples include: en arm (an arm), armar (arms); en hund (a dog), hundar (dogs); en sjö (a lake), sjöar (lakes); en pojke (a boy), pojkar (boys); en sjukdom (an illness), sjukdomar (illnesses); en främling (a stranger), främlingar (strangers). A few second declension nouns have irregular plural forms, for instance: en afton (an evening), aftnar (evenings); en sommar (a summer), somrar (summers).
- The third declension includes both common and neuter nouns. The plural ending for nouns of this declension is -er or, for some nouns ending in a vowel, -r. For example: en park (a park), parker (parks); ett museum (a museum), museer (museums); en sko (a shoe), skor (shoes); en fiende (an enemy), fiender (enemies). Some third declension nouns modify their stem vowels in the plural: en hand (a hand), händer (hands); ett land (a country), länder (countries); en bok (a book), böcker (books).
- All nouns in the fourth declension are of the neuter gender and end in a vowel in the singular. Their plural ending is -n. For example: ett bi (a bee), bin (bees); ett äpple (an apple), äpplen (apples). Two nouns in this declension have irregular plural forms: ett öga (an eye), ögon (eyes); ett öra (an ear), öron (ears).
- Fifth declension nouns have no plural ending and they can be of common or neuter gender. Examples of these include: ett barn (a child), barn (children); ett djur (an animal), djur (animals); en lärare (a teacher), lärare (teachers). Some fifth declension nouns show a vowel change in the plural: en mus (a mouse), möss (mice); en gås (a goose), gäss (geese); en man (a man), män (men).
Articles and definite forms
The definite article in Swedish is mostly expressed by a suffix on the head noun, while the indefinite article is a separate word preceding the noun. This structure of the articles is shared by the Scandinavian languages. Articles differ in form depending on the gender and number of the noun.
The indefinite article, which is only used in the singular, is "en" for common nouns, and "ett" for neuter nouns, e.g. en flaska (a bottle), ett brev (a letter). The definite article in the singular is generally the suffixes "-en" or "-n" for common nouns (e.g. flaskan "the bottle"), and "-et" or "-t" for neuter nouns (e.g. brevet "the letter"). The definite article in the plural is "-na", "-a" or "-en", depending on declension group, for example flaskorna (the bottles), breven (the letters).
When an adjective or numeral is used in front of a noun with the definite article, an additional definite article is placed before the adjective(s). This additional definite article is det for neuter nouns, den for common nouns, and de for plural nouns, e.g. den nya flaskan (the new bottle), det nya brevet (the new letter), de fem flaskorna (the five bottles). A similar structure involving the same kind of circumfixing of the definite article with the words där (there) or här (here) is used to mean "this" and "that", e.g. den här flaskan (this bottle), det där brevet (that letter) as a demonstrative article.
The five declension classes may be named -or, -ar, -er, -n, and null after their respective plural indefinite endings. Each noun has eight forms: singular/plural, definite/indefinite and caseless/genitive. The caseless form is sometimes referred to as nominative, even though it is used for grammatical objects as well as subjects.
The genitive is always formed by appending -s to the caseless form. In the second, third and fifth declensions words may end with an -s already in the caseless form. These words take no extra -s in genitive use: the genitive (indefinite) of hus ("house") is hus. Morpheme boundaries in some forms may be analyzed differently by some scholars.
The Swedish genitive is not considered a case by all scholars today, as the -s is usually put on the last word of the noun phrase even when that word is not the head noun, mirroring English usage (e.g. Mannen som står där bortas hatt. "The man standing over there's hat."). This use of -s as a clitic rather than a suffix has traditionally been regarded as ungrammatical, but is today dominant to the point where putting an -s on the head noun is considered old-fashioned. The Swedish Language Council recommends putting the ending after the phrase, except when making temporary constructions, where one should instead try to reformulate.
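The -s rule described above can be expressed as a one-line sketch. This is an illustration only, and the helper name is hypothetical, not part of any library:

```python
def genitive(noun: str) -> str:
    """Form the Swedish genitive of a caseless noun form.

    Per the rule above: append -s, except for words that already
    end in -s, which take no extra ending (e.g. hus -> hus).
    """
    return noun if noun.endswith("s") else noun + "s"

print(genitive("flicka"))  # flickas
print(genitive("hus"))     # hus
```

Note that, unlike English, no apostrophe is written before the -s.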
These examples cover all regular Swedish caseless noun forms.
First declension: -or (common gender)
Second declension: -ar (common gender)
Third declension: -er, -r (mostly common gender nouns, some neuter nouns)
Words taking only -r as a marker for plural are regarded as a declension of their own by some scholars. However, traditionally these have been regarded as a special version of the third declension.
Fourth declension: -n (neuter) This is when a neuter noun ends in a vowel.
Fifth declension: unmarked plural (mostly neuter nouns that don't end in vowels and common gender nouns ending in certain derivation suffixes)
The Swedish personal pronoun system is almost identical to that of English. Pronouns inflect for person, number, and, in the third person singular, gender. Swedish is different, inter alia, as it has a separate third-person reflexive pronoun sig (himself, herself, itself, themselves) analogous to French se, and distinct 2nd person singular forms du ("thou") and ni ("you", formal/respectful), and their objective forms, which have all merged to "you" in English, while the third person plurals are becoming merged in Swedish instead. Some aspects of personal pronouns are simpler in Swedish: reflexive forms are not used for the first and second person, although själv ("self") and egen/eget/egna ("own") may be used for emphasis, and there are no absolute forms for the possessive.
The Swedish personal pronouns are:
|Person||Nominative||Objective||Possessive: com./neut./pl.||Person||Nominative||Objective||Possessive: com./neut./pl.|
|1||jag||mig||min/mitt/mina1||1||vi||oss||vår/vårt/våra1|
|2 (familiar)||du||dig||din/ditt/dina1||2 (formal: sg. or pl.)||ni3||er||er/ert/era1|
|3 Masc.||han||honom||hans||3||de2||dem2||deras|
|3 Fem.||hon||henne||hennes|
|3 Com.||den||den||dess|
|3 Neut.||det||det||dess|
|3 Gen-Neu. (neologism)||hen4||hen/henom4||hens4|
|3 Indef.||man ("one", Fr. "on")||en||ens|
|(3 Refl.)||—||sig||sin/sitt/sina1||(3 Refl.)||—||sig||sin/sitt/sina1|
1These possessive pronouns are inflected similarly to adjectives, agreeing in gender and number with the item possessed. The other possessive pronouns (i.e. those listed without slashes) are genitive forms that are unaffected by the item possessed.
2de (they) and dem (them) are both usually pronounced "dom" (/dɔm/) in colloquial speech, while in formal speech, "dom" may optionally replace just "dem". In some dialects (especially Finnish ones) there is still a separation between the two; de is then commonly pronounced /di/. Also, mig, dig, sig are pronounced as if written "mej", "dej", "sej", and are also sometimes spelled that way in less formal writing or to signal spoken language, but this is not appreciated by everyone.
3ni is derived from an older pronoun I, "ye", for which verbs were always conjugated with the ending -en. I became ni when this conjugation was dropped; thus the n was moved from the end of the verb to the beginning of the pronoun.
4hen and its inflections are neologisms: they are gender-neutral pronouns used by some to avoid a preference for female or male, when a person's gender is not known, or to refer to people whose gender is not defined as female or male. They are relatively new in widespread use, but since 2010 have appeared frequently in traditional and online media, legal documents, and literature. The use of these words has prompted a political and linguistic debate in Sweden, and their use is not universally accepted by Swedish speakers.
Demonstrative, interrogative, and relative pronouns
- including related words not strictly considered pronouns
- den här, det här, de här: this, these (may qualify a noun in the definite form.)
- den där, det där, de där: that, those (may qualify a noun in the definite form.)
- denne/denna/detta/dessa: this/these (may qualify a noun in the indefinite form.)
- som: as, that, which, who (strictly speaking, a subordinating conjunction rather than a pronoun, som is used as an all-purpose relative pronoun whenever possible in Swedish.)
- vem: who, whom (interrogative)
- vilken/vilket/vilka: which, what, who, whom, that
- vad: what
- vems: whose (interrogative)
- vars: whose (relative)
- när: when
- då1: then, when (relative)
- här, där, var1: here, there, where (also form numerous combinations such as varifrån, "where from", and därav, "thereof".)
- hit, dit, vart1: hither, thither, whither (not archaic as in English)
- vem som helst, vilket som helst, vad som helst, när som helst, var som helst: whoever, whichever, whatever, whenever, wherever, etc.
- hädan, dädan, vadan, sedan1: hence, thence, whence, since (The contractions hän and sen are common. These are all somewhat archaic and formal-sounding except for sedan.)
- någon/något/några, often written and nearly always pronounced nån/nåt/nåra2: some/any, a few; someone/anyone, somebody/anybody, something/anything (The distinction between "some" in an affirmative statement and "any" in a negative or interrogative context is actually a slight difficulty for Swedes learning English.)
- ingen/inget/inga2: no, none; no one, nobody, nothing
- annan/annat/andra: other, else
- någonstans, ingenstans, annanstans, överallt: somewhere/anywhere, nowhere, elsewhere, everywhere; (more formally någonstädes, ingenstädes, annorstädes, allestädes)
- någorlunda, ingalunda, annorlunda: somehow/anyhow, in no wise, otherwise
- någonting, ingenting, allting: something/anything, nothing, everything
1 då, där, dit, and dädan (then, there, thither, and thence) and any compounds derived from them are used not only in a demonstrative sense, but also in a relative sense, where English would require the "wh-" forms when, where, whither and whence.
2 Animacy is implied by gender in these pronouns: non-neuter implies a person (-one or -body) and neuter implies a thing.
Swedish adjectives are declined according to gender, number, and definiteness of the noun.
In singular indefinite, the form used with nouns of the common gender is the undeclined form, but with nouns of the neuter gender a suffix -t is added. In plural indefinite an -a suffix is added irrespective of gender. This constitutes the strong adjective inflection, characteristic of Germanic languages:
|Common||en stor björn, a large bear||stora björnar, large bears|
|Neuter||ett stort lodjur, a large lynx||stora lodjur, large lynxes|
In standard Swedish, adjectives are inflected according to the strong pattern, by gender and number of the noun, in complement function with är ("is"), such as
- lodjuret är skyggt, the lynx is shy, and
- björnarna är bruna, the bears are brown.
In some dialects of Swedish, the adjective is uninflected in complement function with är, so becoming:
- lodjuret är skygg, the lynx is shy, and
- björnarna är brun, the bears are brown.
In definite form we instead have a weak adjective inflection, originating from a Proto-Germanic nominal derivation of the adjectives. The adjectives now invariably take on an -a suffix irrespective of case and number, which was not always the case, cf. Proto-Germanic adjectives:
|Common||den stora björnen, the large bear||de stora björnarna, the large bears|
|Neuter||det stora lodjuret, the large lynx||de stora lodjuren, the large lynxes|
As the sole exception to this -a suffix is that naturally masculine nouns (replaceable with han/honom) take the -e ending in singular. Colloquially, however, the usual -a-ending is possible in these cases in some Swedish dialects:
|den store mannen, the large man||de stora männen, the large men|
|den stora mannen, the large man|
Adjectives with comparative and superlative forms ending in -are and -ast, which is a majority, also, and so by rule, use the -e suffix for all persons on definite superlatives: den billigaste bilen ("the cheapest car"). Another instance of -e for all persons is the plural forms and definite forms of adjectival verb participles ending in -ad: en målad bil ("a painted car") vs. målade bilar ("painted cars") and den målade bilen ("the painted car").
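The agreement rules in the tables above can be summarized in a short sketch for a regular adjective stem such as stor. This is a simplification: it ignores the masculine -e ending and irregular adjectives, and the function name is illustrative:

```python
def adjective_form(stem: str, gender: str = "common",
                   plural: bool = False, definite: bool = False) -> str:
    """Toy sketch of Swedish adjective agreement for a regular stem.

    Simplified: ignores the masculine -e ending and irregular
    stems (such as liten/litet/lilla).
    """
    if definite or plural:
        return stem + "a"   # weak/plural form: stora
    if gender == "neuter":
        return stem + "t"   # strong neuter singular: stort
    return stem             # strong common singular: stor

print(adjective_form("stor"))                   # stor
print(adjective_form("stor", gender="neuter"))  # stort
print(adjective_form("stor", plural=True))      # stora
print(adjective_form("stor", definite=True))    # stora
```

The key design point mirrors the tables: definiteness and plurality both select the weak -a form, so only the strong singular forms distinguish gender.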
The cardinal numbers from zero to twelve in Swedish are:

|noll||ett (en)||två||tre||fyra||fem||sex||sju||åtta||nio||tio||elva||tolv|
The number 1 is the same as the indefinite article, and its form (en/ett) depends on the gender of the noun that it modifies.
The Swedish numbers from 13 to 19 are:

|tretton||fjorton||femton||sexton||sjutton||arton (aderton)||nitton|
The form aderton is archaic, and is nowadays only used in poetry and some official documents. It is still common in Finland Swedish.
The numbers for multiples of ten from 20 to 1000 are:
|tjugo||trettio||fyrtio||femtio||sextio||sjuttio||åttio||nittio||(ett) hundra||(ett) tusen|
When trettio (30), fyrtio (40), femtio (50), sextio (60), sjuttio (70), åttio (80), nittio (90) are compounded with another digit, they form a compound number.
In some dialects, numbers are not always pronounced the way they are spelled. With the numbers nio (9), tio (10) and tjugo (20), the -o is often pronounced as an -e, e.g. /tjuge/. In some northern dialects the -o is pronounced as a /-u/, /tjugu/, and in some middle dialects the -o is pronounced as an /-i/, /tjugi/. In spoken language, tjugo usually drops the final syllable when compounded with another digit and is pronounced as /tju/ + the digit, e.g. tjugosju (27) may be pronounced /tjusju/. Words ending in -io (trettio, fyrtio, etc.) are most often pronounced without the final -o. The "y" in fyrtio (40) is always pronounced as an /ö/.
The ett preceding hundra (100) and tusen (1000) is optional, but in compounds it is usually required.
Higher numbers include:
| Number | Swedish |
|---|---|
| 1 000 000 | en miljon |
| 10 000 000 | tio miljoner |
| 100 000 000 | (ett) hundra miljoner |
| 1 000 000 000 | en miljard ¹ |
¹ Swedish uses the long scale for large numbers.
The cardinal numbers from miljon and larger are true nouns and take the -er suffix in the plural. They are separated in written Swedish from the preceding number.
Any number can be compounded by simply joining the relevant simple cardinal number in the same order as the digits are written. Written with digits, a number is separated with a space between each third digit from the right. The same principle is used when a number is written with letters, although using letters becomes less common the longer the number is. However, round numbers, like tusen, miljon and miljard are often written with letters as are small numbers (below 20).
Numbers from 21 to 99 are written in the format (tens word)(units word): 63 is "sextiotre" and 48 is "fyrtioåtta" (note that the a of fyra is dropped in the numbers 40-49). The numbers 30-39 are slightly special in that an extra t is added: 31 is "trettioett", 33 is "trettiotre".
| Number | Written form | In components (do not use in written Swedish) |
|---|---|---|
| 21 | tjugoett / tjugoen | (tjugo-ett) / (tjugo-en) |
| 1 975 | ettusen niohundrasjuttifem | |
| 10 874 | tiotusen åttahundrasjuttifyra | |
| 100 557 | etthundratusen femhundrafemtisju | |
| 1 378 971 | en miljon trehundrasjuttiåtta tusen niohundrasjuttiett / en miljon trehundrasjuttioåtta tusen niohundrasjuttioett | (en miljon tre-hundra-sjuttio-åtta tusen nio-hundra-sjuttio-ett) |
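The compounding rule for numbers up to 99 described above can be sketched as a small lookup. This is a minimal illustration using the standard written forms from the tables (not the colloquial forms that drop the final -o); the function name is a hypothetical helper, not standard terminology:

```python
TENS = {2: "tjugo", 3: "trettio", 4: "fyrtio", 5: "femtio",
        6: "sextio", 7: "sjuttio", 8: "åttio", 9: "nittio"}
ONES = ["noll", "ett", "två", "tre", "fyra", "fem", "sex", "sju",
        "åtta", "nio", "tio", "elva", "tolv", "tretton", "fjorton",
        "femton", "sexton", "sjutton", "arton", "nitton"]

def swedish_number(n: int) -> str:
    """Spell out an integer from 0 to 99 in Swedish by compounding tens and units."""
    if n < 20:
        return ONES[n]
    tens, unit = divmod(n, 10)
    word = TENS[tens]
    # Compounds are simply written together: tjugo + ett -> tjugoett
    return word if unit == 0 else word + ONES[unit]
```

For example, `swedish_number(63)` yields "sextiotre" and `swedish_number(48)` yields "fyrtioåtta", matching the examples in the text.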
The decimal point is written as "," (comma) and written and pronounced komma. The digits following the decimal point may be read individually or as a pair if there are only two. When dealing with monetary amounts (usually with two decimals), the decimal point is read as och, i.e. "and": 3,50 (tre och femtio), 7,88 (sju och åttioåtta).
Rational numbers are read as the cardinal number of the numerator followed by the ordinal number of the denominator compounded with -del or -delar (part(s)). If the numerator is more than one, logically, the plural form of del is used. For those ordinal numbers that are three syllables or longer and end in -de, that suffix is usually dropped in favour of the de in -del. There are a few exceptions.
| Fraction | Swedish |
|---|---|
| 1⁄2 | en halv, one half |
| 1⁄8 | en åttondel or en åttondedel |
| 8⁄9 | åtta niondelar or åtta niondedelar |
| 1⁄10 | en tiondel or en tiondedel |
| 1⁄13 | en trettondel or en trettondedel |
| 1⁄14 | en fjortondel or en fjortondedel |
| 1⁄15 | en femtondel or en femtondedel |
| 1⁄16 | en sextondel or en sextondedel |
| 1⁄17 | en sjuttondel or en sjuttondedel |
| 1⁄18 | en artondel or en artondedel |
| 1⁄19 | en nittondel or en nittondedel |
| 1⁄20 | en tjugondel or en tjugondedel |
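The -del formation for denominators can be sketched as a string transformation. This covers only the regular pattern for ordinals of three or more syllables ending in -de (as the text specifies); exceptions such as en halv (1⁄2) and short ordinals like fjärde, which keep the -de (fjärdedel), are out of scope. The function name is illustrative:

```python
def denominators(ordinal_word: str) -> tuple:
    """Return both denominator variants for a longer ordinal ending in -de:
    one with the -de dropped, one keeping it (both are acceptable)."""
    assert ordinal_word.endswith("de")
    short = ordinal_word[:-2] + "del"   # åttonde -> åttondel
    long_ = ordinal_word + "del"        # åttonde -> åttondedel
    return short, long_
```

So `denominators("tjugonde")` gives the pair ("tjugondel", "tjugondedel"), matching the last row of the table.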
First to twelfth: första, andra, tredje, fjärde, femte, sjätte, sjunde, åttonde, nionde, tionde, elfte, tolfte.
Thirteen to nineteen:
- As cardinal numerals, but with the suffix -de, e.g., trettonde (13:e), fjortonde (14:e).
Even multiples of ten (20th to 90th):
- As cardinal numerals, but with the suffix -nde, e.g., tjugonde (20:e), trettionde (30:e)
Hundred and thousand (100th, 1000th):
- As cardinal numerals, but with the suffix -de, e.g., hundrade (100:e, hundredth), tusende (1000:e, thousandth)
Million (1 000 000th):
- As cardinal numerals, but with the suffix -te, e.g., miljonte (millionth). There is no ordinal for "miljard" (billion).
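The ordinal suffix rules from thirteen upward can be sketched as a small function. This is an illustrative sketch covering only the regular cases listed above (the special forms first to twelfth would need a lookup table):

```python
def ordinal(cardinal: str) -> str:
    """Derive the ordinal from a cardinal number word, for 13 and up."""
    if cardinal == "miljon":                            # miljon -> miljonte
        return cardinal + "te"
    if cardinal.endswith("io") or cardinal == "tjugo":  # even tens take -nde
        return cardinal + "nde"                         # trettio -> trettionde
    return cardinal + "de"                              # tretton -> trettonde; hundra -> hundrade
```

For example, `ordinal("tjugo")` gives "tjugonde" and `ordinal("tusen")` gives "tusende", as in the rules above.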
Verbs do not inflect for person or number in modern standard Swedish. They inflect for present and past tense, and for the imperative, subjunctive, and indicative moods. Other tenses are formed by combinations of auxiliary verbs with infinitives or a special form of the participle called the "supine". In total there are 6 spoken active-voice forms for each verb: infinitive, imperative, present, preterite/past, supine, and past participle. The only subjunctive form used in everyday speech is vore, the past subjunctive of vara ("to be"). It is used as one way of expressing the conditional ("would be", "were"), but is optional. Except for this form, subjunctive forms are considered archaic.
Verbs may also take the passive voice. The passive voice for any verb tense is formed by appending -s to the tense. For verbs ending in -r, the -r is first removed before the -s is added. Verbs ending in -er often lose the -e- as well, other than in very formal style: stärker ("strengthens") becomes stärks or stärkes ("is strengthened") (exceptions are monosyllabic verbs and verbs where the root ends in -s). Swedish uses the passive voice more frequently than English.
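The s-passive formation just described can be sketched mechanically. This minimal sketch ignores the exceptions noted above (monosyllabic verbs, stems ending in -s, and the formal -es variant); the function name is illustrative:

```python
def s_passive(present: str) -> str:
    """Form the s-passive from a present-tense form: drop -r
    (and usually the -e- of -er), then add -s."""
    if present.endswith("er"):
        return present[:-2] + "s"   # stärker -> stärks
    if present.endswith("r"):
        return present[:-1] + "s"   # målar -> målas
    return present + "s"
```

For example, `s_passive("stärker")` gives "stärks" ("is strengthened") and `s_passive("målar")` gives "målas", as in the painted-door examples later in the text.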
Swedish verbs are divided into four groups:
| Group | Description |
|---|---|
| 1 | regular -ar verbs |
| 2 | regular -er verbs |
| 3 | short verbs, end in -r |
| 4 | strong and irregular verbs, end in -er or -r |
About 80% of all verbs in Swedish are group 1 verbs, which is the only productive verb group. Swenglish variants of English verbs can be made by adding -a to the end of an English verb, sometimes with minor spelling changes. The verb is then treated as a group 1 verb. Examples of modern loan words within the IT field are chatta and surfa. Swenglish variants from the IT field that may be used but are not considered Swedish include maila, mejla ([ˈmejˌla], to email or mail) and savea, sejva ([ˈsejˌva] to save).
The stem of a verb is based on the present tense of the verb. If the present tense ends in -ar, the -r is removed to form the stem, e.g., kallar → kalla-. If the present tense ends in -er, the -er is removed, e.g., stänger → stäng-. For short verbs, the -r is removed from the present tense of the verb, e.g., syr → sy-. The imperative is the same as the stem.
For group 1 verbs, the stem ends in -a, the infinitive is the same as the stem, the present tense ends in -r, the past tense in -de, the supine in -t, and the past participle in -d, -t, and -de.
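The group 1 pattern is purely suffix-driven, so it can be sketched as a tiny function (the function name is illustrative; the example verb arbeta, "to work", appears in the tense tables below):

```python
def conjugate_group1(stem: str) -> dict:
    """Conjugate a regular -ar verb from its stem, which ends in -a."""
    return {
        "infinitive": stem,             # arbeta
        "present": stem + "r",          # arbetar
        "past": stem + "de",            # arbetade
        "supine": stem + "t",           # arbetat
        "past_participle": stem + "d",  # arbetad (common-gender form)
    }
```

Calling `conjugate_group1("arbeta")` reproduces the forms in the tense table further down (arbetar, arbetade, arbetat).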
For group 2 verbs, the stem ends in a consonant, the infinitive ends in -a, and the present tense in -er. Group 2 verbs are further subdivided into groups 2a and 2b. For group 2a verbs, the past tense ends in -de and the past participle in -d, -t, and -da. For group 2b verbs, the past tense ends in -te and the past participle in -t, -t, and -ta. Which subgroup a verb belongs to is decided by whether the stem ends in a voiced or a voiceless consonant. For example, the stem of heta (to be called) is het-, and as t is a voiceless consonant the past tense ends in -te, giving hette. If the stem ends in a voiced consonant, however, as in störa (to disturb), the past tense ends in -de, giving störde.
For group 3 verbs, the stem ends in a vowel that is not -a, the infinitive is the same as the stem, the present tense ends in -r, the past tense in -dde, the supine in -tt, and the past participle in -dd, -tt, and -dda.
Group 4 verbs are strong and irregular verbs. Many commonly used verbs belong to this group. For strong verbs, the vowel changes for the past and often the supine, following a definite pattern, e.g., stryka is a strong verb that follows the u/y, ö, u pattern (see table below for conjugations). Irregular verbs, such as vara (to be), are completely irregular and follow no pattern. Lately, an increasing number of verbs formerly conjugated with strong inflection have come to be conjugated with their weak equivalents in colloquial speech.
| Group | Stem | Imperative | Infinitive | Present | Past | Supine | Past participle | English |
|---|---|---|---|---|---|---|---|---|
| 4 (strong) | stryk-* | stryk! | stryka | stryker | strök | strukit | struken | to strike out |
| 4 (irregular) | var- | var! | vara | är | var | varit | - | to be |

*often new vowel
Examples of tenses with English translations
| Tense | English | Swedish |
|---|---|---|
| Infinitive | To work | (Att) arbeta |
| Present tense | I work | Jag arbetar |
| Past tense, imperfect aspect | I worked | Jag arbetade |
| Past tense, perfect aspect | I have worked | Jag har arbetat |
| Future tense, futurum simplex | I will work | Jag ska arbeta |
The irregular verb gå
| Tense | English | Swedish |
|---|---|---|
| Infinitive | To walk | (Att) gå |
| Present tense | I walk | Jag går |
| Past tense, imperfect aspect | I walked | Jag gick |
| Past tense, perfect aspect | I have walked | Jag har gått |
| Future tense, futurum simplex | I will walk | Jag ska gå |
As in all Germanic languages, strong verbs change their vowel sounds in the various tenses. For most Swedish strong verbs that have a verb cognate in English or German, that cognate is also strong. For example, "to bite" is a strong verb in all three languages as well as Dutch:
| Language | Infinitive | Present | Past | Perfect | Past participle |
|---|---|---|---|---|---|
| Swedish | bita | jag biter | jag bet | jag har bitit | biten, bitet, bitna |
| Dutch | bijten | ik bijt | ik beet | ik heb gebeten | gebeten |
| German | beißen | ich beiße | ich biss | ich habe gebissen | gebissen |
| English | to bite | I bite | I bit | I have bitten | bitten |
The supine (supinum) form is used in Swedish to form the composite past form of a verb. For verb groups 1-3 the supine is identical to the neuter form of the past participle. For verb group 4, the supine ends in -it while the past participle's neuter form ends in -et. Clear pan-Swedish rules for the distinction between the -et and -it verbal suffixes were, however, lacking before the first official Swedish Bible translation, completed in 1541.
This is best shown by example:
- Simple past: I ate (the) dinner - Jag åt maten (using preterite)
- Composite past: I have eaten (the) dinner - Jag har ätit maten (using supine)
- Past participle common: (The) dinner is eaten - Maten är äten (using past participle)
- Past participle neuter: (The) apple is eaten - Äpplet är ätet
- Past participle plural: (The) apples are eaten - Äpplena är ätna
The supine form is used after ha (to have). In English this form is normally merged with the past participle, or the preterite, and this was formerly the case in Swedish, too (the choice of -it or -et being dialectal rather than grammatical); however, in modern Swedish, they are separate, since the distinction of -it being supine and -et being participial was standardised.
The passive voice in Swedish is formed in one of four ways:
- add an -s to the infinitive form of the verb
- use a form of bli (become) + the perfect participle
- use a form of vara (be) + the perfect participle
- use a form of få (get) + the perfect participle
Of the first three forms, the first (s-passive) tends to focus on the action itself rather than the result of it. The second (bli-passive) stresses the change caused by the action. The third (vara-passive) puts the result of the action in the centre of interest:
- Dörren målas. (Someone paints the door right now.)
- Dörren blir målad. (The door is being painted, in a new colour or wasn't painted before.)
- Dörren är målad. (The door is painted.)
The fourth form is different from the others, since it is analogous to the English "get-passive": Han fick dörren målad (He got the/his door painted). This form is used when you want to use a subject other than the "normal" one in a passive clause. In English you could say: "The door was painted for him", but if you want "he" to be the subject you need to say "He got the door painted." Swedish uses the same structure.
The subjunctive mood is rarely used in modern Swedish and is mostly limited to fixed expressions like leve kungen, "long live the king". Present subjunctive is formed by adding the "-e" ending to the stem of a verb:
| Infinitive | Present indicative | Present subjunctive |
|---|---|---|
| att tala, to speak | talar, speak(s) | tale, may speak |
| att bliva, to become | bli(ve)r, become(s) | blive, may become |
| att skriva, to write | skriver, write(s) | skrive, may write |
| att springa, to run | springer, run(s) | sprunge, may run |
Adjectival adverbs are formed by putting the adjective in neuter singular form. Adjectives ending in -lig may take either the neuter singular ending or the suffix -en, and occasionally -ligen is added to an adjective not already ending in -lig.
| Adjective | Neuter | Adverb |
|---|---|---|
| tjock, thick | tjockt, thick | tjockt, thickly |
| snabb, fast | snabbt, fast | snabbt, fast |
| avsiktlig, intentional | avsiktligt, intentional | avsiktligt, avsiktligen, intentionally |
| stor, great, large | stort, great, large | storligen, greatly; i stort sett, largely |
Adverbs of direction in Swedish show a distinction that is lacking in English: some have different forms depending on whether one is heading that way or is already there. For example:
- Jag steg upp på taket. Jag arbetade där uppe på taket.
- I climbed up on the roof. I was working up there on the roof.
| Heading that way | Already there | English |
|---|---|---|
| upp | uppe | up |
| ner | nere | down |
| in | inne | in |
| ut | ute | out |
| hem | hemma | home |
| bort | borta | away |
Unlike in more conservative Germanic languages (e.g. German), putting a noun into a prepositional phrase does not alter its inflection, case, number or definiteness in any way, except in a very small number of set phrases.
Prepositions of location
| Preposition | English | Example | Translation |
|---|---|---|---|
| på | on | Råttan dansar på bordet. | The rat dances on the table. |
| under | under | Musen dansar under bordet. | The mouse dances under the table. |
| i | in | Kålle arbetar i Göteborg. | Kålle works in Gothenburg. |
| vid | by | Jag är vid sjön. | I am by the lake. |
| till | to | Ada har åkt till Göteborg. | Ada has gone to Gothenburg. |
Prepositions of time
| Preposition | English | Example | Translation |
|---|---|---|---|
| på | at | Vi ses på rasten. | See you at the break. |
| före | before | De var alltid trötta före rasten. | They were always tired before the break. |
| om | in | Kan vi ha rast om en timme? | May we have a break in one hour? |
| i | for | Kan vi ha rast i en timme? | May we have a break for one hour? |
| på | for (in a negative statement) | Vi har inte haft rast på två timmar. | We have not had a break for two hours. |
| under | during | Vi arbetade under helgdagarna. | We worked during the holidays. |
The general rule is that prepositions are placed before the word they are referring to. However, there are a few ambipositions that may appear on either side of the head:
| Adposition | Meaning | Succeeding adposition (postposition) | Preceding adposition (preposition) | Translation |
|---|---|---|---|---|
| runt | around | riket runt | runt riket | around the Kingdom |
| emellan | between | bröder emellan | emellan bröder | between brothers |
| igenom | through | natten igenom | igenom natten | the night through / through the night |
Being a Germanic language, Swedish syntax shows similarities to both English and German. Like English, Swedish has a subject–verb–object basic word order, but like German, utilizes verb-second word order in main clauses, for instance after adverbs, adverbial phrases and dependent clauses. Adjectives generally precede the noun they determine, though the reverse is not infrequent in poetry. Nouns qualifying other nouns are almost always compounded on the fly (as with German, but less so with English); the last noun is the head.
A general word-order template may be drawn for a Swedish sentence, where each part, if it does appear, appears in this order. (Source—Swedish For Immigrants level 3).
Main clause order: Fundament, Finite verb, Subject (if not fundament), Clausal adverb/negation, Non-finite verb (in infinitive or supine), Object(s), Spatial adverb, Temporal adverb.

Subordinate clause order: Conjunction, Subject, Clausal adverb/negation, Finite verb, Non-finite verb (in infinitive or supine), Object(s), Spatial adverb, Temporal adverb.
The "Fundament" can be whatever constituent that the speaker wishes to topicalize, emphasize as the topic of the sentence. In the unmarked case, with no special topic, the subject is placed in the fundament position. Common fundaments are an adverb or object, but it is also possible to topicalize basically any constituent, including constituents lifted from a subordinate clause into the fundament position of the main clause: Honom vill jag inte att du träffar. (Him I do not want you to meet.) or even the whole subordinate clause: Att du följer honom hem accepterar jag inte. (That you follow him home I do not accept.). An odd case is the topicalization of the finite verb, which requires the addition of a "dummy" finite verb in the V2 position, so that the same clause has two finite verbs: Arbetade gjorde jag inte igår. (Worked did I not yesterday.)
As its name suggests, the elongate body of the small-scaled skink is covered in relatively small, glossy scales (2)(3). The background colour to the upperparts of the body is brownish grey, but a series of stripes extend lengthways from the snout towards the tail. Running down the middle of the back are consecutive segments of light and dark brown, adjoined on either side by a conspicuous pale stripe. A dark brown stripe, speckled above and below with pale markings, extends along the sides, while the belly is pale all over (2).
Very little is known about the biology of the small-scaled skink other than it is an active diurnal forager (2)(4). In captivity, it will consume a wide variety of invertebrates (2), but most New Zealand skinks are omnivorous with fruit and insects known to form a large proportion of their diet (3).
In captivity, the young are born from late January to early March with two to three offspring in each litter (2).
As with other New Zealand skinks, habitat loss and introduced mammalian predators are thought to present the greatest threat to the small-scaled skink (3)(4). Owing to these impacts, the small-scaled skink population is believed to be undergoing a serious decline (4).
With so many unknowns associated with the small-scaled skink, the immediate priority is to conduct further research into the species’ conservation status by obtaining data on its distribution, habitat use, relative abundance and threats, including the impact of mammalian predators. The collated information will then be used to determine the optimum means of ensuring the survival of this species (4).
In this activity, children use common craft materials and ultraviolet (UV)-sensitive beads to construct a person (or dog or imaginary creature). They use sunscreen, foil, paper, and more to test materials that might protect UV Kid from being exposed to too much UV radiation. Includes background for facilitators. This activity is part of the "Explore!" series of activities designed to engage children in space and planetary science in libraries and informal learning environments.

Each lesson or activity in this toolkit is related to NASA's Lunar Reconnaissance Orbiter (LRO). The toolkit is designed so that each lesson can be done independently, or combined and taught in a sequence. The Teacher Implementation Guide provides recommendations for combining the lessons into three main strands: 1) Lunar Exploration. These lessons provide a basic introduction to Moon exploration. Note that this strand is also appropriate for use in social studies classes. 2) Mapping the Moon. These lessons provide a more in-depth understanding of Moon exploration through the use of scientific data and student inquiry. The lessons also include many connections to Earth science and geology. 3) Tools of Investigation. These higher-level lessons examine the role of technology, engineering and physics in collecting and analyzing data.

This project engages students in the science and engineering processes used by NASA Astrobiologists as they explore our Solar System and try to answer the compelling question, "Are we Alone?" Students will identify science mission goals and select an astrobiologically significant target of interest: Mars, Europa, Enceladus or Titan. Students will then design their mission to this target in search of their chosen biosignature(s). Students will encounter the same considerations and challenges facing NASA scientists and engineers as they search for life in our Solar System. Students will need to balance the return of their science data with engineering limitations such as power, mass and budget. Risk factors play a role and will add to the excitement in this interactive science and engineering activity. Astrobiobound! will help students see how science and systems engineering are integrated to achieve a focused scientific goal. Includes an alignment document for NGSS and Common Core State Standards.

This activity focuses on the relationship between the science of looking for life and the tools, on vehicles such as the Mars Rover, that make it possible. Learners will create their own models of a Mars rover. They determine what tools would be necessary to help them better understand Mars (and something about life on Mars/its habitability). Then they work in teams to complete a design challenge where they incorporate these elements into their models, which must successfully complete a task. Teams may also work together to create a large-scale, lobby-sized version that may be put on display in the library to engage their community. The activity also includes specific tips for effectively engaging girls in STEM. This is activity 6 in Explore: Life on Mars? that was developed specifically for use in libraries.

This is an annotated, topical list of science fiction novels and stories based on more or less accurate astronomy and physics ideas. Learners can read fictional works that involve asteroids, astronomers, black holes, comets, space travel where Einstein's ideas are used correctly, exploding stars, etc.

This is a set of four activities about spacecraft design. Learners will use the information learned in previous lessons, combined with their own creativity and problem-solving skills, to design and test a parachuting probe that will withstand a fall from a high point, land intact, be able to descend slowly, float in liquid, and cost the least to launch into space. Includes a glossary, information for families, and guidance for deepening the science. This is lesson 7 of 8 in the Jewel of the Solar System: From Out-of-School to Outer Space, an adaptation for afterschool programs of the Cassini-Huygens educational product Reading, Writing, and Rings.
This is a series of three webpages about how humans and computers communicate. Learners will explore the binary and hexadecimal systems and how engineers use them to translate spacecraft data into images.
This is a game about data compression. Learners will use virtual foam balls to explore the different compression methods (lossless, lossy, and superchannel) used by the Earth Observing 3 mission.
This is a set of four activities about spacecraft design. Learners will think like engineers as they design, peer review, and then construct and present their spacecraft to travel to Saturn. Includes a glossary, information for families, and guidance for deepening the science. This is lesson 5 of 8 in the Jewel of the Solar System: From Out-of-School to Outer Space, an adaptation for afterschool programs of the Cassini-Huygens educational product Reading, Writing, and Rings.
NORD gratefully acknowledges Carole Samango-Sprouse, EdD, Executive Director and Chief Science Officer, The Focus Foundation, for assistance in the preparation of this report.
Trisomy X is a disorder that affects females and is characterized by the presence of an additional X chromosome. Normally, females have two X chromosomes; however, females with trisomy X carry three X chromosomes in the nuclei of body cells. There are specific physical features (phenotype) associated with this chromosomal disorder. Common symptoms that can potentially occur include language-based learning disabilities, developmental dyspraxia, tall stature, low muscle tone (hypotonia), and abnormal bending or curving of the pinkies toward the ring fingers (clinodactyly). Trisomy X occurs randomly as a result from errors during the division of reproductive cells in one of the parents. This disorder occurs in one in 900 to 1,000 live births.
The symptoms and physical features associated with trisomy X vary greatly from one person to another. Some females may have no symptoms (asymptomatic) or very mild symptoms and may go undiagnosed. Other women may have a wide variety of different abnormalities. It is important to note that affected individuals may not have all of the symptoms discussed below. Affected individuals should talk to their specialists and medical team about their specific case, associated symptoms and overall prognosis.
Trisomy X is often associated with developmental differences and language-based learning disabilities. Intelligence is usually within the normal range. IQ may be 10-15 points below that of siblings or control groups if early intervention has not been successful or begun early enough. Infants and children with trisomy X experience delays in attaining developmental milestones, especially in the acquisition of motor and speech skills. For example, walking may be delayed and affected girls may exhibit poor coordination and clumsiness. Speech and language development is also commonly delayed and may become apparent by approximately one year to 18 months. Girls with trisomy X have an increased frequency of language-based learning disabilities including reading deficiencies such as dyslexia, reading comprehension deficits and/or reading fluency issues in conjunction with other language-based disabilities. They also have developmental dyspraxia which affects learning in every domain. Typically, motor planning skills are deficient, which affects gross and fine motor, speech and language as well as executive function.
During early childhood or adolescence, girls with trisomy X usually exhibit increased height as compared with other girls their age (tall stature). Most girls are at or above the 75th percentile for height, with an average height of 5 feet 7 inches.
In some cases, infants with trisomy X may have mild facial abnormalities including vertical skin folds that may cover the eyes’ inner corners (epicanthal folds), widely spaced eyes (hypertelorism), and smaller than normal head circumference. Most infants also have decreased muscle tone (hypotonia) and the fifth finger may be abnormally bent or curved mildly, which is called clinodactyly.
Individuals with trisomy X may have an increased incidence of anxiety and attention deficit hyperactivity disorder (ADHD). In some cases, such abnormalities improve with maturity and as the girls reach adulthood. Some individuals have minimal to no behavioral or emotional abnormalities; others have more issues that may necessitate intervention, which is typically only necessary short term. There are no controlled studies on behavioral or emotional abnormalities in trisomy X and the incidence of such conditions is unknown, although they are believed to occur with greater frequency than in the general population. Early detection and treatment are very beneficial for girls with trisomy X. In many cases, these girls have few issues later in life when identified early and treated appropriately.
In most cases, sexual development and fertility are normal. However, reports indicate that some affected females may have abnormal development of the ovaries (ovarian dysgenesis) and/or the uterus; delayed puberty or early onset of puberty (precocious puberty), and/or fertility problems. There have been reports of women with trisomy X developing premature ovarian failure (POF). POF is the loss of function of the ovaries before the age where menopause is expected to begin. POF can cause a decrease in the production of certain hormones and eggs may no longer be released each month.
Less often, additional abnormalities have been described in individuals with trisomy X including kidney abnormalities, such as absence of a kidney (unilateral renal agenesis) or malformation (dysplasia) of the kidneys; recurrent urinary tract infections; seizures; constipation; abdominal pain; flatfeet (pes planus); and pectus excavatum, a condition in which the breastbone is mildly depressed into the chest. Heart (cardiac) abnormalities have also been described in some isolated cases.
Trisomy X is a chromosomal abnormality characterized by the presence of an extra X chromosome. Chromosomes are found in the nucleus of all body cells. They carry the genetic characteristics of each individual. Pairs of human chromosomes are numbered from 1 through 22, with an unequal 23rd pair that normally consists of an X and Y chromosome for males and two X chromosomes for females. Thus, females with a normal chromosomal make-up (karyotype) have 46 chromosomes, including two X chromosomes (46,XX karyotype); they receive one chromosome from the mother and one from the father in each of the 23 pairs.
However, females with trisomy X have 47 chromosomes, three of which are X chromosomes (47,XXX karyotype). Trisomy X is a genetic disorder, but it is not inherited. The presence of the extra X chromosome results from errors during the normal division of reproductive cells in one of the parents (nondisjunction during meiosis). These errors occur randomly for no apparent reason (sporadically). Studies have shown that the risk of such errors increases with advanced maternal age. In most cases, the additional X chromosome comes from the mother. In approximately 20 percent of cases, nondisjunction events occur after conception in the developing fetus (postzygotic nondisjunction).
In some cases, only a certain percentage of an individual’s cells may have three X chromosomes, while others have a normal chromosomal make-up (46,XX/47,XXX mosaicism). Evidence suggests that such cases may be associated with milder symptoms and fewer developmental and learning problems, but further research is needed. Variants have also been described in which cells contain four or five X chromosomes (tetra X syndrome and penta X syndrome). Such variants are typically associated with more severe symptoms and findings. (For further information, please see the “Related Disorders” section of this report below.)
Researchers believe that the symptoms and physical features associated with trisomy X develop because of overexpression of the genes that escape normal X-inactivation. Although females have two X chromosomes, one of the X chromosomes is “turned off” and all of the genes on that chromosome are inactivated (X-inactivation). Researchers suspect that the presence of a third X chromosome allows genes normally “turned off” to be expressed. However, the exact manner in which the extra X chromosome ultimately causes the symptoms and physical features of trisomy X is not fully understood.
Trisomy X is a chromosomal disorder that affects only females. Reported estimates concerning the disorder’s frequency have varied with the most common estimate being one in 1,000 female births. Because many females with the disorder may have few or no symptoms, they may never be diagnosed. Researchers believe that the disorder is underdiagnosed and that the reported number of cases as reflected in the medical literature is inappropriately low. Researchers believe that only approximately 10 percent of cases are diagnosed. With increased detection, more in depth studies may be conducted and more girls with triple X can be appropriately treated.
Trisomy X may be suspected based upon the identification of characteristic developmental, behavioral or learning disabilities. A diagnosis may be confirmed by a thorough clinical evaluation, a detailed family history, and certain specialized tests such as chromosomal analysis performed on blood samples that can reveal the presence of an extra X chromosome in body cells.
In addition, trisomy X is increasingly being diagnosed before birth (prenatally) based on chromosomal analysis performed subsequent to amniocentesis or chorionic villus sampling (CVS). During amniocentesis, a sample of fluid that surrounds the developing fetus is removed and analyzed, while CVS involves the removal of tissue samples from a portion of the placenta.
Approximately 5-15 percent of women with Turner syndrome also have a 47,XXX karyotype found in certain white blood cells (blood lymphocytes), but the characteristic Turner syndrome karyotype (45,X) in other cells.
Specific therapeutic strategies depend upon several factors including the age of an affected individual upon diagnosis, the specific symptoms that are present and the overall severity of the disorder in each case. Early intervention services are recommended for infants and children diagnosed with trisomy X. Experts advise developmental assessment by age four months to evaluate muscle tone and strength; language and speech assessment by 12 months of age to evaluate expressive and receptive language development; and pre-reading assessment during preschool years prior to first grade to look for early signs of reading dysfunction. An evaluation is recommended to help assess additional learning disabilities and social and emotional problems.
Evidence suggests that affected children are greatly responsive to early intervention services and treatment. Such services can include speech therapy, occupational therapy, physical therapy, and developmental therapy and counseling.
Infants and children with trisomy X should also receive kidney (renal) and heart (cardiac) evaluations to detect abnormalities of those organs potentially associated with the disorder. Adolescent and adult women who exhibit late periods (menarche), menstrual abnormalities, or fertility issues should be evaluated for primary ovarian failure.
Genetic counseling will be of benefit for affected individuals and their families. Additional treatment should be targeted by age: physical therapy in infancy, speech therapy for language delay between 12 and 15 months, screening for early signs of reading dysfunction prior to first grade, and evaluation for anxiety and ADHD by third grade. Adolescence is challenging, and girls with triple X often struggle as they enter the middle school years, so short-term counseling may be necessary to help them through these turbulent years.
Information on current clinical trials is posted on the Internet at www.clinicaltrials.gov. All studies receiving U.S. Government funding, and some supported by private industry, are posted on this government web site.
For information about clinical trials being conducted at the NIH Clinical Center in Bethesda, MD, contact the NIH Patient Recruitment Office:
Tollfree: (800) 411-1222
TTY: (866) 411-1010
Speicher MR, Antonarakis SE, Motulsky AG. Eds. Vogel and Motulsky’s Human Genetics: Problems and Approaches. 4th ed. Springer. New York, NY; 2009:124.
Samango-Sprouse CA. XXX Syndrome (Triple X Syndrome). NORD Guide to Rare Disorders. Lippincott Williams & Wilkins. Philadelphia, PA. 2003:89.
Samango-Sprouse CA. Frontal Lobe Development in Childhood. In: Miller BL, Cummings JL, eds. The Human Frontal Lobes: Functions and Disorders. 2nd ed. Guilford Press. New York, NY; 2007.
Rimoin D, Connor JM, Pyeritz RP, Korf BR. Eds. Emory and Rimoin’s Principles and Practice of Medical Genetics. 4th ed. Churchill Livingstone. New York, NY; 2002:1195-1196.
Otter M, Schrander-Stumpel CT, Curfs LM. Triple X syndrome: a review of the literature. Eur J Hum Genet. 2010;18:265-271.
Krusinskie V, Alvesalo L, Sidlauskas A. The craniofacial complex in 47,XXX females. Eur J Orthod. 2005;27:396-401.
Liebezeit BU, Rohrer TR, Singer H, Doerr HG. Tall stature as presenting symptom in a girl with triple X syndrome. J Pediatr Endocrinol Metab. 2003;16:233-235.
Rovet J, Netley C, Bailey J, Keenan M, Stewart D. Intelligence and achievement in children with extra X aneuploidy: a longitudinal perspective. Am J Med Genet. 1995;60:356-363.
Ratcliffe SG, Pan H, McKie M. The growth of XXX females: population-based studies. Ann Hum Biol. 1994;21:57-66.
Samango-Sprouse CA, Rogol A. XXY: The Hidden Disability and Prototype for Infantile Presentation of Developmental Dyspraxia (IDD). Infants and Young Children. 2002;15:11-18.
Tartaglia NR, Howell S, Sutherland A, Wilson R, Wilson L. A review of trisomy X (47,XXX). Orphanet encyclopedia, 2010. Available at: http://www.ojrd.com/content/5/1/8 Accessed March 25, 2014.
Mayo Clinic for Medical Education and Research. Triple X Syndrome. Nov. 08, 2012. Available at: http://www.mayoclinic.com/health/triple-x-syndrome/DS01090 Accessed March 25, 2014.
National Organization for Rare Disorders (NORD)
55 Kenosia Ave., Danbury CT 06810 • (203) 744-0100
Introduction to Named Pipes
Bash uses named pipes in a really neat way. Recall that when you enclose a command in parentheses, the command is actually run in a “subshell”; that is, the shell clones itself and the clone interprets the command(s) within the parentheses. Since the outer shell is running only a single “command”, the output of a complete set of commands can be redirected as a unit. For example, the command:
(ls -l; ls -l) >ls.out
writes two copies of the current directory listing to the file ls.out.
Process substitution (distinct from the $(...) form of command substitution) occurs when you put a < or > in front of the left parenthesis. For instance, typing the command:
cat <(ls -l)
results in the command ls -l executing in a subshell as usual, but redirects the output to a temporary named pipe, which bash creates, names and later deletes. Therefore, cat has a valid file name to read from, and we see the output of ls -l, taking one more step than usual to do so. Similarly, giving >(commands) results in bash naming a temporary pipe from which the commands inside the parentheses read their input.
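The output form can be tried directly at a bash prompt. The sketch below is purely illustrative (the temporary directory and the shout.txt file name are made up); note that the process inside >(...) runs asynchronously, so a short sleep is used before reading its result:

```shell
#!/usr/bin/env bash
# >(...) hands the writer a file name that is really a pipe; whatever is
# written there becomes the standard input of the command in parentheses.
workdir=$(mktemp -d)

# echo's output flows through the substituted pipe into tr,
# which uppercases it and saves it to a file.
echo "hello" > >(tr 'a-z' 'A-Z' > "$workdir/shout.txt")

sleep 1                     # let the asynchronous tr process finish
cat "$workdir/shout.txt"    # prints HELLO
```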
If you want to see whether two directories contain the same file names, run the single command:
cmp <(ls /dir1) <(ls /dir2)
The compare program cmp will see the names of two files which it will read and compare.
Process substitution also makes the tee command (used to view and save the output of a command) much more useful in that you can cause a single stream of input to be read by multiple readers without resorting to temporary files; bash does all the work for you. The command:
ls | tee >(grep foo | wc >foo.count) \
         >(grep bar | wc >bar.count) \
   | grep baz | wc >baz.count
counts the number of occurrences of foo, bar and baz in the output of ls and writes this information to three separate files. Process substitutions can even be nested:
cat <(cat <(cat <(ls -l)))

works as a very roundabout way to list the current directory.
As you can see, while the unnamed pipes allow simple commands to be strung together, named pipes, with a little help from bash, allow whole trees of pipes to be created. The possibilities are limited only by your imagination.
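As a concrete end-to-end illustration of such a pipe tree, here is a runnable variant of the earlier tee example (the sample input and the .count file names are invented for this demonstration):

```shell
#!/usr/bin/env bash
# Fan a single stream out to three independent counters.
cd "$(mktemp -d)"
printf 'foobar\nbaz\nfoo\n' > input.txt

cat input.txt \
    | tee >(grep -c foo > foo.count) \
          >(grep -c bar > bar.count) \
    | grep -c baz > baz.count

sleep 1   # the >(...) writers run asynchronously; let them finish
cat foo.count bar.count baz.count   # prints 2, 1 and 1 on separate lines
```

Here grep -c counts matching lines, so each reader of the shared stream produces its own tally without any explicit temporary-file bookkeeping.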
Plus many more. | http://www.linuxjournal.com/article/2156?page=0,1 |
4.09375 | |Classification and external resources|
Intestinal parasites are parasites that can infect the gastro-intestinal tract of humans and other animals. They can live throughout the body, but most prefer the intestinal wall. Means of exposure include: ingestion of undercooked meat, drinking infected water, and skin absorption. The two main types of intestinal parasites are those helminths and protozoa that reside in the intestines (not all helminths and protozoa are intestinal parasites). An intestinal parasite can damage or sicken its host via an infection which is called helminthiasis in the case of helminths.
Signs and symptoms
These depend on the type of infection.
The major groups of parasites include protozoans (organisms having only one cell) and parasitic worms (helminths). Of these, protozoans, including cryptosporidium, microsporidia, and isospora, are most common in HIV-infected persons. Each of these parasites can infect the digestive tract, and sometimes two or more can cause infection at the same time.
Parasites can get into the intestine through the mouth from uncooked or unwashed food, contaminated water or hands, or by skin contact with larva-infected soil; they can also be transferred by the sexual act of anilingus in some cases. When the organisms are swallowed, they move into the intestine, where they can reproduce and cause symptoms. Children are particularly susceptible if they are not thoroughly cleaned after coming into contact with infected soil that is present in environments they may frequently visit, such as sandboxes and school playgrounds. People in developing countries are also at particular risk due to drinking water from sources that may be contaminated with parasites that colonize the gastrointestinal tract.
Due to the wide variety of intestinal parasites, a description of the symptoms rarely is sufficient for diagnosis. Instead, two common tests are used: stool samples may be collected to search for the parasites, and an adhesive may be applied to the anus in order to search for eggs.
Good hygiene is necessary to avoid reinfection. The Rockefeller Foundation's hookworm campaign in Mexico in the 1920s was extremely effective at eliminating hookworm from humans with the use of antihelminthics. However, preventative measures were not adequately introduced to the people who were treated, so the rate of reinfection was extremely high, and the project, evaluated by any scientific measure, was a marked failure. More education was needed to inform the people of the importance of wearing shoes, using latrines (better access to sanitation), and good hygiene.
Drugs are frequently used to kill parasites in the host. In earlier times, turpentine was often used for this, but modern drugs do not poison intestinal worms directly. Rather, antihelmintic drugs now inhibit an enzyme that is necessary for the worm to make the substance that prevents the worm from being digested.
For example, tapeworms are usually treated with a medicine taken by mouth. The most commonly used medicine for tapeworms is Praziquantel. Praziquantel is also used to treat infections of certain parasites (e.g., Schistosoma and liver flukes).
- Loukopoulos P, Komnenou A, Papadopoulos E, Psychas V. Lethal Ozolaimus megatyphlon infection in a green iguana (Iguana iguana rhinolopa). Journal of Zoo and Wildlife Medicine 2007;38:131-134.
- Birn, Anne-Emanuelle, and Armando Solórzano. 1999. Public health policy paradoxes: science and politics in the Rockefeller Foundation's hookworm campaign in Mexico in the 1920s. Social Science & Medicine 49 (9):1197-1213.
Mississippi Alluvial Plain
The Mississippi Alluvial Plain (a.k.a. Delta) is a distinctive natural region, in part because of its flat surface configuration and the dominance of physical features created by the flow of large streams. This unique physiography occupies much of eastern Arkansas including all or parts of twenty-seven counties. The Alluvial Plain, flatter than any other region in the state, has elevations ranging from 100 to 300 feet above sea level. In Arkansas, the Alluvial Plain extends some 250 miles in length from north to south and varies in width from east to west from only twelve miles in Desha County to as much as ninety-one miles measured from Little Rock (Pulaski County) to the Mississippi River.
The work of large rivers (including the Mississippi, Arkansas, White, and St. Francis rivers) and other smaller rivers and streams has played an important role in forming the character of the landscape. These rivers eroded older deposits and built up deep layers of soil, gravel, and clay transported from slopes as far away as the Rocky Mountains to the west and the Appalachians to the east. The result of these alluvial processes is a terrain and soil suitable for large-scale farming. In fact, the Mississippi Alluvial Plain is one of the most agriculturally productive regions in the world.
Alluvial (stream-deposited) material covers almost the entire region. Interestingly, terraces are found throughout the Alluvial Plain, frequently paralleling streams but at a slightly higher elevation than the adjacent stream banks. These terraces are older than present bottomlands and represent former levels of bottomland through which streams have now eroded. The so-called recent alluvium has been deposited over the last 12,000 years and contains fertile “water-washed” material, especially silt.
The deep, fertile soils of the Mississippi Alluvial Plain are sometimes extremely dense and poorly drained. The combination of flat terrain and poor drainage creates conditions suitable for wetlands. Wetlands, areas where the periodic or permanent presence of water controls the characteristics of the environment and associated plants and animals, now cover approximately eight percent of Arkansas’s land surface. While some wetland areas remain intact, many have been drained and converted to agricultural land uses. Protecting the remaining wetlands and encouraging the restoration of some former wetland areas are significant natural resource conservation issues.
At one time, wetlands were very abundant across the Mississippi Alluvial Plain. The decline in wetlands began years ago when the first ditches were dug to drain extensive areas of the Alluvial Plain. Clearing bottomland hardwoods for agriculture and other activities has resulted in the loss of more than seventy percent of the original wetlands.
The majority of Arkansas’s wetlands, occupying a diverse physiographic setting, are often riverine and depressional wetlands associated with the floodplains of the Mississippi River and its major tributaries. Some of the most significant wetlands are referred to as “bottoms” or “bottomland hardwood forests.” Of particular importance is the Cache River and lower White River area, where impressive stands of bottomland hardwoods are found. It represents the largest continuous expanse of bottomland hardwoods in the Lower Mississippi Valley. Nearly one-third of the remaining bottomland hardwoods in the Arkansas Delta are found within the ten-year floodplain of the Cache and lower White rivers.
The wetlands of the Delta offer an internationally important winter habitat for migratory water fowl. The White River National Wildlife Refuge alone is a temporary home to between 3,000 and 10,000 Canada geese and up to 300,000 ducks per year. These large numbers account for one-third of the total found in Arkansas and ten percent of the Mississippi Flyway total.
Wetlands of Arkansas serve many important functions in addition to being a vital wildlife habitat, including flood storage and flood prevention, natural water quality improvement (sediment traps, for example), shoreline erosion protection, groundwater recharge, recreational opportunities, and aesthetic beauty.
The original natural vegetation of the region was significantly different from the other natural regions in Arkansas in part because of the region’s wetland characteristics. It was largely southern floodplain forest suited to the wet, poorly drained soils. Cypress-tupelo-gum types occupied the wettest sites. The willow oak and overcup oak were found on flat and poorly drained locations, and oak-hickory on higher and better drained terrace sites of the floodplain.
Currently, the Mississippi Alluvial Plain has been widely cleared and drained for cultivation. The widespread loss or degradation of forest and wetland habitat has impacted wildlife and reduced bird populations. Relatively small plots of natural vegetation remain along streams, in areas unsuitable for agriculture, or within areas protected from clearing and development. The most significant of these protected areas are the Big Lake National Wildlife Refuge, the Sunken Lands Wildlife Management Areas, the Wapanocca National Wildlife Refuge, the St. Francis National Forest, and the White River National Wildlife Refuge.
A rather unique feature of this region is the Grand Prairie, an area of prairie soils and grasses that are found primarily in Arkansas and Prairie counties in eastern Arkansas. These prairie soils, with their very compact clay subsoil, are more suitable for grass than trees as the natural vegetation cover. The Grand Prairie is an extremely productive agricultural region and is noted for its high yields of rice. Stuttgart (Arkansas County) is known as the rice capital and duck capital of the world.
Another important characteristic of the Alluvial Plain is related to a significant natural hazard, earthquakes. These may occur along the New Madrid Seismic Zone. This seismic zone is a prolific source of intra-plate earthquakes (earthquakes within a tectonic plate) in the southern and mid-western United States. This seismic zone was responsible for the 1811–1812 New Madrid Earthquakes and has the potential to produce large earthquakes in the future. Several relatively small earthquakes have been recorded in the region since 1812, but an important question remains concerning the next “big” earthquake in terms of when it will occur and at what magnitude. As of 2011, according to some experts, there is a ten percent chance of a magnitude 7.0 quake within the next fifty years along the fault that extends from New Madrid, Missouri, to Marked Tree (Poinsett County) and beyond.
In addition to a unique physical landscape, the Mississippi Alluvial Plain has a number of distinctive cultural/demographic characteristics. In an article titled “Delta Population Trends: 1990–2000,” Jason Combs discusses the significant population decline that has occurred within counties bordering the Mississippi River. Population decline, economic depression, and other negative socioeconomic factors characterize many of these Delta counties. Data released from the 2010 census shows that the population decline is continuing within several Arkansas counties that are adjacent to or near the Mississippi River. The most significant population declines (between -10.1 and -20.5 percent) from 2000 to 2010 were in Mississippi, Lee, Phillips, Desha, Chicot, Monroe, and Woodruff counties. These counties have relatively high rates of unemployment and few or no positive features to reverse the trend of population decline, according to Combs.
These and other Delta counties have experienced population decline for a variety of reasons, in addition to high unemployment. According to Combs, part of the problem is the image of the Delta. Strained race relations, poverty, and resistance to social change have “tarnished” the Delta’s image and contributed to the absence of substantial economic development. Moreover, most Americans perceive the Delta as “flat and uninteresting, not a place to go for recreation, retirement, or a glamorous job,” according to an article by Richard Lonsdale and J. Clark Archer. Another contributing factor to the population loss in the Delta is agricultural mechanization. Improvement in mechanization and modern science allowed fewer farmers to produce as much or more agricultural output on the same amount of land with far less labor. The need for fewer farm workers coupled with the absence of other job opportunities has been a significant contributing factor to the population decline and the economic depression that many Delta counties are experiencing.
In summary, the Mississippi Alluvial Plain is a natural region with several distinguishing characteristics, including an extremely flat surface topography; deep alluvial soils; poor drainage; wetland areas; widely scattered bottomland and hardwood forests; large and highly productive farms; counties plagued by economic depression and population loss; and the Mississippi Flyway, with ideal locations for hunting, fishing, and other water-related sports activities. The result is a region marked by sharp social contrast: pockets of prosperity and wealth exist aside poverty and economic despair.
For additional information:
Arkansas Department of Planning. Arkansas Natural Area Planning. Little Rock: State of Arkansas, 1974.
Collins, Janelle, ed. Defining the Delta: Multidisciplinary Perspectives on the Lower Mississippi River Delta. Fayetteville: University of Arkansas Press, 2015.
Combs, Jason. “Delta Population Trends: 1990–2000.” Arkansas Review: A Journal of Delta Studies 34 (April 2004): 26–35.
“Delta Geography.” Delta Cultural Center. http://www.deltaculturalcenter.com/geography/ (accessed July 18, 2011).
“Ecoregions of the Mississippi Alluvial Plain.” The Encyclopedia of Earth. http://www.eoearth.org/article/Ecoregions_of_the_Mississippi_Alluvial_Plain_%28EPA%29 (accessed July 18, 2011).
Hagge, Patrick David. “The Decline and Fall of a Cotton Empire: Economic and Land-Use Change in the Lower Mississippi River ‘Delta’ South, 1930–1970.” PhD diss., Pennsylvania State University, 2013.
Lonsdale, Richard, and J. Clark Archer. “Emptying Areas of the United States, 1990–1995.” Journal of Geography 97 (1998): 108–122.
Stroud, Hubert B., and Gerald T. Hanson. Arkansas Geography: The Physical Landscape and the Historical-Cultural Setting. Little Rock: Rose Publishing Company, 1981.
“Wetlands in Arkansas.” Arkansas Multi-Agency Wetland Planning Team. http://www.mawpt.org/wetlands/ (accessed July 18, 2011).
Whayne, Jeannie, and Willard B. Gatewood, eds. The Arkansas Delta: Land of Paradox. Fayetteville: University of Arkansas Press, 1993.
Hubert B. Stroud
Arkansas State University
Last Updated 12/11/2015
palette

(1) In computer graphics, a palette is the set of available colors. For a given application, the palette may be only a subset of all the colors that can be physically displayed. For example, an SVGA system can display 16 million unique colors, but a given program would use only 256 of them at a time if the display is in 256-color mode. The computer system's palette, therefore, would consist of the 16 million colors, but the program's palette would contain only the 256-color subset.
A palette is also called a CLUT (color look-up table).
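As a minimal sketch of the look-up idea (the colors and pixel values below are invented for illustration), an indexed-color image stores one small number per pixel, and the CLUT resolves that number to a full RGB color at display time:

```shell
#!/usr/bin/env bash
# A CLUT maps the small index stored per pixel to a full RGB color.
declare -A palette=(
    [0]="0 0 0"        # black
    [1]="255 255 255"  # white
    [2]="255 0 0"      # red
)

pixels=(0 2 2 1)       # a 4-pixel "image" in indexed-color form

# Resolve each stored index through the look-up table.
for i in "${pixels[@]}"; do
    echo "pixel index $i -> rgb(${palette[$i]})"
done
```

Storing one index per pixel instead of three full color channels is exactly why 256-color modes were memory-efficient: the image carries bytes, and the palette carries the colors.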
(2) In paint and illustration programs, a palette is a collection of symbols that represent drawing tools. For example, a simple palette might contain a paintbrush, a pencil, and an eraser.
So You Want to Be President? Lesson Plan
Activities to do before and after reading the book by Judith St. George
- Grades: 3–5
About this book
Humorous and just slightly off-kilter, this book is sure to entertain while it enlightens.
- Learn interesting and often little-known facts about the political leaders who have governed our nation from its beginning.
- Understand how democratic values came to be, and how they have been exemplified by people, events, and symbols.
Before Reading the Book
The Name Game
Got any history buffs in your class? Test everyone's knowledge with a quick and easy name game.
- On your blackboard, write the numbers 1–42.
- Ask students to pick their brains and see how many presidents they can name.
- Using the list at the back of So You Want to be President? (if necessary), write each president your class can name in his proper spot.
- Try to spot any trends or patterns in the list — lots of men named James, some relatives, etc.
- You may want to fill in the list, as a class, after you've read the book.
Teaching Plan: Activities
Even United States Presidents started out as regular kids! You can help students understand presidential legacies by imagining their own.
- Ask students to think about any interesting or important facts about their own lives.
- On a piece of paper, ask each to create a time line of his or her life to this point. (For example, born on this day, little sister arrived on this day, started piano lessons on this day, etc.).
- Next, have your class think about what they'd like their future to hold. Ask them to record these anticipated future events in a different color pencil or ink.
- When each has finished, have students share their "legacies," both current and anticipated, with the class.
- Post timelines on a classroom bulletin board.
My Favorite President
So You Want to Be President? is chock-full of interesting tidbits about our Presidents. Use them to captivate your students and encourage them to learn more!
- Using the information gleaned from the book, ask each student to choose a favorite president. The choice should leave aside political associations; it should be based solely on the trivia presented.
- Have students create a list of facts about their chosen president.
- Ask each student to give an oral presentation about his or her favorite. The very brief report could begin with a list of facts about the president and conclude with reasons why the student found these qualities interesting or appealing.
- As a class, discuss each student's choice; were some presidents chosen more often than others? Analyze the outcome.
Other Books About the Electoral Process
Presidential Elections and Other Cool Facts by Syl Sobel
This handy resource book outlines the legal requirements for electing a president, discusses the Electoral College, campaign rules and practices, and much more!
Landslide! A Kid's Guide to the U.S. Elections by Dan Gutman
How does a voting machine work? Who belongs to the Electoral College? What happens if there's a tie? Find answers to these questions, and many more.
Teaching Plan written by Rebecca Gómez.
Diegesis is a style of fiction storytelling in which:
- details about the world itself and the experiences of its characters are revealed explicitly through narrative
- the story is told or recounted, as opposed to shown or enacted.
In diegesis the narrator tells the story. The narrator presents the actions (and sometimes thoughts) of the characters to the readers or audience.
In contrast to mimesis
Diegesis (Greek διήγησις "narration") and mimesis (Greek μίμησις "imitation") have been contrasted since Plato's and Aristotle's times. Mimesis shows rather than tells, by means of action that is enacted. Diegesis is the telling of the story by a narrator. The narrator may speak as a particular character or may be the invisible narrator or even the all-knowing narrator who speaks from "outside" in the form of commenting on the action or the characters.
In Book III of his Republic (c. 373 BC), Plato examines the "style" of "poetry" (the term includes comedy, tragedy, epic and lyric poetry): All types narrate events, he argues, but by differing means. He distinguishes between narration or report (diegesis) and imitation or representation (mimesis). Tragedy and comedy, he goes on to explain, are wholly imitative types; the dithyramb is wholly narrative; and their combination is found in epic poetry. When reporting or narrating, "the poet is speaking in his own person; he never leads us to suppose that he is any one else"; when imitating, the poet produces an "assimilation of himself to another, either by the use of voice or gesture". In dramatic texts, the poet never speaks directly; in narrative texts, the poet speaks as him or herself.
In his Poetics, the ancient Greek philosopher Aristotle argues that kinds of "poetry" (the term includes drama, flute music, and lyre music for Aristotle) may be differentiated in three ways: according to their medium, according to their objects, and according to their mode or "manner" (section I); "For the medium being the same, and the objects the same, the poet may imitate by narration — in which case he can either take another personality as Homer does, or speak in his own person, unchanged — or he may present all his characters as living and moving before us" (section III).
In filmmaking, the term is used to name the story depicted on screen—as opposed to the story in real life time that the screen narrative is about. Diegesis may concern elements, such as characters, events and things within the main or primary narrative. However, the author may include elements which are not intended for the primary narrative, such as stories within stories; characters and events that may be referred to elsewhere or in historical contexts and that are therefore outside the main story and are thus presented in an extradiegetic situation.
For narratologists, all parts of narratives (characters, narrators, existents, actors) are characterized in terms of diegesis. For definitions of diegesis, one should consult Aristotle's Poetics; Gérard Genette's Narrative Discourse: An Essay in Method (Cornell University Press, 1980); or, for a readable introduction, H. Porter Abbott's The Cambridge Introduction to Narrative (Cambridge University Press, 2002). In literature, discussions of diegesis tend to concern discourse/sjužet versus story/fabula, terms drawn from Russian Formalism.
Diegesis is multi-levelled in narrative fiction. Genette distinguishes between three "diegetic levels". The extradiegetic level (the level of the narrative's telling) is, according to Prince, "external to (not part of) any diegesis." One might think of this as what we commonly understand to be the narrator's level, the level at which exists a narrator who is not part of the story being told. The diegetic level or intradiegetic level is understood as the level of the characters, their thoughts and actions. The metadiegetic level or hypodiegetic level is that part of a diegesis that is embedded in another one and is often understood as a story within a story, as when a diegetic narrator himself/herself tells a story.
The classical distinction between the diegetic mode and the mimetic mode relates to the difference between the epos (or epic poetry) and drama. The "epos" relates stories by telling them through narration, while drama enacts stories through direct embodiment (showing). In terms of classical poetics, the cinema is an epic form that utilizes dramatic elements; this is determined by the technologies of the camera and editing. Even in a spatially and temporally continuous scene (mimicking the theatrical situation, as it were), the camera chooses where to look for us. In a similar way, editing causes us to jump from one place (and/or time) to another, whether it be somewhere else in the room, or across town. This jump is a form of narration; it is as if a narrator whispers to us: "meanwhile, on the other side of the forest". It is for this reason that the "story-world" in cinema is referred to as "diegetic"; elements that belong to the film's narrative world are diegetic elements.
"Diegetic", in the cinema, typically refers to the internal world created by the story that the characters themselves experience and encounter: the narrative "space" that includes all the parts of the story, both those that are and those that are not actually shown on the screen (such as events that have led up to the present action; people who are being talked about; or events that are presumed to have happened elsewhere or at a different time).
Thus, elements of a film can be "diegetic" or "non-diegetic". These terms are most commonly used in reference to sound in a film, but can apply to other elements. For example, an insert shot that depicts something that is neither taking place in the world of the film, nor is seen, imagined, or thought by a character, is a non-diegetic insert. Titles, subtitles, and voice-over narration (with some exceptions) are also non-diegetic.
Film sound and music
Sound in films is termed diegetic (termed source music by professionals in the radio, film and television industry) if it is part of the narrative sphere of the film. For instance, if a character in the film is playing a piano, or turns on a CD player, the resulting sound is diegetic. The cantina band sequence in the original Star Wars is an example of diegetic music in film, with the band playing instruments and swaying to the beat, as patrons are heard reacting to the second piece the band plays. If, on the other hand, music plays in the background but cannot be heard by the film's characters, it is termed non-diegetic or extradiegetic. Songs are commonly used in various film sequences to serve different purposes. They can be used to link scenes in the story where a character progresses through various stages toward a final goal. An example of this is in Rocky: Bill Conti's "Gonna Fly Now" plays non-diegetically as Rocky makes his way through his training regimen finishing on the top steps of the Philadelphia Museum of Art with his hands raised in the air. Mickey Mousing is an example of extradiegetic music.
This distinction may be toyed with in order to break the fourth wall. In the Archer episode "Sea Tunt Part 1", Cheryl begins to hear music that would otherwise seem to be non-diegetic. She even comments: "Just ignore it; it's non-diegetic."
In musical theater
In musical theater, as in film, the term "diegesis" refers to the context of a musical number in a work's theatrical narrative. In typical operas or operettas, musical numbers are non-diegetic; characters are not aware that they are singing. In contrast, when a song occurs literally in the plot, the number is considered diegetic. Diegetic numbers are often present in backstage musicals.
For example, in The Sound of Music, the song "Edelweiss" is diegetic, since the characters are aware they are singing. The character Maria is using the song to teach the children how to sing. In contrast, the song "How Do You Solve A Problem Like Maria?" is non-diegetic, since the musical material is external to the narrative.
In both the 1936 and the 1951 film versions of Show Boat, as well as in the original stage version, the song "Bill" is diegetic. The character Julie LaVerne sings it during a rehearsal in a nightclub. A solo piano (played onscreen) accompanies her, and the film's offscreen orchestra (presumably not heard by the characters) sneaks in for the second verse of the song. Julie's other song in the film, "Can't Help Lovin' Dat Man" is also diegetic. In the 1936 film, it is supposed to be an old folk song known only to blacks; in the 1951 film it is merely a song which Julie knows; however, she and the captain's daughter Magnolia are fully aware that Julie is singing. When Julie, Queenie, and the black chorus sing the second chorus of the song in the 1936 version, they are presumably unaware of any orchestral accompaniment, but in the 1951 film, when Magnolia sings and dances this same chorus, she does so to the accompaniment of two deckhands on the boat playing a banjo and a harmonica. Two other songs in the 1936 Show Boat are also diegetic: "Goodbye My Lady Love" (sung by the comic dancers Ellie and Frank), and "After the Ball", sung by Magnolia. Both are interpolated into the film, and both are performed in the same nightclub in which Julie sings Bill.
In the television series Buffy the Vampire Slayer, the episode entitled "Once More, with Feeling" toys with the distinction between diegetic and non-diegetic musical numbers. In this episode, the Buffy characters find themselves compelled to burst into song in the style of a musical. The audience is led to assume that this is a "musical episode", in which the characters are unaware that they are singing. However, it becomes clear that the characters are all too aware of their musical interludes, and that determining the supernatural cause of the singing is the focus of the episode's story.
In video games
In video games, "diegesis" comprises the narrative game world, its characters, objects and actions, which can be classified as "intra-diegetic". Status icons, menu bars and other UI elements that are not part of the game world itself can be considered "extra-diegetic"; a game character does not know about them, even though they may present crucial information to the player. In this respect, these elements can be considered part of the narration provided by the game itself, although this will usually be a separate and distinct voice from that of the story narrator, if there is one. A noted example of a diegetic interface in video games is that of the Dead Space series, in which the player-character is equipped with an advanced survival suit that projects holographic images to the character within the game's rendering engine; these projections also serve as the game's user interface, showing the player weapon selection, inventory management, and special actions that can be taken.
- Gerald Prince, A Dictionary of Narratology, 2003, University of Nebraska Press, ISBN 0-8032-8776-3
- An etext of Plato's Republic is available from Project Gutenberg. The most relevant section is the following: "You are aware, I suppose, that all mythology and poetry is a narration of events, either past, present, or to come? / Certainly, he replied. / And narration may be either simple narration, or imitation, or a union of the two? / [...] / And this assimilation of himself to another, either by the use of voice or gesture, is the imitation of the person whose character he assumes? / Of course. / Then in this case the narrative of the poet may be said to proceed by way of imitation? / Very true. / Or, if the poet everywhere appears and never conceals himself, then again the imitation is dropped, and his poetry becomes simple narration."(Plato, Republic, Book III.)
- Plato, Republic, Book III.
- See also Pfister (1977, 2-3) and Elam: "classical narrative is always oriented towards an explicit there and then, towards an imaginary "elsewhere" set in the past and which has to be evoked for the reader through predication and description. Dramatic worlds, on the other hand, are presented to the spectator as "hypothetically actual" constructs, since they are "seen" in progress "here and now" without narratorial mediation. [...] This is not merely a technical distinction but constitutes, rather, one of the cardinal principles of a poetics of the drama as opposed to one of narrative fiction. The distinction is, indeed, implicit in Aristotle's differentiation of representational modes, namely diegesis (narrative description) versus mimesis (direct imitation)" (1980, 110-111).
- Elam (1980, 110-111).
- Interview with a filmmaker on the diegetic role of music in film: https://archive.org/details/GregoryKurczynskiOnOutsightRadioHours
- Tach, Dave (13 March 2013). "Deliberately diegetic: Dead Space's lead interface designer chronicles the UI's evolution at GDC". Polygon. Retrieved 15 April 2015.
- Aristotle. 1974. "Poetics". Trans. S.H. Butcher. In Dramatic Theory and Criticism: Greeks to Grotowski. Ed. Bernard F. Dukore. Florence, KY: Heinle & Heinle. ISBN 0-03-091152-4. p. 31-55.
- Bunia, Remigius. 2010. "Diegesis and Representation: Beyond the Fictional World, on the Margins of Story and Narrative," Poetics Today 31.4, 679–720. doi:10.1215/03335372-2010-010.
- Elam, Keir. 1980. The Semiotics of Theatre and Drama. New Accents Ser. London and New York: Methuen. ISBN 0-416-72060-9.
- Pfister, Manfred. 1977. The Theory and Analysis of Drama. Trans. John Halliday. European Studies in English Literature Ser. Cambridge: Cambridge University Press, 1988. ISBN 0-521-42383-X.
- Plato. c. 373 BC. Republic. Retrieved from Project Gutenberg on 2 September 2007.
- Coyle, R. (2004). Pop goes the music track. Metro Magazine, 140, 94-95.
- An Introduction to Film Analysis: Technique and Meaning in Narrative Film: Michael Ryan, Melissa Lenos: 9780826430021: Amazon.com: Books. The Continuum International Publishing Group, n.d. Web. 3 May 2013
- The dictionary definition of diegesis at Wiktionary
Researchers at NASA's Jet Propulsion Laboratory (JPL), Pasadena, Calif., analyzing three years of radio tracking data from the Mars Global Surveyor spacecraft, concluded Mars has not cooled to a completely solid iron core; rather its interior is made up of either a completely liquid iron core or a liquid outer core with a solid inner core. Their results are published in the March 7, 2003, online issue of the journal Science.
"Earth has an outer liquid-iron core and solid inner core.
This may be the case for Mars as well," said Dr. Charles Yoder, a planetary scientist at JPL and lead author on the paper. "Mars is influenced by the gravitational pull of the sun. This causes a solid body tide with a bulge toward and away from the sun (similar in concept to the tides on Earth).
However, for Mars this bulge is much smaller, less than one centimeter. By measuring this bulge in the Mars gravity field we can determine how flexible Mars is. The size of the measured tide is large enough to indicate the core of Mars cannot be solid iron but must be at least partially liquid," he explained.
The team used Doppler tracking of a radio signal emitted by the Global Surveyor spacecraft to determine the precise orbit of the spacecraft around Mars. "The tidal bulge is a very small but detectable force on the spacecraft. It causes a drift in the tilt of the spacecraft's orbit around Mars of one-thousandth of a degree over a month," said Dr. Alex Konopliv, a planetary scientist at JPL and co-author on the paper.
The researchers combined information from Mars Pathfinder on the Mars precession with the Global Surveyor tidal detection to draw conclusions about the Mars core, according to Dr. Bill Folkner, another co-author on the paper at JPL.
The precession is the slow motion of the spin-pole of Mars as it moves along a cone in space (similar to a spinning top). For Mars it takes 170,000 years to complete one revolution. The precession rate indicates how much the mass of Mars is concentrated toward the center. A faster precession rate indicates a larger dense core compared to a slower precession rate.
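The quoted 170,000-year precession period is easier to compare with other planets when converted to an annual rate. The sketch below is illustrative arithmetic only, not part of the JPL analysis:

```python
# Convert the 170,000-year precession period quoted above into an annual
# rate: one full 360-degree revolution of the spin pole per period.
period_years = 170_000
deg_per_year = 360 / period_years        # degrees of precession per year
arcsec_per_year = deg_per_year * 3600    # 1 degree = 3600 arcseconds

print(f"{deg_per_year:.5f} deg/yr ~ {arcsec_per_year:.1f} arcsec/yr")
# prints: 0.00212 deg/yr ~ 7.6 arcsec/yr
```

For comparison, Earth's axial precession period is roughly 26,000 years, so Mars's spin pole precesses several times more slowly.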
In addition to detection of a liquid core for Mars, the results indicate the size of the core is about one-half the size of the planet, as is the case for Earth and Venus, and the core has a significant fraction of a lighter element such as sulfur.
In addition to measuring the Mars tide, Global Surveyor has been able to estimate the amount of ice sublimated (changed directly into a gaseous state) from one pole into the atmosphere and then accreted onto the opposite pole. "Our results indicate the mass change for the southern carbon-dioxide ice cap is 30 to 40 percent larger than the northern ice cap, which agrees well with the predictions of the global atmosphere models of Mars," said Yoder.
The amount of total mass change depends on assumptions about the shape of the sublimated portion of the cap. The largest mass exchange occurs if one assumes the cap change is uniform or flat over the entire cap, while the lowest mass exchange corresponds to a conically shaped cap change.
Pierre Janssen was a French astronomer who discovered helium in 1868. He was observing a solar eclipse in India when he noticed the yellow spectral emission lines of the element. An English astronomer by the name of Norman Lockyer observed the same spectral line and proposed the name helium after the Greek name for the sun, Helios. Helium can be observed at 587.49 nanometres in the spectrum of the chromosphere of the Sun.
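That 587.49 nm wavelength corresponds to yellow light, which is why the line stood out in the eclipse spectrum. A quick check using the standard relation E = hc/λ (the constants and the calculation below are illustrative, not from the article):

```python
# Photon energy of the yellow helium emission line at 587.49 nm.
h = 6.62607015e-34        # Planck constant (J*s)
c = 2.99792458e8          # speed of light (m/s)
wavelength_m = 587.49e-9  # wavelength quoted above, in metres

energy_j = h * c / wavelength_m            # E = h*c/lambda
energy_ev = energy_j / 1.602176634e-19     # convert joules to electronvolts

print(f"{energy_ev:.2f} eV")  # prints: 2.11 eV
```

Visible light spans roughly 1.65 to 3.1 eV, and an energy near 2.1 eV sits in the yellow band, consistent with the colour Janssen and Lockyer observed.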
It was first thought that helium could only exist in or on the Sun, because the spectral results could not be reproduced in the lab. That did not stop researchers from looking for it. In 1895, Sir William Ramsay discovered helium after treating cleveite, a uranium mineral, with mineral acids. Ramsay was looking for argon but, after separating nitrogen and oxygen from the gas liberated by sulfuric acid, noticed a bright-yellow line that matched the spectral line observed in the Sun. Ramsay sent samples of the gas to Sir William Crookes and Sir Norman Lockyer, who verified that it was helium. It was independently isolated from cleveite the same year by chemists Per Teodor Cleve and Abraham Langlet in Uppsala, Sweden, who were able to accurately determine its atomic weight. In a bit of irony, or opportunity lost, American geochemist William Francis Hillebrand found the element prior to Ramsay's discovery while testing a sample of the mineral uraninite. He attributed the lines to nitrogen and lost the claim to the discovery in the process.
Several interesting properties of helium have been discovered in the ensuing years. In 1907, Ernest Rutherford and Thomas Royds demonstrated that an alpha particle is actually a helium nucleus. In 1908, helium was first liquefied by Dutch physicist Heike Kamerlingh Onnes by cooling the gas below its boiling point of about 4.2 kelvin. He tried to solidify it by reducing the temperature further, but failed because helium does not have a triple point, a temperature at which the solid, liquid, and gas phases are at equilibrium. The element was eventually solidified in 1926 by his student Willem Hendrik Keesom, who managed to do so by subjecting helium to 25 atmospheres of pressure. Helium was one of the first elements found to exhibit superfluidity. In 1938, Russian physicist Pyotr Leonidovich Kapitsa discovered that helium-4 has almost no viscosity at temperatures near absolute zero (superfluidity). In 1972, the same phenomenon was observed in helium-3 by American physicists Douglas D. Osheroff, David M. Lee, and Robert C. Richardson.
Here on Universe Today we have a couple of great articles related to helium. One is about the possibility that white dwarfs can merge and form helium stars, and the other is about liquid metal helium. Astronomy Cast also offers a good episode about the energy spectra discussed in this article.
Introduced Species
The Everglades National Park was established to protect the diverse natural habitats of the region, which include freshwater marshes, hardwood hammocks, pinelands, cypress swamps, mangrove swamps, and estuaries. However, despite its status as a national park, the Everglades is threatened by introduced plants and animals.
Introduced species are organisms native to somewhere else that have been brought to new areas through human activities. Many introduced species have detrimental effects on native flora and fauna because the population controls of their home ranges, such as predators and disease, are absent. As their numbers grow out of control, these introduced species are often referred to as invasive species. Introductions began in the late 1800s and have escalated since that time; lacking predators and disease, these species continue to spread, outcompeting native species for food and space.
There are over 200 introduced species of plants that have been documented in the Everglades. These plants, including melaleuca (Melaleuca quinquenervia), Brazilian pepper (Schinus terebinthifolius), Australian pine (Casuarina equisetifolia), and Old World Climbing Fern (Lygodium microphyllum) displace native species and alter the natural habitat.
Also detrimental to the habitats and communities of the Everglades are introduced species of wildlife. People have released unwanted pets into the Everglades, including aquarium fishes, pythons, boa constrictors, parakeets, and parrots. Feral hogs also pose a major disturbance within the Everglades by digging up native vegetation and disturbing archeological sites.

Many species of fish originating from tropical and subtropical regions have been introduced into the freshwaters of the Everglades. Most can tolerate low to moderate salinities, allowing them to become established in brackish water estuaries. These fish have been introduced primarily through aquarium and aquaculture facilities, while some species have been released on purpose in hopes of establishing breeding populations. These fish include the Mayan cichlid (Cichlasoma urophthalmus), walking catfish (Clarias batrachus), Asian swamp eel (Monopterus albus), black acara (Cichlasoma bimaculatum), pike killifish (Belonesox belizanus), blue tilapia (Oreochromis aureus), spotted tilapia (Tilapia mariae), and oscar (Astronotus ocellatus).
Madrid Teacher Resources
Find Madrid educational ideas and activities
Commonly Confused Words Exercise
Accept or except? Advice or advise? Eminent or imminent? Which is which witch? In order to select the correct word to complete 20 sentences, learners get out their dictionaries and check the meaning and usage of the commonly confused pairs.
4th - 6th English Language Arts
Comparing the New Madrid and San Andreas Fault Zones
For this faults worksheet, students use an earthquake reference sheet to find the numbers for a modified Mercalli and Richter scale. They compare the San Andreas Fault zone and the New Madrid fault zone on the United States map. They...
7th - 10th Science
Modal verbs of probability express what could or may happen. The class will look at 15 sentences and then choose which verb of probability fits best in each phrase. Then they write four phrases using accurate verbs in the present tense....
4th - 8th English Language Arts
Big Grammar Book
With this comprehensive language arts resource in your arsenal, you'll never have to look for another grammar worksheet! Whether you're teaching kindergartners how to write the upper- and lower-case letters of the alphabet, or helping...
K - 8th English Language Arts CCSS: Adaptable
Do Journalists Shape or Report the News?
Analyze the presence of negative stereotypes and biased reporting in news media, and how this affects one's understanding of other cultures. Learners read newspaper excerpts and quotes from famous personalities to discuss the power of...
9th - 12th Social Studies & History CCSS: Adaptable
The Great Age of Exploration (1400-1550)
Delve into the Age of Exploration with this activity-packed resource! Complete with a pre-test, discussion questions and quiz for a 30-minute video on the period, map activities, timeline of discoveries, vocabulary, etc. this is a...
7th - 12th Social Studies & History CCSS: Adaptable
One Man’s Terrorist…Another Man’s Freedom Fighter
Why is there no universal definition for terrorism? What tactics and objectives do terrorist groups share? Through an engaging and collaborative activity, as well as using rich informational texts and guided notes, lead your class...
10th - 12th Social Studies & History CCSS: Adaptable
One hundred years ago, two teams of explorers raced to be the first to reach the South Pole. Roald Engelbregt Gravning Amundsen reached the Pole on December 14, 1911.
Thirty-four days later, on 17 January 1912, the Terra Nova Expedition led by Robert Falcon Scott arrived at the Pole in second place. At the same time, in East Antarctica, the Australasian Antarctic Expedition led by Douglas Mawson was searching for the South Magnetic Pole.
On their expeditions for King and country, Scott and Mawson carried out some of the first scientific studies in Antarctica. Scott's ill-fated expedition found fossils of Gondwanaland trees, showing that Antarctica was once covered in lush forests.
Even today, we tend to think of Antarctica as the last untouched wilderness preserved from human impact by International Treaty. However, despite its remoteness and vastness it is still affected by anthropogenic climate change.
A paper to appear in the January issue of Global Change Biology shows how the dominant plants in Antarctica have been affected by modern climate change. In a handful of coastal Antarctic 'oases' void of permanent ice cover, lush moss beds grow during the short summer season from December to February using melt water from streams and lakes. Up until now, measuring the seasonal growth rate of these plants has been extremely difficult and hence it was impossible to assess the impact of our changing climate.
This research, conducted by a team of environmental scientists from the University of Wollongong (UOW) and nuclear physicists from the Australian Nuclear Science and Technology Organisation (ANSTO), shows how the increased concentration of radiocarbon in the atmosphere resulting from nuclear weapons testing, mostly in the late 1950s and early 1960s (the so-called 'bomb spike'), can be used to accurately date the age of the moss shoots along their stems, in a similar way to tree rings.
Professor Sharon Robinson from UOW's Institute for Conservation Biology and Environmental Management (School of Biological Sciences) said the team found that that most of the plants were growing 50 years ago when nuclear testing was at its peak.
In some species the peak of the radiocarbon bomb spike was found just 15 mm from the top of the 50 mm shoot, suggesting that these plants may be more than 100 years old.
"Accurate dating along the moss stem allows us to determine the very slow growth rates of these mosses (ranging from 0.2 to 3.5 mm per year). Remarkably, these plants were already growing during the heroic age of Antarctic exploration. In terms of age, these mosses are effectively the old-growth forests of Antarctica -- in miniature," Professor Robinson said.
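The arithmetic behind those growth rates is straightforward: the depth of the 1960s bomb-spike peak below the growing tip, divided by the elapsed time, gives an average growth rate. In the sketch below, the 1965 peak year and a 2012 collection year are assumptions chosen for illustration, not values taken from the paper:

```python
BOMB_PEAK_YEAR = 1965  # assumed year of peak atmospheric radiocarbon
SAMPLE_YEAR = 2012     # assumed year the moss shoots were collected

def growth_rate_mm_per_year(peak_depth_mm):
    """Average growth rate implied by finding the radiocarbon
    bomb-spike peak a given distance below the growing tip."""
    return peak_depth_mm / (SAMPLE_YEAR - BOMB_PEAK_YEAR)

# Peak found 15 mm below the tip of a 50 mm shoot, as described above:
rate = growth_rate_mm_per_year(15)
print(f"{rate:.2f} mm/yr")  # prints: 0.32 mm/yr
```

A rate near 0.3 mm per year sits at the slow end of the reported 0.2 to 3.5 mm per year range, and implies that a 50 mm shoot began growing well over a century ago.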
Although increased temperature and precipitation in the polar regions due to climate change are predicted to increase growth rates, the scientists found that at some sites growth rates have declined since the 1980s. They suggest that this is likely due to moss beds drying out, which appears to be caused by increased wind speeds around Antarctica that are linked to the Antarctic ozone hole.
In the 100 years since the start of scientific research in Antarctica, contamination of Earth's atmosphere with increased radioactivity due to nuclear weapons testing has led to radiocarbon labelling of Antarctic plants.
"This has allowed scientists to show that climate change has made the driest continent on Earth an even harsher environment for plant life," Professor Robinson said.
Contact: Sharon Robinson
A graben is a valley with a distinct escarpment on each side caused by the displacement of a block of land downward. Graben often occur side-by-side with horsts. Horst and graben structures indicate tensional forces and crustal stretching.
Graben are produced from parallel normal faults, where the displacement of the hanging wall is downward, while that of the footwall is upward. The faults typically dip toward the center of the graben from both sides. Horsts are parallel blocks that remain between graben; the bounding faults of a horst typically dip away from the center line of the horst.
Single or multiple graben can produce a rift valley.
In many rifts the graben are asymmetric, with a major fault along only one of the boundaries, and these are known as half-graben. The polarity (throw direction) of the main bounding faults typically alternate along the length of the rift. The asymmetry of a half-graben strongly affects syntectonic deposition. Comparatively little sediment enters the half-graben across the main bounding fault, due to the effects of footwall uplift on the drainage systems. The exception is at any major offset in the bounding fault, where a relay ramp may provide an important sediment input point. Most of the sediment will enter the half-graben down the unfaulted hanging wall side (e.g. Lake Baikal).
- The Basin and Range Province of southwestern North America is an example of multiple horst/graben structures, including Death Valley, with Salt Lake Valley being the easternmost and Owens Valley being the westernmost.
- The Rio Grande Rift Valley in Colorado/New Mexico/Texas of the United States
- The Rhine valley to the north of Basel, Switzerland
- The Oslo graben around Oslo, Norway
- The East African Rift Valley
- The Saguenay Graben, Quebec, Canada
- The Narmada River valley in central India
- The lower Godavari River valley in southern India
- The Ottawa-Bonnechere Graben in Ontario and Quebec, Canada
- The Lambert Graben in Antarctica
- Gulf St Vincent in South Australia, Australia
- The Guanabara Bay in Rio de Janeiro, Brazil
- The Central Lowlands (Midland Valley) of Scotland
- Baikal Rift Zone, Siberia, Russia
- Lake Tahoe, California and Nevada, US
- Santa Clara Valley, California, US
- The Guatemala City valley, Guatemala
- Büyük Menderes Graben, Turkey
- The Unzen Graben in Japan
- The Republic Graben in Republic, Washington
One of the biggest knocks against cellphones is that they require small amounts of scarce elements: gallium, indium and arsenic, for example, which are both hard to obtain and expensive. But what if you could make a phone out of a more common element, like carbon?
Researchers are taking slow but sure steps toward building the innards of a cellphone out of carbon nanotubes, a structure that resembles a microscopic sheet of chicken wire rolled into a cylinder. These cylinders can be used to either conduct electricity or store energy.
At the Technical University of Denmark, Jakob Wagner and colleagues have found a better way to build carbon nanotubes that could lead to their use as a semiconductor, a key component of all electronic circuit parts found in both cellphones and laptops. Carbon nanotubes have properties of both a metal and a semiconductor, depending on how they are rolled.
“The breakthrough here is that we are able to control the production of nanotubes whether they are metallic or semiconducting,” Wagner said. “That’s important because if you want to use them in cellphones, we have to make sure they are either one or the other. The prospect is to use semiconducting carbon nanotubes as a substitute for gallium.”
Wagner published his work earlier this month in the Nature publication Scientific Reports.
The next step is to be able to produce large amounts of semiconducting carbon nanotubes that could be made into an electronic device, Wagner said.
“It will not be tomorrow, let’s say 10 years,” he said.
But at IBM, researchers like James Hannon are working to speed up that lab-to-prototype timescale. Hannon says that Wagner’s finding is an important step, but it needs to be replicated on larger-diameter carbon nanotubes.
"This is a nice scientific demonstration, but not in the range that would be used in a logic application," said Hannon, manager of IBM’s carbon electronics group in Yorktown Heights, N.Y. "I’d like to see if this technique could work for larger diameter tubes as well."
Last year, Hannon and his IBM colleagues announced they had built memory and microprocessing chips using carbon nanotubes. He said the tough thing is getting them to lie down in straight lines, but they overcame this obstacle by creating special grooves etched into the silicon chip surface and a bonding agent.
Hannon says the two challenges with carbon nanotubes is figuring out how to place them and how to separate the semiconducting ones from the metallic ones, which are thrown away. A separate team at North Carolina State University recently reported they were able to integrate carbon nanotubes into a flexible scaffold for a silicon-based battery that would last longer than existing lithium ion batteries.
Hannon says he expects carbon nanotubes to play a big role in electronic devices in a few more years of testing.
"Our mandate is that this stuff has to be ready pretty soon,” he said.
This article was originally published at Discovery News.
Redistribution of income and wealth
Redistribution of income and redistribution of wealth are respectively the transfer of income and of wealth (including physical property) from some individuals to others by means of a social mechanism such as taxation, charity, welfare, land reform, monetary policies, confiscation, divorce or tort law. The term typically refers to redistribution on an economy-wide basis rather than between selected individuals, and it always refers to redistributions from those who have more to those who have less.
The desirability and effects of redistribution are actively debated on ethical and economic grounds. The subject includes analysis of its rationales, objectives, means, and policy effectiveness. A 2003 survey among American economists found that 71.2% of them support redistribution, 20.4% oppose it, and 7.2% have mixed feelings.
The concept of wealth redistribution is old and goes back as far as recorded human history. In ancient times this took the form of a palace economy: such economies were centrally organized around the administration, so the dictator or pharaoh had both the ability and the right to decide who did (and did not) get special treatment.
Another early form of wealth redistribution occurred in the early American colonies under the leadership of William Bradford. Bradford recorded in his diary that this communal "common course" bred confusion, discontent, and distrust, and that the colonists looked upon it as a form of slavery.
Role in economic systems
Different types of economic systems feature vastly different levels of interventionism to redistribute income, depending on how unequal the initial distribution of income in their economies is. Free-market capitalist economies tend to feature high degrees of income redistribution, but Japan's government engages in much less redistribution because its initial wage distribution is much more equal. Likewise, the socialist planned economies of the former Soviet Union and Eastern bloc had very little income redistribution because private capital and land income, the major drivers of income inequality in capitalist systems, did not exist in these economies; and the government set wages in these economies.
Modern forms of redistribution
Today, income redistribution occurs in some form in most democratic countries. In a progressive income tax system, a high income earner will pay a higher tax rate than a low income earner. Another taxation-based method of redistributing income is the negative income tax.
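The mechanics of a progressive tax and a negative income tax can be made concrete with a small calculation. The brackets, rates, and thresholds below are invented for illustration only and do not correspond to any real tax code.

```python
def progressive_tax(income, brackets):
    """Tax owed under marginal brackets. `brackets` is a list of
    (threshold, rate) pairs in ascending order starting at 0; each rate
    applies only to the slice of income between its threshold and the
    next one."""
    tax = 0.0
    for i, (lo, rate) in enumerate(brackets):
        hi = brackets[i + 1][0] if i + 1 < len(brackets) else float("inf")
        if income > lo:
            tax += (min(income, hi) - lo) * rate
    return tax

def negative_income_tax(income, threshold, rate):
    """A negative income tax pays out a fraction of the shortfall
    between a person's income and a set threshold."""
    return max(0.0, (threshold - income) * rate)

# Hypothetical schedule: 0% up to 10k, 20% from 10k to 50k, 40% above.
brackets = [(0, 0.0), (10_000, 0.20), (50_000, 0.40)]

# Marginal rates make the *average* rate rise with income:
print(round(progressive_tax(20_000, brackets) / 20_000, 2))    # 0.1
print(round(progressive_tax(100_000, brackets) / 100_000, 2))  # 0.28

# Someone earning 8k, with a 12k threshold and a 50% phase-out, receives:
print(negative_income_tax(8_000, 12_000, 0.5))                 # 2000.0
```

Because only the income above each threshold is taxed at that bracket's rate, crossing into a higher bracket never reduces take-home pay — a common misconception the marginal structure avoids.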
Two other common types of governmental redistribution of income are subsidies and vouchers (such as food stamps). These transfer payment programs are funded through general taxation, but benefit the poor, who pay fewer or no taxes. While the persons receiving transfers from such programs may prefer to be directly given cash, these programs may be more palatable to society than cash assistance, as they give society some measure of control over how the funds are spent.
Wealth redistribution can be implemented through land reform that transfers ownership of land from one category of people to another, or through inheritance taxes or direct wealth taxes. Before-and-after Gini coefficients for the distribution of wealth can be compared.
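One way to make such a before-and-after comparison concrete is to compute the Gini coefficient directly from its mean-absolute-difference definition. The wealth figures below are invented purely for illustration.

```python
def gini(values):
    """Gini coefficient of a list of non-negative values: half the mean
    absolute difference over all ordered pairs, divided by the mean.
    0 means perfect equality; values approach 1 as one holder owns
    everything."""
    n = len(values)
    mean = sum(values) / n
    pair_diffs = sum(abs(a - b) for a in values for b in values)
    return pair_diffs / (2 * n * n * mean)

before = [10, 20, 30, 140]  # hypothetical wealth holdings
after = [30, 35, 45, 90]    # same total, after a transfer

print(gini(before))               # 0.5
print(gini(after) < gini(before)) # True
```

Note that the transfer leaves total wealth unchanged; only its dispersion falls, which is exactly what the before-and-after Gini comparison measures.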
The objectives of income redistribution are to increase economic stability and opportunity for the less wealthy members of society; it thus usually includes the funding of public services.
One basis for redistribution is the concept of distributive justice, whose premise is that money and resources ought to be distributed in such a way as to lead to a socially just, and possibly more financially egalitarian, society. Another argument is that a larger middle class benefits an economy by enabling more people to be consumers, while providing equal opportunities for individuals to reach a better standard of living. A further argument, seen for example in the work of John Rawls, is that a truly fair society would be organized so as to benefit the least advantaged, and any inequality would be permissible only to the extent that it benefits the least advantaged.
Some argue that wealth and income inequality are a cause of economic crises, and that reducing these inequalities is one way to prevent or ameliorate such crises, with redistribution thus benefiting the economy overall. This view was associated with the underconsumptionism school in the 19th century, now considered an aspect of some schools of Keynesian economics; it has also been advanced, for different reasons, by Marxian economics. It was advanced in the US in the 1920s in particular by Waddill Catchings and William Trufant Foster. There is currently a great debate concerning the extent to which the world's extremely rich have become richer over recent decades; Thomas Piketty's Capital in the Twenty-First Century is at the forefront of that debate and has been critiqued in publications such as The Economist.
Economic effects of inequality
Using statistics from 23 developed countries and the 50 states of the US, British researchers Richard G. Wilkinson and Kate Pickett show a correlation between income inequality on the one hand and, on the other, higher rates of health and social problems (obesity, mental illness, homicides, teenage births, incarceration, child conflict, drug use) and lower rates of social goods (life expectancy, educational performance, trust among strangers, women's status, social mobility, even numbers of patents issued per capita). The authors argue that inequality leads to these social ills through the psychosocial stress and status anxiety it creates.
A 2011 report by the International Monetary Fund by Andrew G. Berg and Jonathan D. Ostry found a strong association between lower levels of inequality and sustained periods of economic growth. Developing countries (such as Brazil, Cameroon, Jordan) with high inequality have "succeeded in initiating growth at high rates for a few years" but "longer growth spells are robustly associated with more equality in the income distribution."
The socialist economists John Roemer and Pranab Bardhan criticize redistribution via taxation in the context of Nordic-style social democracy, highlighting its limited success at promoting relative egalitarianism and its lack of sustainability. They point out that social democracy requires a strong labor movement to sustain its heavy redistribution, and that it is unrealistic to expect such redistribution to be feasible in countries with weaker labor movements. They note that, even in the Scandinavian countries, social democracy has been in decline since the labor movement weakened. Instead, Roemer and Bardhan argue that changing the patterns of enterprise ownership and market socialism, obviating the need for redistribution, would be more sustainable and effective at promoting egalitarianism.
Marxian economists argue that social democratic reforms - including policies to redistribute income - such as unemployment benefits and high taxes on profits and the wealthy create more contradictions in capitalism by further limiting the efficiency of the capitalist system via reducing incentives for capitalists to invest in further production. In the Marxist view, redistribution cannot resolve the fundamental issues of capitalism - only a transition to a socialist economy can.
- Economic policy
- Poverty reduction
- Robin Hood
- Robin Hood tax
- Social inequality
- Redistribution (cultural anthropology)
- Wealth concentration
Discover what myths reveal about ancient and contemporary cultures.
- Grades: PreK–K, 1–2, 3–5, 6–8, 9–12
Describes a lesson in identifying and charting the characteristics of a myth through reading and making inferences.
Allow your class to take a journey into the world of Greek myths. Students learn vocabulary from Ancient Greece that will help them to understand roots of modern English words.
Mythology is not my strong suit, so when I stumbled across a Greek mythology readers theater book, I was ecstatic. Read on to find out how to incorporate this activity into your classroom.
Presents a lesson using a mythological hero chart. Students chart character traits based on readings of Ancient Greek myths.
Proposes a lesson in which students write journalist pieces about events from Greek myths.
In this lesson unit on ancient Greece, students compare three myths and create their own original myth.
Students combine their journalistic skills with their knowledge of Greek myth to write a fictional article about mythical characters taking over modern L.A.
Online Learning Activities
This four-step workshop hosted by an award-winning myth writer offers writing strategies and exercises to help students craft successful myths.
In this blend of Greek mythology and modern pop culture, things become clearer when Percy discovers he is the son of Poseidon. But trouble starts all over again when Percy is sent on a quest to prevent war on Mount Olympus.
Three exciting learning activities to complement author Rick Riordan's modern myth
Use these 15 questions to help students get more out of the experience of reading Rick Riordan's book, Sea of Monsters.
Two drawing assignments and a memory activity to follow reading the book by Rick Riordan
Students choose a myth from Mary Pope Osbourne's book to dramatize.
Booktalk for author Ross Collins' Medusa Jones, the story of a young outcast named Medusa, complete with hilarious illustrations!
Find folk tales and legends from all over the world in this book list for grades PreK-5.
These books for middle- and high-school students put a new spin on fairy tales and legends.
Our galaxy's dark matter is clumpier than once thought, according to a new computer simulation.
The model, created by one of the most powerful supercomputers in the world, shows that the spherical halo of dark matter that envelopes the Milky Way contains dense clumps and streams of the mysterious stuff, even in the neighborhood of our solar system.
"In previous simulations, this region came out smooth, but now we have enough detail to see clumps of dark matter," said researcher Piero Madau, an astrophysicist at the University of California, Santa Cruz.
Dark matter, which scientists can only detect by noting its gravitational effect, is thought to make up about 85 percent of the matter in the universe. Its composition remains a mystery, though some scientists think it's made up of hypothetical particles called WIMPs (weakly interacting massive particles), which could annihilate each other and emit gamma rays when they collide.
The new simulation, described in the Aug. 7 issue of the journal Nature, implies that dark matter could be detected by the recently launched Gamma-ray Large Area Space Telescope (GLAST).
"That's what makes this exciting," Madau said. "Some of those clumps are so dense they will emit a lot of gamma rays if there is dark matter annihilation, and it might easily be detected by GLAST."
So far, though many teams have been looking for WIMP particles, no one has conclusively detected them.
"There are several candidate particles for cold dark matter, and our predictions for GLAST depend on the assumed particle type and its properties," said Juerg Diemand, a postdoctoral fellow at UCSC who led the new research. "For typical WIMPs, anywhere from a handful to a few dozen clear signals should stand out from the gamma-ray background after two years of observations. That would be a big discovery for GLAST."
The model took about one month to run on the Jaguar supercomputer at Oak Ridge National Laboratory in Tennessee. By following the gravitational interactions of more than a billion parcels of dark matter over 13.7 billion years, the computer could predict how the dark matter in the universe developed over time based on leading theories of how dark matter interacts.
"It simulates the dark matter distribution from near the time of the Big Bang until the present epoch, so practically the entire age of the universe, and focuses on resolving the halo around a galaxy like the Milky Way," Diemand said.
The research was funded by the U.S. Department of Energy, NASA and the Swiss National Science Foundation.
Definition of Saxon in English:
1A member of a people that inhabited parts of central and northern Germany from Roman times, many of whom conquered and settled in much of southern England in the 5th-6th centuries.
- There was relative peace with British rule over the western half of the country and Germanic rule in the east for the next fifty years, and it seems likely that the Britons may even have regained some areas of central England from the Saxons.
- Faced with invasion by a coalition of Picts and Saxons, the Roman citizens of Britain appeal to the Emperor for help; but Honorius is in no position to aid them.
- When Charlemagne conquered the Saxons, he extended his empire to the borders of Viking realms: specifically, to Friesland in southern Denmark.
1.1A native of modern Saxony in Germany.
adjective
1Relating to the Anglo-Saxons, their language (Old English), or their period of dominance in England (5th-11th centuries).
- Wales is contiguous to England and had been the subject of Saxon raids for centuries.
- For much of the Saxon period it was probably fairly wide and marshy, perhaps acting as a separator between Westwyk and Conesford.
- Across much of midland England wide-ranging changes took place in the countryside in the late Saxon period.
1.1Relating to or denoting the style of early Romanesque architecture preceding the Norman in England.
- The site develops with the construction of an aisled Late Saxon timber hall, which was one of King Cnut's royal manors.
- Within the church, parts of the Saxon north wall can be seen above the Norman arcade.
- On the outside of the north wall, (about a third of the way down the Nave), the remains of a Saxon doorway can be seen, complete with round headed arch and jambs of flint.
Derivatives: Saxonize (also Saxonise), pronunciation /ˈsaks(ə)nʌɪz/, verb
Words that rhyme with Saxon: flaxen, Jackson, klaxon, Sachsen, waxen
This concept introduces students to inductive reasoning and provides many examples of inductive reasoning.
Inductive Reasoning from Patterns Interactive
This video gives more detail about the mathematical principles presented in Inductive Reasoning.
This video shows how to work step-by-step through one or more of the examples in Inductive Reasoning.
A list of student-submitted discussion questions for Inductive Reasoning from Patterns.
To activate prior knowledge, make personal connections, reflect on key concepts, encourage critical thinking, and assess student knowledge on the topic prior to reading using a Quickwrite.
To stress understanding of a concept by summarizing the main idea and applying that understanding to create visual aids and generate questions and comments using a Concept Matrix.
To activate prior knowledge, to generate questions about a given topic, and to organize knowledge using a KWL Chart.
Learn how inductive reasoning is used throughout the sciences, from medicine to zoology.
Symbolic notation used in logic, inductive reasoning (patterns), and deductive reasoning are the focus of this study guide.
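Inductive reasoning from patterns — conjecturing the next term of a sequence from the terms seen so far — can even be demonstrated in a few lines of code. The sketch below is our own illustration, not part of any of the resources above: it takes successive finite differences until they are constant, then extrapolates. This works for any polynomial pattern, and, like all inductive conjectures, it can still be wrong for non-polynomial ones.

```python
def next_term(seq):
    """Conjecture the next term of a sequence by taking successive
    finite differences until a constant row appears, then summing the
    last entry of each row back up."""
    rows = [list(seq)]
    while len(set(rows[-1])) > 1:
        prev = rows[-1]
        rows.append([b - a for a, b in zip(prev, prev[1:])])
    guess = 0
    for row in reversed(rows):
        guess += row[-1]
    return guess

print(next_term([2, 4, 6, 8]))       # 10  (arithmetic pattern)
print(next_term([1, 4, 9, 16, 25]))  # 36  (perfect squares)
```

Trying this on [1, 2, 4, 8] returns 15 rather than 16 — a nice classroom reminder that an inductive conjecture is only as good as the pattern it assumes.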
4.03125 | Brood parasites are organisms that rely on others to raise their young. The strategy appears among insects, fishes, and birds. The brood parasite manipulates a host, either of the same or of another species, to raise its young as if it were its own.
Brood parasitism relieves the parasitic parents from the investment of rearing young or building nests for the young, enabling them to spend more time on other activities such as foraging and producing further offspring. Bird parasite species mitigate the risk of egg loss by distributing eggs amongst a number of different hosts. As this behaviour damages the host, it often results in an evolutionary arms race between parasite and host.
In many monogamous bird species there are extra-pair matings, in which males outside the pair bond sire offspring; such matings allow a male to father young while escaping the parental investment of raising them. This form of cuckoldry is taken a step further when females of the goldeneye, Bucephala clangula, lay their eggs in the nests of other individuals. Intraspecific brood parasitism is seen in a number of duck species, where females often lay their eggs in the nests of others.
Interspecific brood-parasites include the Old World cuckoo, cowbirds, black-headed ducks, and some New World cuckoos in the Americas, and indigobirds, whydahs, and honeyguides in Africa. Seven independent origins of obligate interspecific brood parasitism in birds have been proposed. While there is still some controversy over when and how many origins of interspecific brood parasitism have occurred, recent phylogenetic analyses suggest two origins in Passeriformes (once in New World cowbirds: Icteridae, and once in African Finches: Viduidae); three origins in Old World and New World cuckoos (once in Cuculinae, Phaenicophaeinae, and in Neomorphinae-Crotophaginae); a single origin in Old World honeyguides (Indicatoridae); and in a single species of waterfowl, the black-headed duck (Heteronetta atricapilla).
Most avian brood parasites are specialists which will only parasitize a single host species or a small group of closely related host species, but four out of the five parasitic cowbirds are generalists, which parasitize a wide variety of hosts; the brown-headed cowbird has 221 known hosts. They usually only lay one egg per nest, although in some cases, particularly the cowbirds, several females may use the same host nest.
The common cuckoo presents an interesting case in which the species as a whole parasitizes a wide variety of hosts, including the reed warbler and dunnock, but individual females specialize in a single species. Genes regulating egg coloration appear to be passed down exclusively along the maternal line, allowing females to lay mimetic eggs in the nest of the species they specialize in. Females generally parasitize nests of the species which raised them. Male common cuckoos will fertilize females of all lines, maintaining sufficient gene flow among the different maternal lines.
The mechanisms of host selection by female cuckoos are somewhat unclear, though several hypotheses have been suggested in an attempt to explain the choice. These include genetic inheritance of host preference, host imprinting on young birds, returning to the place of birth and then choosing a host randomly ("natal philopatry"), choice based on preferred nest site (nest-site hypothesis), and choice based on preferred habitat (habitat-selection hypothesis). Of these, the nest-site and habitat-selection hypotheses have received the most support from experimental analysis.
Adaptations for parasitism
Among specialist avian brood parasites, mimetic eggs are a nearly universal adaptation. There is even some evidence that the generalist brown-headed cowbird may have evolved an egg coloration mimicking a number of its hosts.
Most avian brood parasites will remove a host egg when they lay one of their own in a nest. Depending upon the species, this can happen either in the same visit to the host nest or in a separate visit before or after the parasitism. This both prevents the host species from realizing their nest has been parasitized and reduces competition for the parasitic nestling once it hatches.
Most avian brood parasites have very short egg incubation periods and rapid nestling growth. This gives the parasitic nestling a head start on growth over its nestmates, allowing it to outcompete them. In many brood parasites, such as cuckoos and honeyguides, the short external incubation period is due to internal incubation of the egg before laying — up to 24 hours longer in cuckoos than in their hosts. Some non-parasitic cuckoos also have longer internal incubation periods, suggesting that this trait was not an adaptation following brood parasitism, but one that predisposed birds to become brood parasites. In cases where the host nestlings are significantly smaller than the parasite nestling, the host nestlings often starve to death. Some brood parasites eliminate all their nestmates shortly after hatching, either by ejecting them from the nest or by killing them with sharp mandible hooks which fall off after a few days.
It has long been a question why the majority of the hosts of brood parasites care for the nestlings of their parasites. Not only do these brood parasites usually differ significantly from host young in size and appearance, but they also very probably reduce the reproductive success of their hosts. The "mafia hypothesis" emerged from studies attempting to answer this question. The hypothesis centers on host manipulation induced by behaviors of the brood parasite: upon detecting and rejecting a brood parasite's egg, the host suffers retaliation — its nest is depredated or destroyed and its nestlings injured or killed. This threatening response indirectly strengthens selective pressures favoring aggressive parasite behavior, which may result in positive feedback between mafia-like parasites and compliant host behaviors.
There are two avian species that have been speculated to portray this mafia-like behavior: the brown-headed cowbird of North America, Molothrus ater, and the great spotted cuckoo of Europe, Clamator glandarius. The great spotted cuckoo lays the majority of its eggs in the nests of the European magpie, Pica pica. It has been observed that the great spotted cuckoo repeatedly visits the nests that it has parasitised, a precondition for the mafia hypothesis. An experiment was run by Soler et al. from April to July 1990 – 1992 in the high-altitude plateau Hoya de Guadix, Spain. They observed the effects of the removal of cuckoo eggs on the reproductive success of the magpie and measured the magpie's reaction; the egg was considered accepted if it remained in the nest, ejected if gone in between visits, or abandoned if the eggs were present but cold. If any nest contents were gone between consecutive visits, the nests were considered to have been depredated. The magpie's reproductive success was measured by number of nestlings that survived to their last visit, which was just before the nestling had been predicted to fledge from the nest. The results from these experiments show that after the removal of the parasitic eggs from the great spotted cuckoo, these nests are predated at much higher rates than those where the eggs were not removed. Through the use of plasticine eggs that model those of the magpie, it was confirmed that the nest destruction was caused by the great spotted cuckoo. This destruction benefits the cuckoo, for the possibility of re-nesting by the magpie allows another chance for the cuckoo egg to be accepted.
Another similar experiment was carried out in 1996–2002 by Hoover et al. on the relationship between the parasitic brown-headed cowbird and a host, the prothonotary warbler, Protonotaria citrea. In their experiment, the researchers manipulated cowbird egg removal and cowbird access to the predator-proof nests of the warbler. They found that 56% of egg-ejected nests were depredated, compared with 6% of non-ejected nests, when cowbirds were not prevented from reaching the hosts' nests. Of the nests rebuilt by hosts that had previously been depredated, 85% were destroyed. The number of young produced by hosts that ejected eggs dropped 60% compared to those that accepted the cowbird eggs.
Under the nest-site hypothesis, a female cuckoo selects a group of host species with nest sites and egg characteristics similar to her own. She monitors this population of potential hosts and chooses a nest from within the group.
Research of nest collections has illustrated a significant level of similarity between cuckoo eggs and typical eggs of the host species. A low percentage of parasitized nests were shown to contain cuckoo eggs not corresponding to the specific host egg morph. In these mismatched nests a high percent of the cuckoo eggs were shown to correlate to the egg morph of another host species with similar nesting sites. This has been pointed to as evidence for nest- site selection.
A criticism of the hypothesis is that it provides no mechanism by which nests are chosen, or which cues might be used to recognize such a site.
Parental-care parasitism emphasizes the relationship between host and parasite in brood parasitism. It occurs when individuals raise the offspring of other, unrelated individuals: the hosts are the parents of the brood, and the parasites are individuals that take advantage of either the nest or the eggs within the family structure. Such dynamics arise when parasites attempt to reduce their parental investment so they can invest the spare energy into other endeavors.
Cost of the hosts
Given the detrimental effects avian brood parasites can have on their hosts' reproductive success, host species have come up with various defenses against this unique threat. Given that the cost of egg removal concurrent with parasitism is unrecoverable, the best defense for hosts is avoiding parasitism in the first place. This can take several forms, including selecting nest sites which are difficult to parasitize, starting incubation early so they are sitting on the nests when parasites visit them early in the morning, and aggressive territorial defense. Birds nesting in aggregations can also benefit from group defense.
The hosts reject offspring
The host may ultimately end up raising the parasite's offspring after returning from foraging. Once parasitism has occurred, the next-best defense is to eject the parasitic egg. According to parental investment theory, hosts may adopt defenses to protect their own eggs if they can distinguish which eggs are not theirs. Recognition of parasitic eggs is based on identifying differences in pattern or changes in the number of eggs. Ejection can be done by grasping, if the host has a large enough beak, or otherwise by puncturing the egg. Ejection behavior has costs, however, especially when host species have to deal with mimetic eggs: a host will inevitably mistake one of its own eggs for a parasite egg on occasion and eject it, and hosts sometimes damage their own eggs while trying to eject a parasite egg.
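The trade-off sketched in this section — ejecting parasitic eggs avoids raising a parasite, but recognition errors cost a host its own eggs — can be illustrated with a toy expected-payoff calculation. All probabilities and costs below are invented for illustration; they are not measured values from any study cited here.

```python
def expected_own_young(p, detect, error, clutch=4, cost_if_raised=3):
    """Expected number of a host's own fledged young, under a toy model.
      p              - probability the nest is parasitized
      detect         - P(parasite egg is ejected | nest parasitized)
      error          - expected fraction of own eggs ejected by mistake
      clutch         - number of host eggs laid
      cost_if_raised - own young lost whenever a parasite chick is raised
    """
    own = clutch * (1 - error)                 # recognition errors cost own eggs
    lost = p * (1 - detect) * cost_if_raised   # undetected parasites get raised
    return own - lost

# An accepter never ejects (detect=0) but also never errs (error=0).
accepter_common = expected_own_young(p=0.4, detect=0.0, error=0.0)
ejector_common = expected_own_young(p=0.4, detect=0.9, error=0.1)
accepter_rare = expected_own_young(p=0.05, detect=0.0, error=0.0)
ejector_rare = expected_own_young(p=0.05, detect=0.9, error=0.1)

print(ejector_common > accepter_common)  # True: ejection pays when parasitism is common
print(ejector_rare > accepter_rare)      # False: errors outweigh rare parasitism
```

Under these made-up numbers, ejector behavior beats acceptance when parasitism is common but loses when it is rare — one way to see why rejection behavior might persist only where parasitism pressure is high.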
Among hosts that do not eject parasitic eggs, some will abandon parasitized nests and start over again. However, at high enough parasitism frequencies this becomes maladaptive, as the new nest will most likely be reparasitized. Other behaviors include modifying the nest to exclude the parasitic egg, either by weaving over the egg or, in some cases, by rebuilding a new nest over the existing one. For instance, American coots may kick a parasite's eggs out, or build a new nest beside the brood nest, where the parasite's chicks starve for lack of food.
Costs to the parasites
While parental-care parasitism can significantly increase the breeding output of the parasite, only about half of parasitic eggs survive, and parasitism has significant drawbacks for the individual parasite as well. For example, the parasitic offspring of the bearded tit, Panurus biarmicus, tend to develop much more slowly than offspring in non-parasitic nests and often do not reach full maturity. Parasitic females can adopt either floater or nesting tactics. Floater females are entirely dependent on others to raise their eggs because they have no nests of their own; they reproduce significantly less, either because the hosts reject their 'intruder' eggs or because they miss the egg-laying period of the bird they are trying to pass their eggs to. Nesting females, which have their own nests, may also behave parasitically in temporary situations, such as sudden loss of a nest, or when they lay surplus eggs that overload their parental-care capacity.
The hosts raise offspring
Sometimes hosts are completely unaware that they are caring for a bird that is not their own. This most commonly occurs because the host cannot differentiate the parasitic eggs from its own. It may also occur when hosts temporarily leave the nest after laying their eggs; the parasites lay their own eggs into these nests, so their nestlings share the food provided by the host. It may occur in other situations as well: for example, female eiders prefer to lay eggs in nests already containing one or two eggs of others, because the first egg is the most vulnerable to predators. The presence of others' eggs reduces the probability that a predator will attack her egg when a female eider leaves the nest after laying her first egg.
Sometimes, the parasitic offspring kills the host nest-mates during competition for resources. As an example, the parasite offspring of the cowbird chick kill the host nest-mates if food intake for each of them is low, but do not do so if the food intake is adequate, as a result of their interactions with co-inhabitants of the nest.
A mochokid catfish of Lake Tanganyika, Synodontis multipunctatus, is a brood parasite of several mouthbrooding cichlid fishes. The catfish eggs are incubated in the host's mouth, and in the manner of cuckoos hatch before the host's own eggs. The young catfish eat the host fry inside the host's mouth, effectively taking up virtually the whole of the host's parental investment.
A cyprinid minnow, Pungtungia herzi is a brood parasite of the Serranid freshwater perch Siniperca kawamebari, which live in the south of the Japanese islands of Honshu, Kyushu and Shikoku, and in South Korea. Host males guard territories against intruders during the breeding season, creating a patch of reeds as a spawning site or "nest". Females (one or more per site) visit the site to lay eggs, which the male then defends. The parasite's eggs are smaller and stickier than the host's. 65.5% of host sites were parasitised in a study area.
There are many different types of cuckoo bees, all of which lay their eggs in the nest cells of other bees, but they are normally referred to as kleptoparasites, rather than as brood parasites, because the immature stages are almost never fed directly by the adult hosts. Examples of cuckoo bees are Coelioxys rufitarsis, Melecta separata, Bombus bohemicus, Nomada and Epeoloides.
Kleptoparasitism in insects is not restricted to bees; several lineages of wasp including most of the Chrysididae, the cuckoo wasps, are kleptoparasites. The cuckoo wasps lay their eggs in the nests of other wasps, such as those of the potters and mud daubers.
Among the few exceptions, which are indeed fed by adult hosts, are cuckoo bumblebees in the subgenus Psithyrus. Their queens kill and replace the existing queen of a colony of the host species then use the host workers to feed their brood.
An example of a true brood-parasitic wasp is Polistes sulcifer. This species of paper wasp has lost the ability to build their own nests, and relies on its host species, Polistes dominula, to raise its brood, with the adult hosts feeding the parasite larvae directly, unlike typical kleptoparasitic insects.
In the bee species Euglossa cordata, a dominant reproductive female will display brood parasitism by replacing her daughter's eggs with her own, diverting her resources from producing grand-offspring to producing more of her own offspring. In addition, to increase her longevity and fecundity, a mother will also eat her daughter's eggs to gain more nutrients.
Host insects are sometimes tricked into bringing offspring of another species into their own nests, as is the case with the parasitic butterfly, Phengaris rebeli, and the host ant Myrmica schencki. The butterfly larvae release chemicals that confuse the host ant into believing that the P. rebeli larvae are actually ant larvae. Thus, the M. schencki ants bring back the P. rebeli larvae to their nests.
In physics, a charge carrier is a particle free to move, carrying an electric charge, especially the particles that carry electric charges in electrical conductors. Examples are electrons, ions and holes. In a conducting medium, an electric field can exert force on these free particles, causing a net motion of the particles through the medium; this is what constitutes an electric current. In different conducting media, different particles serve to carry charge:
- In metals, the charge carriers are electrons. One or two of the valence electrons from each atom are able to move about freely within the crystal structure of the metal. The free electrons are referred to as conduction electrons, and the cloud of free electrons is called a Fermi gas.
- In electrolytes, such as salt water, the charge carriers are ions, atoms or molecules that have gained or lost electrons so they are electrically charged. Atoms that have gained electrons so they are negatively charged are called anions, atoms that have lost electrons so they are positively charged are called cations. Cations and anions of the dissociated liquid also serve as charge carriers in melted ionic solids (see e.g. the Hall–Héroult process for an example of electrolysis of a melted ionic solid.) Proton conductors are electrolytic conductors employing positive hydrogen ions as carriers.
- In a plasma, an electrically charged gas which is found in electric arcs through air, neon signs, and the sun and stars, the electrons and cations of ionized gas act as charge carriers.
- In a vacuum, free electrons can act as charge carriers. These are sometimes called cathode rays. In a vacuum tube, the mobile electron cloud is generated by a heated metal cathode, by a process called thermionic emission.
- In semiconductors (the material used to make electronic components like transistors and integrated circuits), in addition to electrons, the travelling vacancies in the valence-band electron population (called "holes") act as mobile positive charges and are treated as charge carriers. Electrons and holes are thus the charge carriers in semiconductors.
It can be seen that in some conductors, such as ionic solutions and plasmas, there are both positive and negative charge carriers, so an electric current in them consists of the two polarities of carrier moving in opposite directions. In other conductors, such as metals, there are only charge carriers of one polarity, so an electric current in them just consists of charge carriers moving in one direction.
Charge carriers in semiconductors
There are two recognized types of charge carriers in semiconductors. One is electrons, which carry a negative electric charge. In addition, it is convenient to treat the traveling vacancies in the valence band electron population (holes) as the second type of charge carrier, which carry a positive charge equal in magnitude to that of an electron.
Carrier generation and recombination
When an electron meets a hole, the two recombine and these free carriers effectively vanish. The energy released can be either thermal, heating up the semiconductor (thermal recombination, one of the sources of waste heat in semiconductors), or released as photons (optical recombination, used in LEDs and semiconductor lasers). Recombination means that an electron which has been excited from the valence band to the conduction band falls back into an empty state in the valence band, known as a hole. A hole is the empty state created in the valence band when an electron gains enough energy to cross the energy gap.
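The energy bookkeeping of optical recombination lends itself to a quick calculation: the emitted photon's energy is approximately the band-gap energy, so its wavelength is λ = hc/E_g. The sketch below is only an illustration — the band-gap figure (about 1.42 eV for GaAs at room temperature) is a standard textbook value, not taken from this article, and the function name is our own.

```python
# Optical recombination: photon energy ~ band-gap energy E_g,
# so the emission wavelength is lambda = h*c / E_g.
# h*c is expressed here in the convenient units of eV*nm.
H_C_EV_NM = 1239.84  # Planck constant times speed of light, eV*nm

def emission_wavelength_nm(band_gap_ev: float) -> float:
    """Approximate emission wavelength (nm) for a direct-gap semiconductor."""
    return H_C_EV_NM / band_gap_ev

# GaAs (E_g ~ 1.42 eV) recombines in the near infrared, around 870 nm:
wavelength = emission_wavelength_nm(1.42)
```

Wider-gap materials give correspondingly shorter wavelengths, which is why band-gap choice sets an LED's color.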
Majority and minority carriers
The more abundant charge carriers are called majority carriers, which are primarily responsible for current transport in a piece of semiconductor. In n-type semiconductors they are electrons, while in p-type semiconductors they are holes. The less abundant charge carriers are called minority carriers; in n-type semiconductors they are holes, while in p-type semiconductors they are electrons.
In an intrinsic semiconductor, which does not contain any impurity, the concentrations of both types of carriers are ideally equal. If an intrinsic semiconductor is doped with a donor impurity then the majority carriers are electrons; if the semiconductor is doped with an acceptor impurity then the majority carriers are holes.
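In thermal equilibrium the two carrier concentrations are linked by the mass-action law, n·p = n_i², so doping that raises one carrier type suppresses the other. The sketch below assumes full donor ionization and uses the common textbook value n_i ≈ 10^10 cm⁻³ for silicon at 300 K; the donor density is an arbitrary illustrative number.

```python
# Mass-action law for a non-degenerate semiconductor in equilibrium:
#   n * p = n_i**2
N_I_SILICON = 1.0e10  # intrinsic carrier concentration of Si at 300 K, cm^-3 (textbook value)

def minority_hole_density(donor_density: float, n_i: float = N_I_SILICON) -> float:
    """Minority-hole density in an n-type sample, assuming n ~ N_D >> n_i."""
    n = donor_density     # majority electrons ~ fully ionized donors, cm^-3
    return n_i ** 2 / n   # minority holes, cm^-3

# Doping at 1e16 cm^-3 leaves only ~1e4 holes per cm^3 --
# twelve orders of magnitude fewer than the majority electrons:
p = minority_hole_density(1.0e16)
```

The enormous majority/minority ratio is what makes the "majority carrier" picture of device operation useful in the first place.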
Minority carriers play an important role in bipolar transistors and solar cells. Their role in field-effect transistors (FETs) is a bit more complex: for example, a MOSFET has both p-type and n-type regions. The transistor action involves the majority carriers of the source and drain regions, but these carriers traverse the body of the opposite type, where they are minority carriers. However, the traversing carriers hugely outnumber their opposite type in the transfer region (in fact, the opposite type carriers are removed by an applied electric field that creates an inversion layer), so conventionally the source and drain designation for the carriers is adopted, and FETs are called "majority carrier" devices.
Free carrier concentration
Free carrier concentration is the concentration of free carriers in a doped semiconductor. It is similar to the carrier concentration in a metal and for the purposes of calculating currents or drift velocities can be used in the same way. Free carriers are electrons (or holes) which have been introduced directly into the conduction band (or valence band) by doping and are not promoted thermally. For this reason electrons (holes) will not act as double carriers by leaving behind holes (electrons) in the other band.
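As a rough illustration of using a carrier concentration to compute drift velocities "in the same way" as in a metal, the drift velocity follows from J = n·q·v_d. The numbers below — the conduction-electron density of copper and a 1 A current through a 1 mm² wire — are standard textbook figures chosen only for the example.

```python
# Drift velocity from current density:  J = n * q * v_d  =>  v_d = J / (n * q)
Q_E = 1.602e-19     # elementary charge, C
N_COPPER = 8.5e28   # conduction-electron density of copper, m^-3 (textbook value)

def drift_velocity(current_a: float, area_m2: float,
                   n: float = N_COPPER, q: float = Q_E) -> float:
    """Average drift speed (m/s) of the carriers sustaining a given current."""
    current_density = current_a / area_m2   # A/m^2
    return current_density / (n * q)

# 1 A through a 1 mm^2 copper wire: the electrons drift at only ~0.07 mm/s,
# even though the electrical signal itself propagates near light speed.
v = drift_velocity(1.0, 1.0e-6)
```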
The Investiture Controversy or Investiture Contest was the most significant conflict between Church and state in medieval Europe. In the 11th and 12th centuries, a series of popes challenged the authority of European monarchies. The issue was whether the pope or the monarch would name (invest) powerful local church officials such as bishops of cities and abbots of monasteries. The conflict ended in 1122, when Emperor Henry V and Pope Calixtus II agreed on the Concordat of Worms. It differentiated between the royal and spiritual powers and gave the emperors a limited role in selecting bishops. The outcome seemed mostly a victory for the pope and his claim that he was God's chief representative in the world. However, the Emperor did retain considerable power over the Church.
The investiture controversy began as a power struggle between Pope Gregory VII (1072–85) and Henry IV, Holy Roman Emperor (1056–1106). A brief but significant struggle over investiture also occurred between Henry I of England and Pope Paschal II in the years 1103 to 1107, and the issue played a minor role in the struggles between church and state in France, as well.
By undercutting the Imperial power established by the Salian emperors, the controversy led to nearly 50 years of civil war in Germany, and the triumph of the great dukes and abbots. Imperial power was finally re-established under the Hohenstaufen dynasty. Historian Norman Cantor:
The age of the investiture controversy may rightly be regarded as the turning-point in medieval civilization. It was the fulfillment of the early Middle Ages because in it the acceptance of the Christian religion by the Germanic peoples reached its final and decisive stage… The greater part of the religious and political system of the high Middle Ages emerged out of the events and ideas of the investiture controversy.
After the decline of the Roman Empire, and prior to the Investiture Controversy, while theoretically a task of the church, investiture was in practice performed by members of the religious nobility. Many bishops and abbots were themselves usually part of the ruling nobility. Since the eldest son would inherit the title, younger siblings often found careers in the church. This was particularly true where the family may have established a proprietary church or abbey on their estate. Since Otto the Great (936-72) the bishops had been princes of the empire, had secured many privileges, and had become to a great extent feudal lords over great districts of the imperial territory. The control of these great units of economic and military power was for the king a question of primary importance, affecting as it did imperial authority. It was essential for a ruler or nobleman to appoint (or sell the office to) someone who would remain loyal.
Since a substantial amount of wealth and land was usually associated with the office of a bishop or abbot, the sale of church offices (a practice known as simony) was an important source of income for leaders among the nobility, who themselves owned the land and by charity allowed the building of churches.
The crisis began when a group within the church, members of the Gregorian Reform, decided to rebel against the rule of simony by forcefully taking the power of investiture from the ruling secular power, i.e. the Holy Roman Emperor and placing that power wholly within control of the church. The Gregorian reformers knew this would not be possible so long as the emperor maintained the ability to appoint the pope, so their first step was to forcibly gain the papacy from the control of the emperor. An opportunity came in 1056 when Henry IV became German king at six years of age. The reformers seized the opportunity to take the papacy by force while he was still a child and could not react. In 1059, a church council in Rome declared, with In Nomine Domini, that leaders of the nobility would have no part in the selection of popes and created the College of Cardinals as a body of electors made up entirely of church officials. Once Rome regained control of the election of the pope, it was ready to attack the practice of investiture and simony on a broad front.
In 1075, Pope Gregory VII composed the Dictatus Papae. One clause asserted that the deposal of an emperor was under the sole power of the pope. It declared that the Roman church was founded by God alone – that the papal power (the auctoritas of Pope Gelasius) was the sole universal power; in particular, a council held in the Lateran Palace from 24 to 28 February the same year decreed that the pope alone could appoint or depose churchmen or move them from see to see. By this time, Henry IV was no longer a child, and he continued to appoint his own bishops. He reacted to this declaration by sending Gregory VII a letter in which he withdrew his imperial support of Gregory as pope in no uncertain terms: the letter was headed "Henry, king not through usurpation but through the holy ordination of God, to Hildebrand, at present not pope but false monk". It called for the election of a new pope. His letter ends, "I, Henry, king by the grace of God, with all of my Bishops, say to you, come down, come down!", and is often quoted with the phrase "and to be damned throughout the ages", which is a later addition.
The situation was made even more dire when Henry IV installed his chaplain, Tedald, a Milanese priest, as Bishop of Milan, when another priest of Milan, Atto, had already been chosen in Rome by the pope for candidacy. In 1076 Gregory responded by excommunicating Henry, and deposed him as German king, releasing all Christians from their oath of allegiance.
Enforcing these declarations was a different matter, but the advantage gradually came to be on the side of Gregory VII. German princes and the aristocracy were happy to hear of the king's deposition. They used religious reasons to continue the rebellion started at the First Battle of Langensalza in 1075, and for seizure of royal holdings. Aristocrats claimed local lordships over peasants and property, built forts, which had previously been outlawed, and built up localized fiefdoms to secure their autonomy from the empire.
Thus, because of these combining factors, Henry IV had no choice but to back down, needing time to marshal his forces to fight the rebellion. In 1077, he traveled to Canossa in northern Italy to meet the pope and apologize in person. As penance for his sins, and echoing his own punishment of the Saxons after the First Battle of Langensalza, he dramatically wore a hair shirt and stood in the snow barefoot in the middle of winter in what has become known as the Walk to Canossa. Gregory lifted the excommunication, but the German aristocrats, whose rebellion became known as the Great Saxon Revolt, were not so willing to give up their opportunity. They elected a rival king, Rudolf von Rheinfeld. Three years later, Gregory declared his support for von Rheinfeld, and excommunicated Henry IV again.
Henry IV then proclaimed Antipope Clement III to be pope. In 1080, Rudolf died, effectively ending the internal revolt against Henry. In 1081, Henry invaded Rome, for the first time, with the intent of forcibly removing Gregory VII and installing a more friendly pope. Gregory VII called on his allies, the Normans in southern Italy, and they rescued him from the Germans in 1085. The Normans sacked Rome in the process, and when the citizens of Rome rose up against Gregory, he was forced to flee south with the Normans. He died soon thereafter.
The Investiture Controversy continued for several decades as each succeeding pope tried to diminish imperial power by stirring up revolt in Germany. These revolts were gradually successful. Henry IV was succeeded upon his death in 1106 by his son Henry V, who had rebelled against his father in favor of the papacy, and who had made his father renounce the legality of his antipopes before he died. Nevertheless, Henry V chose one more antipope, Gregory VIII. Later, he renounced some of the rights of investiture with the Concordat of Worms, abandoned Gregory, and was received back into communion and recognized as legitimate emperor as a result.
English investiture controversy of 1102 to 1107
At the time of Henry IV's death, Henry I of England and the Gregorian papacy were also embroiled in a controversy over investiture, and its solution provided a model for the eventual solution of the issue in the empire.
William the Conqueror had accepted a papal banner and the distant blessing of Pope Alexander II upon his invasion, but had successfully rebuffed the pope's assertion after the successful outcome, that he should come to Rome and pay homage for his fief, under the general provisions of the "Donation of Constantine".
The ban on lay investiture in Dictatus Papae did not shake the loyalty of William's bishops and abbots. In the reign of Henry I, the heat of exchanges between Westminster and Rome induced Anselm, Archbishop of Canterbury, to give up mediating and retire to an abbey. Robert of Meulan, one of Henry's chief advisors, was excommunicated, but the threat of excommunicating the king remained unplayed. The papacy needed the support of English Henry while German Henry was still unbroken. A projected crusade also required English support.
Henry I commissioned the Archbishop of York to collect and present all the relevant traditions of anointed kingship. "The resulting 'Anonymous of York' treatises are a delight to students of early-medieval political theory, but they in no way typify the outlook of the Anglo-Norman monarchy, which had substituted the secure foundation of administrative and legal bureaucracy for outmoded religious ideology."
Concordat of London, 1107
According to René Metz, author of "What Is Canon Law?", a concordat is a convention concluded between the Holy See and the civil power of a country to define the relationship between the Catholic Church and the state in matters in which both are concerned. The concordat is one type of an international convention. Concordats began during the First Crusade's end in 1098.
The Concordat of London (1107) suggested a compromise that was later taken up in the Concordat of Worms. In England, as in Germany, the king's chancery started to distinguish between the secular and ecclesiastical powers of the prelates. Employing this distinction, Henry gave up his right to invest his bishops and abbots while reserving the custom of requiring them to swear homage for the "temporalities" (the landed properties tied to the episcopate) directly from his hand, after the bishop had sworn homage and feudal vassalage in the commendation ceremony (commendatio), like any secular vassal. The system of vassalage was not divided among great local lords in England as it was in France, since the king was in control by right of the conquest.
Concordat of Worms and its significance
On the European mainland, after 50 years of fighting, the Concordat of Worms provided a similar, but longer lasting, compromise when signed on September 23, 1122. It eliminated lay investiture, while leaving secular leaders some room for unofficial but significant influence in the appointment process.
While the monarchy was embroiled in the dispute with the Church, it declined in power and broke apart, and localized rights of lordship over peasants grew. This had multiple effects: (1) serfdom increased, reducing the rights of the majority; (2) local taxes and levies increased while royal coffers declined; and (3) localized rights of justice grew, under which courts did not have to answer to royal authority. In the long term, the decline of imperial power would divide Germany until the 19th century. Similarly, in Italy, the investiture controversy weakened the emperor's authority and strengthened local separatist forces.
The papacy grew stronger from the controversy. Marshalling for public opinion engaged lay people in religious affairs increasing lay piety, setting the stage for the Crusades and the great religious vitality of the 12th century.
The dispute did not end with the Concordat of Worms. Future disputes between popes and Holy Roman Emperors continued until northern Italy was lost to the empire entirely, and the church crusaded against the Holy Roman Empire under Frederick II. According to Norman Cantor:
The investiture controversy had shattered the early-medieval equilibrium and ended the interpenetration of ecclesia and mundus. Medieval kingship, which had been largely the creation of ecclesiastical ideals and personnel, was forced to develop new institutions and sanctions. The result during the late eleventh and early twelfth centuries, was the first instance of a secular bureaucratic state whose essential components appeared in the Anglo-Norman monarchy."
- Rubenstein, Jay (2011), Armies of Heaven: The First Crusade and the Quest for Apocalypse, Basic Books, p. 18, ISBN 0-465-01929-3.
- Cantor, Norman F (1958), Church. Kingship, and Lay Investiture in England: 1089-1135, Princeton University Press, pp. 8–9.
- Blumenthal Investiture Controversy pp. 34–36
- Löffler, Klemens. "Conflict of Investitures." The Catholic Encyclopedia. Vol. 8. New York: Robert Appleton Company, 1910. 29 Jan. 2015
- Appleby, R. Scott. "How the pope got his political muscle." U.S. Catholic 64.9 (1999): 36. Academic Search Complete. EBSCO. Web. 5 June 2010.
- Paravicini Bagliani, Agostino. "Sia fatta la mia volontà". Medioevo (143): 76.
- Halsall, Paul. "Medieval Sourcebook: Henry IV: Letter to Gregory VII, 24 January 1076". Internet Medieval Source Book. 6/2/2010 <http://www.fordham.edu/halsall/source/henry4-to-g7a.html>.
- Horst, Fuhrmann. Germany in the High Middle Ages c.1050-1200. Press Syndicate of the University of Cambridge. p. 64. ISBN 0 521 31980-3.
- Shaff-Herzog. A Religious Encyclopedia: or Dictionary of Biblical, Historical, Doctrinal, and Practical Theology. II vols. New York, NY: Funk and Wagnalls Publishers, 1883. Pg. 911. 6/3/2010.
- Halsall, Paul. "Medieval Sourcebook: Gregory VII: First Deposition and Banning of Henry IV (22 February 1076)". Internet Medieval Source Book. 6/2/2010.
- Cantor, The Civilization of the Middle Ages, "The Entrenchment of Secular Leadership" p 286.
- Metz, René (1960). "What Is Canon Law? p.137". The Twentieth Century Encyclopedia of Catholicism, Section VIII: The Organization of the Church. 80. New York: Hawthorn Books Inc.
- H. Hearder, D. P. Waley, eds. A Short History of Italy: From Classical Times to the Present Day, 1963.
- N. Cantor, The Civilization of the Middle Ages, "The Entrenchment of Secular Leadership", p 395.
- Blumenthal, Uta-Renate (1988). The Investiture Controversy: Church and Monarchy from the Ninth to the Twelfth Century. University of Pennsylvania Press.
- Cantor, Norman F. Church. Kingship, and Lay Investiture in England: 1089-1135 (Princeton University Press, 1958)
- Cantor, Norman F. (1993). The Civilization of the Middle Ages. HarperCollins, PP 265–76, 284-88
- Cowdrey, H.E.J. (1998). Pope Gregory VII, 1073–1085. Oxford University Press.
- Jolly, Karen Louise. (1997). Tradition & Diversity: Christianity in a World Context to 1500. ME Sharpe.
- McCarthy, T. J. H. (2014). Chronicles of the Investiture Contest: Frutolf of Michelsberg and his continuators. Manchester: Manchester Medieval Sources. ISBN 9780719084706.
- Metz, René. (1960). What Is Canon Law? Hawthorn Books. New York.
- Morrison, Karl F., ed. The investiture controversy: issues, ideas, and results (Holt McDougal, 1971) excerpts from primary and secondary sources
- Tellenbach, Gerd (1993). The Western Church from the Tenth to the Early Twelfth Century. Cambridge University Press.
- Thompson, James Westfall, and Edgar Nathaniel Johnson. (1937) An introduction to medieval Europe, 300-1500 (1937) PP 380–90
- Slocum, Kenneth, ed. Sources in Medieval Culture and History (2010) pp 170–75
- "Investiture Controversy", from Encyclopædia Britannica Online.
- "Canonical Investiture", from the Catholic Encyclopedia]
- "Investiture", from the Columbia Encyclopedia.
- "The Owl, The Cat, And The Investiture Controversy", from the Online Reference Book for Medieval Studies (ORB).
- "Empire and Papacy", from the Internet Medieval Sourcebook.
- Henry IV: Letter to Gregory VII, Jan 24 1076.
- Gregory VII: First Deposition and Banning of Henry IV (Feb 22, 1076)
- Gregory VII: Second Banning and Dethronement of Henry IV (March 7, 1080)
- Gregory VII: Dictatus Papae 1090
- Ban on Lay Investitures, 1078
- The Concordat of Worms 1122
- The Canons of the First Lateran Council, 1123
- Avalon Project, Yale University: Documents relating to the War of the Investitures | https://en.wikipedia.org/wiki/Investiture_Controversy |
Rhinorrhea or rhinorrhoea is a condition in which the nasal cavity is filled with a significant amount of mucus. The condition, commonly known as a runny nose, occurs relatively frequently. Rhinorrhea is a common symptom of allergies or certain diseases, such as the common cold or hay fever. It can be a side effect of crying, exposure to cold temperatures, cocaine abuse, or withdrawal from drugs such as opioids like methadone. Treatment for rhinorrhea is not usually necessary, but there are a number of medical treatments and preventive techniques available.
Signs and symptoms
Rhinorrhea is characterized by an excess amount of mucus produced by the mucous membranes that line the nasal cavities. The membranes create mucus faster than it can be processed, causing a backup of mucus in the nasal cavities. As the cavity fills up, it blocks off the air passageway, causing difficulty breathing through the nose. Air caught in nasal cavities, namely the sinus cavities, cannot be released and the resulting pressure may cause a headache or facial pain. If the sinus passage remains blocked, there is a chance that sinusitis may result. If the mucus backs up through the Eustachian tube, it may result in ear pain or an ear infection. Excess mucus accumulating in the throat or back of the nose may cause a post-nasal drip, resulting in a sore throat or coughing. Additional symptoms include sneezing, nosebleeds, and nasal discharge.
Rhinorrhea is especially common during winter months and other periods of cold weather. Cold-induced rhinorrhea occurs due to a combination of thermodynamics and the body's natural reactions to cold weather stimuli. One of the purposes of nasal mucus is to warm inhaled air to body temperature as it enters the body. In order for this to happen, the nasal cavities must be constantly coated with liquid mucus. During cold, dry seasons, the mucus lining nasal passages tends to dry out, meaning that mucous membranes must work harder, producing more mucus to keep the cavity lined. As a result, the nasal cavity can fill up with mucus. At the same time, when air is exhaled, water vapor in breath condenses as the warm air meets the colder outside temperature near the nostrils. This causes an excess amount of water to build up inside nasal cavities. In these cases, the excess fluid usually spills out externally through the nostrils.
Rhinorrhea can be a symptom of other diseases, such as the common cold or influenza. During these infections, the nasal mucous membranes produce excess mucus, filling the nasal cavities. This is to prevent infection from spreading to the lungs and respiratory tract, where it could cause far worse damage. It has also been suggested that rhinorrhea is a result of viral evolution, and may be a response that is not useful to the host, but which has evolved by the virus to maximise its own infectivity. Rhinorrhea caused by these infections usually occurs on circadian rhythms. Over the course of a viral infection, sinusitis (the inflammation of the nasal tissue) may occur, causing the mucous membranes to release more mucus. Acute sinusitis consists of the nasal passages swelling during a viral infection. Chronic sinusitis occurs when one or more nasal polyps appear. This can be caused by a deviated septum as well as a viral infection.
Rhinorrhea can also occur when individuals with allergies to certain substances, such as pollen, dust, latex, soy, shellfish, or animal dander, are exposed to these allergens. In people with sensitized immune systems, the inhalation of one of these substances triggers the production of the antibody immunoglobulin E (IgE), which binds to mast cells and basophils. IgE bound to mast cells are stimulated by pollen and dust, causing the release of inflammatory mediators such as histamine. In turn, this causes, among other things, inflammation and swelling of the tissue of the nasal cavities as well as increased mucus production. Particulate matter in polluted air and chemicals such as chlorine and detergents, which can normally be tolerated, can make the condition considerably worse.
Rhinorrhea is also associated with shedding tears, whether from emotional events or from eye irritation. When excess tears are produced, the liquid drains through the inner corner of the eyelids, through the nasolacrimal duct, and into the nasal cavities. As more tears are shed, more liquid flows into the nasal cavities. The buildup of fluid is usually resolved via mucus expulsion through the nostrils.
If caused by a head injury, rhinorrhea can be a much more serious condition. A basilar skull fracture can result in a rupture of the barrier between the sinonasal cavity and the anterior cranial fossae or the middle cranial fossae. This rupture can cause the nasal cavity to fill with cerebrospinal fluid. This condition, known as cerebrospinal fluid rhinorrhoea or CSF rhinorrhea, can lead to a number of serious complications and possibly death if not addressed properly.
Rhinorrhea can occur as a symptom of opioid withdrawal accompanied by lachrymation. Other causes include cystic fibrosis, whooping cough, nasal tumors, hormonal changes, and cluster headaches. Rhinorrhea can also be the side effect of several genetic disorders, such as primary ciliary dyskinesia.
In most cases treatment for rhinorrhea is not necessary, since it will clear up on its own, especially if it is the symptom of an infection. For general cases, blowing one's nose can get rid of the mucus buildup. Though blowing may be a quick fix, it is likely to stimulate further mucus production in the sinuses, leading to frequent and heavier mucus buildups in the nose. Alternatively, saline nasal sprays and vasoconstrictor nasal sprays may also be used, but may become counterproductive after several days of use, causing rhinitis medicamentosa.
In recurring cases, such as those due to allergies, there are medicinal treatments available. For cases caused by histamine buildup, several types of antihistamines can be obtained relatively cheaply from drugstores.
People who prefer to keep clear nasal passages, such as singers, who need a clear nasal passage to perform, may use a technique called "nasal irrigation" to prevent rhinorrhea. Nasal irrigation involves rinsing the nasal cavity regularly with salty water or store-bought saline solutions.
Drama Historical Context Teacher Resources
Find Drama Historical Context educational ideas and activities
The Diary of Anne Frank
While designed to supplement a viewing of the PBS Masterpiece Classic The Diary of Anne Frank, this resource can also serve as an excellent informational text and activity source for your students on the historical context and timeline...
8th - 10th Social Studies & History CCSS: Adaptable
Historical Context: African-American Oral Tradition
For this African-American oral tradition worksheet, students read and learn about the vast and important history of the oral traditions that existed in the African-American culture. Students use this worksheet as a pre-reading text to...
10th - 11th English Language Arts
To Kill a Mockingbird: Culture and History
The second of 10 lessons in a unit study of To Kill a Mockingbird establishes the historical and cultural context of Harper Lee's novel. The class listens to the second part of an audio that describes Lee's life experiences that parallel the...
9th - 12th English Language Arts CCSS: Adaptable
Investigating the Harlem Renaissance
The work of Langston Hughes opens the door to research into the origin and legacy of the Harlem Renaissance and how the literature of the period can be viewed as a commentary on race relations in America. In addition, groups are assigned...
11th - 12th English Language Arts
Activities for Teaching “The Road Not Taken” by Robert Frost
Use all of these exercises, assignments, and assessments or pick and choose your favorites for your study of "The Road Not Taken" by Robert Frost. In this resource you will find: an informational text to examine, vocabulary lists and...
7th - 9th English Language Arts
Comprehension and Discussion Activities for the Movie Rabbit-Proof Fence
Lead discussion and thoughtful analysis as pupils view Rabbit Proof Fence, a drama based on a true story about three aboriginal girls who ran away from Western Australia to return to their Aboriginal families in 1931. Here you'll find...
6th - 8th Social Studies & History CCSS: Adaptable
Creating Scrolls Based on the Illustrated Tale of Genji
Now these are learning activities full of fun, art, and cultural exploration. Kids consider the art of storytelling through comic book images. They then look at the Tale of Genji as it was written in the 11th century. They discuss...
6th - 8th Visual & Performing Arts
Comedy Across the Curriculum
The New York Times Learning Network provides the resources that permit pupils to examine and then write and perform a fake news broadcast in the vein of “The Daily Show” or “Saturday Night Live” Weekend Update. The generated reports...
11th - 12th Visual & Performing Arts
Hamlet and the Elizabethan Revenge Ethic in Text and Film
Students research the social context of Elizabethan England for Shakespeare's "Hamlet". They identify cultural influences on the play focusing on the theme of revenge and then analyze and compare film interpretations of the play.
9th - 12th English Language Arts
English Literature: An Overview
Relate literary works and authors to the major themes of English literature from the Anglo-Saxon period through the 20th century. Working in groups, high schoolers will evaluate period philosophy, religion, and politics that influenced...
11th - 12th English Language Arts
Figures of Speech: A Midsummer Night's Dream
High school readers analyze figures of speech in Shakespeare's A Midsummer Night's Dream with support from a two-page worksheet. They respond to four multi-step questions regarding the use of metaphors, similes, hyperbole, and irony in...
8th - 12th English Language Arts
The Play's the Thing: The Drama of Cyrano de Bergerac
Students practice dramatic 'living' through various drama activities. In this drama lesson, students define drama, view examples of dramatic elements in Cyrano de Bergerac and Roxanne, define characterization within the dramas, study the...
8th English Language Arts
To Kill a Mockingbird: Historical and Cultural Context
As part of their study of the film adaptation of To Kill A Mockingbird, class members analyze how Robert Mulligan uses the film lens to depict the historical period and social issues presented in Harper Lee's novel. A superior resource...
7th - 10th Visual & Performing Arts CCSS: Adaptable
Common Core Teaching and Learning Strategies
Here's a resource that deserves a place in your curriculum library, whether or not your school has adopted the Common Core. Designed for middle and high school language arts classes, the packet is packed with teaching tips, materials,...
6th - 12th English Language Arts CCSS: Designed
Teaching the Holocaust through Literature
Centered on the short story "The Tenth Man" by Polish Holocaust survivor Ida Fink, here is a solid one-day resource to support study of World War II or Nazi history, short stories, or to complement any ELA unit on The Diary of Anne Frank...
7th - 12th English Language Arts
The Trial of Hamlet
Hamlet, that is not a rat behind the curtain, it is Polonius, and now you’re on trial for his murder. Practice and develop close reading skills, discover how a trial works, and get the entire class involved in this trial. The class...
11th - 12th English Language Arts CCSS: Designed
"Et tu, Brute?" - The Characters, Conflict and Historical context of Shakespeare's Julius Caesar
Students analyze the Shakespearian play, "Julius Caesar" in this seven lesson unit. Through readings, hands-on projects, and the study of plot development, comparisons are made to the movie and the historical records available.
6th English Language Arts
Real versus nominal value (economics)
In economics, a nominal value is an economic value expressed in historical nominal monetary terms. By contrast, a real value is a value that has been adjusted from a nominal value to remove the effects of general price level changes over time and is thus measured in terms of the general price level in some reference year (the base year). For example, changes in the nominal value of some commodity bundle over time can happen because of a change in the quantities in the bundle or their associated prices, whereas changes in real values reflect only changes in quantities. The process of converting from nominal to real terms is known as inflation adjustment.
Real values are a measure of purchasing power net of any price changes over time. For example, nominal income is often restated as real income, thus removing that part of income changes that merely reflect inflation (a general increase in prices). Similarly, for aggregate measures of output, such as gross domestic product (GDP), the nominal amount reflects production quantities and prices in that time period, whereas the differences between real amounts in different time periods reflect only changes in quantities. A series of real values over time, such as for real GDP, measures quantities over time expressed in prices of one year, called the base year (or more generally the base period). Real values in different years then express values of the bundles as if prices had been constant for all the years, with any differences due to differences in underlying quantities.
The nominal/real value distinction can apply not only to time-series data, as above, but also to cross-section data varying by region. For example, the total sales value of a particular good produced in a particular region of a country is influenced by both the physical amount sold and the selling price, which may be different from that of the country as a whole; for purposes of comparing the economic activity of different regions, the nominal output of the good in that region can be adjusted into real terms by repricing the goods at national-average prices.
The nominal value of a commodity bundle in a given year depends on both quantities and then-current prices, namely, as a sum of prices times quantities for the different commodities in the bundle. In turn nominal values are related to real values by the following arithmetic definition:
- nominal value / real value = (P x Q) / Q = P.
Here P is a price index, and Q is a quantity index of real value. In the equation, P is constructed to equal 1.00 in the base year. Alternatively, P can be constructed to equal 100 in the base year:
- (nominal value / real value) x 100 = P.
The base year can be any year, and comparisons of quantities are valid provided all values are adjusted to their values in the same base year. After a number of years have passed in which government statistics have been reported in terms of a particular base year, a new base year for comparisons is typically adopted; for the next several years all new data as well as all pre-existing data will be reported in terms of the new base year.
The simple case of a bundle of commodities (goods) is one that has only one commodity. In that case, output or consumption may be measured either in terms of money value (nominal) or physical quantity (real). Let i designate that commodity and let:
- Pi = the unit price of i, say, $5
- Qi = the quantity of good i, say, 10 units.
The nominal value of the good would then be price times quantity:
- nominal value of good i = Pi x Qi = $5 x 10 = $50.
Given only the nominal value and price, derivation of a real value is immediate:
- real value of good i = (Pi x Qi)/Pi = Qi = 50/5 = 10.
The price "deflates" (divides) the nominal value to derive a real value, the quantity itself.
Similarly, for a series of years (say five), given only nominal values of the good and prices in each year t, a real value can be derived for each of the five years:
- real value in year t = (nominal value in year t) / (price relative to base year) = Qit.
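This deflation rule can be sketched in a few lines; the prices and nominal values below are invented for illustration, not figures from the text:

```python
# Deflate a single good's nominal series: real_t = nominal_t / price_t.
# This recovers the quantity in each year; all figures are hypothetical.
nominal_values = [50.0, 55.0, 63.0, 72.0, 100.0]  # price x quantity, per year
prices = [5.0, 5.5, 6.0, 6.0, 8.0]                # unit price, per year

real_values = [n / p for n, p in zip(nominal_values, prices)]
print(real_values)  # quantities: [10.0, 10.0, 10.5, 12.0, 12.5]
```

Dividing each year's nominal value by that year's price leaves only the quantity, which is exactly what the real series measures.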
The following example generalizes from an individual good to a bundle of goods across different years for which P, a price index comparing the general price level across years, is available. Consider a nominal value (say of an hourly wage rate) in each different year t. To derive a real-value series from a series of nominal values in different years, one divides the nominal wage rate in each year by Pt, the price index in that year. By definition then:
- real value in year t = (nominal value in year t) / Pt.
If for years 1 and 2 (say 20 years apart) the nominal wage and price level P of goods are respectively
- nominal wage rate: $10 in year 1 and $16 in year 2
- price level P: 1.00 in year 1 and 1.33 in year 2,
then real wages using year 1 as the base year are respectively:
- $10 (= $10/1.00) in year 1 and approximately $12 (= $16/1.33) in year 2.
The real wage so constructed in each different year indexes the amount of commodities in that year that could be purchased, for comparison to other years. Thus, in the example the price level increased by 33 percent, but the real wage rate still increased by 20 percent, permitting a 20 percent increase in the quantity of commodities the nominal wage could purchase.
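The wage example can be checked numerically with figures matching the stated percentages (a price level rising by about a third and a 20 percent real gain); the specific dollar amounts are assumptions for illustration:

```python
# Real wage = nominal wage / price level, with year 1 as the base (P = 1.0).
# The wage and price figures are illustrative, not from official data.
nominal_wage = {"year 1": 10.0, "year 2": 16.0}
price_level = {"year 1": 1.0, "year 2": 4.0 / 3.0}  # about a 33 percent rise

real_wage = {y: nominal_wage[y] / price_level[y] for y in nominal_wage}
real_growth = real_wage["year 2"] / real_wage["year 1"] - 1.0
print(real_wage, real_growth)  # real wage: about 10 -> 12, about 20 percent
```

The nominal wage grew 60 percent, but a third of prices' worth of that growth was inflation; only the 20 percent residual reflects extra purchasing power.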
The above generalization to a commodity bundle from the previous single-good illustration has practical use, because price indexes and the National Income and Product Accounts are constructed from such bundles of commodities and their respective prices.
A sum of nominal values for each of the different commodities in the bundle is also called a nominal value. A bundle of n different commodities with corresponding prices and quantities for each year t defines:
- nominal value of that bundle in year t = P1t x Q1t + . . . + Pnt x Qnt.
From the above:
- Pt = the value of a price index in year t.
The nominal value of the bundle over a series of years and corresponding Pt define:
- real value of the bundle in year t = Qt = (nominal value of the bundle in year t) / Pt.
Alternatively, multiplying both sides by Pt:
- nominal value of the bundle in year t = Pt x Qt.
So, every nominal value can be dichotomized into a price-level part and a real part. The real part Qt is an index of the quantities in the bundle.
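The bundle arithmetic can be sketched for a two-good bundle over two years; every price, quantity, and index value here is made up for illustration:

```python
# Nominal value of a bundle in year t = sum over goods of price x quantity;
# dividing by the price index P_t yields the real value Q_t (base year: t1).
prices = {"t1": [5.0, 2.0], "t2": [6.0, 2.5]}   # per-good prices, per year
quantities = {"t1": [10, 20], "t2": [10, 22]}   # per-good quantities

nominal = {t: sum(p * q for p, q in zip(prices[t], quantities[t]))
           for t in prices}
P = {"t1": 1.00, "t2": 1.25}                    # assumed price index values
real = {t: nominal[t] / P[t] for t in nominal}
print(nominal)  # {'t1': 90.0, 't2': 115.0}
print(real)     # {'t1': 90.0, 't2': 92.0}
```

Nominal value rises by about 28 percent between the two years, but only about 2 percent of that (90 to 92) reflects larger quantities; the rest is the rise in the price level.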
Real values (such as real wages or real gross domestic product) can be derived by dividing the relevant nominal value (e.g., nominal wage rate or nominal GDP) by the appropriate price index. For consumers, a relevant bundle of goods is that used to compute the Consumer Price Index (CPI). So, for wage earners as consumers a relevant real wage is the nominal wage (after-tax) divided by the CPI. A relevant divisor of nominal GDP is the GDP price index.
Real values represent the purchasing power of nominal values in a given period, such as wages or total production. In particular, price indexes are typically calculated relative to some base year. If for example the base year is 1992, real values are expressed in constant 1992 dollars, referenced as 1992=100, since the published index is usually normalized to have the price index equal 100 in the base year. To use the price index as a divisor for converting a nominal value into a real value, as in the previous section, the published index is first divided by the base-year price-index value of 100. In the U.S. National Income and Product Accounts, nominal GDP is called GDP in current dollars (that is, in prices current for each designated year), and real GDP is called GDP in [base-year] dollars (that is, in dollars that can purchase the same quantity of commodities as in the base year). In effect the price index of 100 for the base year is a numéraire for price-index values in other years.
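A published index normalized to 100 in the base year is divided by 100 before deflating, as the paragraph above describes; the GDP and index figures below are invented:

```python
# Deflate nominal GDP by a price index published with base year = 100.
nominal_gdp = [6000.0, 7500.0, 9000.0]  # current dollars, hypothetical
price_index = [100.0, 125.0, 150.0]     # base year first, e.g. 1992 = 100

real_gdp = [n / (p / 100.0) for n, p in zip(nominal_gdp, price_index)]
print(real_gdp)  # [6000.0, 6000.0, 6000.0]: all the growth here was prices
```

In this deliberately extreme example, the entire rise in nominal GDP is inflation, so GDP in constant base-year dollars is flat.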
The terminology of classical economics used by Adam Smith took a unit of labour as the purchasing power unit, so monetary quantities were defined by the hours of labour required to produce or purchase a given quantity.
Since interest rates are measured as percentages rather than in terms of units of some currency, real interest rates are measured as the difference between nominal interest rates and the rate of inflation. The expected real interest rate as of the starting time of a loan is the nominal interest rate minus the inflation rate expected over the term of the loan. The realized (ex post) real interest rate is computed by subtracting the actual inflation rate that ends up prevailing during the life of the loan from the nominal interest rate, and reflects what actually happened during the life of the loan.
The relationship above is approximate only. The actual relationship is as follows:
- (1 + IRN) = (1 + IRR) x (1 + I)
where:
- IRN is the nominal interest rate,
- IRR is the real interest rate, and
- I is the inflation rate.
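The approximate rule (real equals nominal minus inflation) can be compared against the exact relation (1 + IRN) = (1 + IRR) x (1 + I); the interest and inflation rates below are illustrative:

```python
# Exact real interest rate from (1 + IRN) = (1 + IRR) * (1 + I),
# versus the subtraction approximation. Rates are assumed for illustration.
nominal_rate = 0.08  # IRN: 8 percent
inflation = 0.05     # I: 5 percent

approx_real = nominal_rate - inflation                        # about 3.00 percent
exact_real = (1.0 + nominal_rate) / (1.0 + inflation) - 1.0   # about 2.86 percent
print(approx_real, exact_real)
```

The gap between the two answers grows with the inflation rate, which is why the subtraction rule is only an approximation.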
- Aggregation problem
- Classical dichotomy
- Constant Item Purchasing Power Accounting
- Cost-of-living index
- Financial repression
- Index (economics)
- Inflation accounting
- Money illusion
- National accounts
- Neutrality of money
- Peppercorn (legal), a nominal fee paid to fulfill a contractual requirement
- Real interest rate
- Real prices and ideal prices
Communication Skills Teacher Resources
Find Communication Skills educational ideas and activities
Effective Communication for Successful Careers
Having good written communication skills is a must in today's workplace. Foster these skills by engaging learners is a discussion on how good writing skills can improve communication in the workplace. Have them write a project proposal...
8th 21st Century Skills CCSS: Designed
Communication, Day 1: Non-Verbal Communication
Have your secondary special education class learn and practice effective communication skills. Both verbal and non-verbal communication are discussed and practiced. They communicate using body language, build listening skills, and discuss...
9th - 12th Health
101 Ways to Teach Children Social Skills
Increasing pressure to improve student achievement has made it easy to overlook the social skills they also need to develop. With this collection of worksheets and activities, you'll be able to improve children's communication, teamwork,...
K - 6th 21st Century Skills CCSS: Adaptable
Conversation Visual Prompts
Help learners understand the importance and proper placement of non-verbal communication using these visual prompts. This set includes graphics to remind students to listen (ear), keep personal space (ruler), use the right facial...
K - 12th Special Education & Programs CCSS: Adaptable
Shakespeare: Nonverbal and Verbal Communication
Define nonverbal communication and view "The Shakespeare Sessions" for examples of nonverbal communication. Groups read through the dialogue of a scene and assign appropriate gestures, movements, and mannerisms to events and characters....
7th - 12th English Language Arts
Lesson: Communication, What's Valued, and the Written Word
Upper graders compare their cell phones to a lacquer box from the Japanese Edo Period. They consider how each is a form of communication and how the very nature of each object communicates social norms, ideology, and beliefs. A really...
9th - 12th Visual & Performing Arts
Ag Communications - One to One Communication
Explore the many aspects of communication and conversation. There are definitions and examples given to identify, explain, and understand the terminology of non-verbal communication. Helping the class become aware of the skills involved...
7th - 12th Health
Verbal Versus Nonverbal Communication
Young scholars complete a verbal-versus-nonverbal communication chart and use it to create a multimedia presentation that includes the different types of communication strategies. Then answer a...
6th - 9th English Language Arts
BBC Learning English, Listening Comprehension
In this specific listening comprehension worksheet, students listen to an audio file and then choose the best answer to 15 corresponding multiple choice questions. Students then respond to four questions about non-verbal communication in...
Higher Ed English Language Arts
Asperger syndrome now comes under the single umbrella term of autism spectrum disorder (ASD). It is classified as a developmental disorder that affects how the brain processes information. People with Asperger syndrome have a wide range of strengths, weaknesses, skills and difficulties.
Common characteristics include difficulty in forming friendships, communication difficulties (such as a tendency to take things literally), and an inability to understand social rules and body language.
Although Asperger syndrome cannot be cured, appropriate intervention and experience can help people to develop skills, use strategies to compensate and help build up coping skills. Social skills training, which teaches people how to behave in different social situations, is often considered to be of great value to those with Asperger syndrome.
Counselling or psychological therapy, including cognitive behaviour therapy (CBT) can help people with Asperger syndrome to understand and manage their behavioural responses.
New ASD classification system
A new classification system for autism and Asperger syndrome, introduced in 2013 (in the fifth edition of the Diagnostic and Statistical Manual of Mental Disorders), gives only one diagnosis of autism spectrum disorder. This is the result of much research that indicated there was not enough evidence to suggest that the conditions of autism and Asperger syndrome were distinct conditions, so now they all come under the single umbrella term, ASD.
This means that a diagnosis of Asperger syndrome will no longer be given. The preferred term is now ASD. However, there are a number of people who have been diagnosed with Asperger’s in the past and identify with this diagnosis. They will still be able to refer to their condition as Asperger’s in the future, despite the fact that it is no longer a formal diagnosis.
Symptoms of Asperger syndrome
More males than females are diagnosed with Asperger syndrome or ASD. While every person who has the condition will experience different symptoms and severity of symptoms, some of the more common characteristics include:
- average or above-average intelligence
- difficulties with high-level language skills such as verbal reasoning, problem solving, making inferences and predictions
- difficulties in empathising with others
- problems with understanding another person’s point of view
- difficulties engaging in social routines such as conversations and ‘small talk’
- problems with controlling feelings such as anger, depression and anxiety
- a preference for routines and schedules which can result in stress or anxiety if a routine is disrupted
- specialised fields of interest or hobbies.
Emotions of other people
A person with Asperger syndrome may have trouble understanding the emotions of other people, and the subtle messages sent by facial expression, eye contact and body language are often missed or misinterpreted. Because of this, people with Asperger syndrome might be mistakenly perceived as being egotistical, selfish or uncaring.
These are unfair labels because the person concerned may be unable to understand other people’s emotional states. People with Asperger syndrome are usually surprised when told their actions were hurtful or inappropriate.
Asperger syndrome and sexual codes of conduct
Research into the sexual understanding of people with Asperger syndrome is in its infancy. Studies suggest that individuals with Asperger syndrome are as interested in sex as anyone else, but many struggle with the wide range of complex skills required to successfully have intimate relationships.
People with Asperger syndrome can sometimes appear to have an ‘inappropriate’, ‘immature’ or ‘delayed’ understanding of sexual codes of conduct. They may not understand the boundaries of appropriate sexual behaviour and expression. This can sometimes result in sexually inappropriate behaviour. For example, an adult with Asperger syndrome might not understand the social rule that it is not considered socially appropriate to display sexualised behaviours in a public place.
Even people who are high achieving and academically or vocationally successful can have trouble negotiating the ‘hidden rules’ of courtship.
Issues for partners of people with Asperger syndrome or ASD
Some people with Asperger syndrome can successfully maintain relationships and parent children. However, like most relationships, there are challenges.
A common marital problem is unfair distribution of responsibilities. For example, the partner of a person with Asperger syndrome may be used to doing everything in the relationship when it is just the two of them. However, the partner may need practical and emotional support once children come along, something that the person with Asperger syndrome may not be fully able to provide.
When the partner expresses frustration or becomes upset that they are given no help of any kind, the person with Asperger syndrome is typically baffled. Tension in the relationship often makes their symptoms worse.
An adult’s diagnosis of Asperger syndrome often follows their child’s diagnosis of ASD. This ‘double whammy’ can be extremely distressing to the partner who has to cope simultaneously with both diagnoses. Counselling, or joining a support group where they can talk with other people who face the same challenges, can be helpful.
Some common issues for partners of people with Asperger syndrome include:
- feeling overly responsible for their partner
- failure to have their own needs met by the relationship
- lack of emotional support from family members and friends who do not fully understand or appreciate the extra strains placed on a relationship by Asperger syndrome
- a sense of isolation, because the challenges of their relationship are unique and not easily understood by others
- frustrations, since problems in the relationship do not seem to improve despite great efforts
- doubting the integrity of the relationship, or frequently wondering about whether or not to end the relationship
- difficulties in accepting that their partner will not ‘recover’ from Asperger syndrome
- after accepting that their partner’s Asperger syndrome cannot be ‘cured’, partners can often experience emotions such as guilt, despair and disappointment.
The workplace and Asperger syndrome
A person with Asperger syndrome may find their job opportunities are limited by their disability. It may help to choose a vocation that takes into account their symptoms and capitalises on their strengths, rather than highlighting their weaknesses.
Career suggestions for visual thinkers
The following career suggestions are adapted from material written by Temple Grandin, who has high-functioning autism and is a professor at Colorado State University, USA. Suggestions include:
- computer programming
- commercial art
- equipment design
- appliance repair
- handcraft artisan
- webpage designer
- video game designer
- building maintenance
- building trades.
Career suggestions for those good at mathematics or music
- computer programming
- journalist, copy editor
- taxi driver
- piano (or other musical instrument) tuner
- filing positions
- bank teller
Where to get help
- Your doctor
- Aspergers Victoria Tel. (03) 9845 2766
- Amaze – Autism Victoria Tel. (03) 9657 1600
- Centre for Developmental Disability Health Victoria (CDDHV) Tel. (03) 9902 4467
Things to remember
- A person with Asperger syndrome often experiences difficulties when trying to understand the emotions of other people. Subtle messages that are sent by facial expression, eye contact and body language are often missed.
- Social skills training, which teaches people with Asperger syndrome how to behave in different social situations, is often considered to be of great value to people with this syndrome.
This page has been produced in consultation with and approved by:
Autism Victoria trading as amaze
Content on this website is provided for education and information purposes only. Information about a therapy, service, product or treatment does not imply endorsement and is not intended to replace advice from your doctor or other registered health professional. Content has been prepared for Victorian residents and wider Australian audiences, and was accurate at the time of publication. Readers should note that, over time, currency and completeness of the information may change. All users are urged to always seek advice from a registered health care professional for diagnosis and answers to their medical questions.
Scientists today measure the Earth's surface temperature using thermometers at weather stations and on ships and buoys all over the world. Such thermometer records cover a large fraction of the globe going back to the mid-19th century, allowing scientists to determine a global average temperature trend for the last 160 years.
Before that time not many thermometer records are available, so scientists use indirect temperature measurements, supported by anecdotal evidence recorded by diarists, and the few thermometer records that do exist. Scientists must rely solely on indirect methods to look back further than recorded human history.
Indirect ways of assessing past temperatures, using so-called temperature proxies, take measurements of responses to past temperature change that are preserved in natural archives such as ice, rocks and fossils.
For example, ice sheets form as snow builds up, with each year's snowfall preserved as a single, visible layer. There are measurable chemical differences in snow formed at different temperatures, so ice cores provide a record of polar temperature going back around 250,000 years for Greenland and 800,000 years for Antarctica.
Yearly banding is also found in fossilised corals and lake sediment deposits, and each band has a specific chemistry that reflects the temperature when it formed. Growth rings in tree trunks can be wider or thinner depending on the climate at the time of growth, so fossilised trees can reveal the length of growing seasons. And fossilised or frozen pollen grains allow scientists to determine what plants were growing in the past, which can give us a good idea of the climate at the time.
Marine sediment cores provide temperature records spanning millions of years. They contain the fossilised shells of tiny marine creatures that preserve a chemical record of the sea temperature when they lived.
To make their temperature reconstructions as accurate as possible scientists have calibrated each proxy by testing how it changes in response to changing temperature. However, the further back in time we look, the more sparse the proxy temperature records become. Therefore the most reliable way to work out past temperatures is to combine different proxies – and to use data from many locations to screen out local temperature fluctuations.
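The calibrate-then-combine workflow described above can be sketched in a few lines of Python. This is only an illustration of the idea, not a real reconstruction method: the proxy values and the assumed linear proxy-temperature relationship below are invented, and a genuine study would use measured isotope or ring-width series with a proper uncertainty model.

```python
def calibrate(proxy, temps):
    """Fit temperature = a * proxy + b by least squares over the
    period where the proxy overlaps the instrumental record."""
    n = len(proxy)
    mx, my = sum(proxy) / n, sum(temps) / n
    sxx = sum((x - mx) ** 2 for x in proxy)
    sxy = sum((x - mx) * (y - my) for x, y in zip(proxy, temps))
    a = sxy / sxx
    return a, my - a * mx

def reconstruct(proxy, a, b):
    """Apply the calibration to the full (pre-instrumental) proxy series."""
    return [a * x + b for x in proxy]

def stack(records):
    """Average several reconstructions year by year, so local
    fluctuations that differ between sites tend to cancel out."""
    return [sum(year) / len(year) for year in zip(*records)]

# Invented overlap period: this fake proxy responds linearly to
# temperature (proxy = 2*T + 1), so calibration should recover
# T = 0.5 * proxy - 0.5.
instrumental = [10.0, 11.0, 12.5, 9.0]
proxy_overlap = [2 * t + 1 for t in instrumental]
a, b = calibrate(proxy_overlap, instrumental)

# Two sites with opposite local noise average toward the shared signal.
site1 = [10.5, 11.25]
site2 = [9.5, 10.75]
combined = stack([site1, site2])  # -> [10.0, 11.0]
```

The same screening logic scales up: the more independent sites are stacked, the more the site-specific noise averages away while the common climate signal remains.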
• This article was written by Carbon Brief in conjunction with the Guardian and partners
The ultimate climate change FAQ
This editorial is free to reproduce under Creative Commons
This post by The Guardian is licensed under a Creative Commons Attribution-No Derivative Works 2.0 UK: England & Wales License.
Based on a work at theguardian.com
The overconfidence effect is a well-established bias in which a person's subjective confidence in his or her judgments is reliably greater than the objective accuracy of those judgments, especially when confidence is relatively high. Overconfidence is one example of a miscalibration of subjective probabilities. Throughout the research literature, overconfidence has been defined in three distinct ways: (1) overestimation of one's actual performance, (2) overplacement of one's performance relative to others, and (3) excessive certainty regarding the accuracy of one's beliefs, called overprecision.
The most common way in which overconfidence has been studied is by asking people how confident they are of specific beliefs they hold or answers they provide. The data show that confidence systematically exceeds accuracy, implying people are more sure that they are correct than they deserve to be. If human confidence had perfect calibration, judgments with 100% confidence would be correct 100% of the time, 90% confidence correct 90% of the time, and so on for the other levels of confidence. By contrast, the key finding is that confidence exceeds accuracy so long as the subject is answering hard questions about an unfamiliar topic. For example, in a spelling task, subjects were correct about 80% of the time, whereas they claimed to be 100% certain. Put another way, the error rate was 20% when subjects expected it to be 0%. In a series where subjects made true-or-false responses to general knowledge statements, they were overconfident at all levels. When they were 100% certain of their answer to a question, they were wrong 20% of the time.
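The calibration comparison described above (stated confidence versus actual accuracy) is straightforward to compute from item-level data. A minimal sketch in Python follows; the judgment data are invented to mirror the spelling example, where subjects were 100% certain but correct only 80% of the time.

```python
from collections import defaultdict

def calibration(judgments):
    """judgments: (stated_confidence, was_correct) pairs.
    Returns a per-confidence-level accuracy table and the overall
    overconfidence score (mean stated confidence minus accuracy)."""
    by_level = defaultdict(list)
    for conf, correct in judgments:
        by_level[conf].append(1 if correct else 0)
    table = {c: sum(hits) / len(hits) for c, hits in sorted(by_level.items())}
    mean_conf = sum(c for c, _ in judgments) / len(judgments)
    accuracy = sum(ok for _, ok in judgments) / len(judgments)
    return table, mean_conf - accuracy

# Invented data echoing the spelling example: ten items answered with
# 100% confidence, of which only eight were actually correct.
items = [(1.0, True)] * 8 + [(1.0, False)] * 2
table, gap = calibration(items)
# table -> {1.0: 0.8}; gap is about 0.2 (confidence exceeded accuracy by 20 points)
```

A perfectly calibrated judge would show a gap near zero at every confidence level; the overconfidence finding is that the gap is positive on hard, unfamiliar questions.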
One manifestation of the overconfidence effect is the tendency to overestimate one's standing on a dimension of judgement or performance. This subsection of overconfidence focuses on the certainty one feels in their own ability, performance, level of control or chance of success. This phenomenon is most likely to occur on hard tasks, hard items, when failure is likely or when the individual making the estimate is not especially skilled. Overestimation has been seen to occur across domains other than those pertaining to one's own performance. These include the illusion of control and the planning fallacy.
Illusion of control
Illusion of control describes the tendency for people to behave as if they might have some control when in fact they have none. However, evidence does not support the notion that people systematically overestimate how much control they have; when they have a great deal of control, people tend to underestimate how much control they have.
The planning fallacy describes the tendency for people to overestimate their rate of work or to underestimate how long it will take them to get things done. It is strongest for long and complicated tasks, and disappears or reverses for simple tasks that are quick to complete.
Wishful-thinking effects, in which people overestimate the likelihood of an event because of its desirability, are relatively rare. This may be in part because people engage in more defensive pessimism in advance of important outcomes, in an attempt to reduce the disappointment that follows overly optimistic predictions.
Overprecision is the excessive confidence that one knows the truth. For reviews, see Harvey (1997) or Hoffrage (2004). Much of the evidence for overprecision comes from studies in which participants are asked about their confidence that individual items are correct. This paradigm, while useful, cannot distinguish overestimation from overprecision; they are one and the same in these item-confidence judgments. After making a series of item-confidence judgments, if people try to estimate the number of items they got right, they do not tend to systematically overestimate their scores. Yet the sum of their item-confidence judgments exceeds the count of items they claim to have gotten right. One possible explanation is that the item-confidence judgments were inflated by overprecision rather than by systematic overestimation.
The strongest evidence of overprecision comes from studies in which participants are asked to indicate how precise their knowledge is by specifying a 90% confidence interval around estimates of specific quantities. If people were perfectly calibrated, their 90% confidence intervals would include the correct answer 90% of the time. In fact, hit rates are often as low as 50%, suggesting people have drawn their confidence intervals too narrowly, implying that they think their knowledge is more accurate than it actually is.
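The interval version of this test is simple to score: count how often the stated 90% intervals actually contain the true value. A small sketch follows; the intervals and answers are made-up stand-ins for real almanac-style quantities.

```python
def hit_rate(intervals, truths):
    """Fraction of stated (low, high) intervals that contain the truth."""
    hits = sum(1 for (lo, hi), t in zip(intervals, truths) if lo <= t <= hi)
    return hits / len(intervals)

# Ten intervals a respondent claimed were 90% confidence intervals.
# Five are wide enough to contain the answer; five are drawn too narrowly.
stated = [(0, 100)] * 5 + [(40, 45)] * 5
answers = [50] * 10
observed = hit_rate(stated, answers)  # 0.5, far below the nominal 0.9
```

A hit rate well below the nominal confidence level, as in the studies cited above, is the signature of overprecision.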
Overplacement is perhaps the most prominent manifestation of the overconfidence effect. Overplacement is a judgment of one's own performance compared with that of another. This form of overconfidence occurs when people believe themselves to be better than others, or "better than average": the act of placing or rating oneself above others. Overplacement occurs more often on simple tasks, ones we believe are easy to accomplish successfully. One explanation for the effect is the motivation to self-enhance.
Perhaps the most celebrated better-than-average finding is Svenson’s (1981) finding that 93% of American drivers rate themselves as better than the median. The frequency with which school systems claim their students outperform national averages has been dubbed the “Lake Wobegon” effect, after Garrison Keillor’s apocryphal town in which “all the children are above average.” Overplacement has likewise been documented in a wide variety of other circumstances. Kruger (1999), however, showed that this effect is limited to “easy” tasks in which success is common or in which people feel competent. For difficult tasks, the effect reverses itself and people believe they are worse than others.
Some researchers have claimed that people think good things are more likely to happen to them than to others, whereas bad events were less likely to happen to them than to others. But others have pointed out that prior work tended to examine good outcomes that happened to be common (such as owning one’s own home) and bad outcomes that happened to be rare (such as being struck by lightning). Event frequency accounts for a proportion of prior findings of comparative optimism. People think common events (such as living past 70) are more likely to happen to them than to others, and rare events (such as living past 100) are less likely to happen to them than to others.
Taylor and Brown (1988) have argued that people cling to overly positive beliefs about themselves, illusions of control, and beliefs in false superiority, because it helps them cope and thrive. Although there is some evidence that optimistic beliefs are correlated with better life outcomes, most of the research documenting such links is vulnerable to the alternative explanation that their forecasts are accurate. The cancer patients who are most optimistic about their survival chances are optimistic because they have good reason to be.
Overconfidence has been called the most “pervasive and potentially catastrophic” of all the cognitive biases to which human beings fall victim. It has been blamed for lawsuits, strikes, wars, and stock market bubbles and crashes.
Strikes, lawsuits, and wars could arise from overplacement. If plaintiffs and defendants were prone to believe that they were more deserving, fair, and righteous than their legal opponents, that could help account for the persistence of inefficient enduring legal disputes. If corporations and unions were prone to believe that they were stronger and more justified than the other side, that could contribute to their willingness to endure labor strikes. If nations were prone to believe that their militaries were stronger than were those of other nations, that could explain their willingness to go to war.
Overprecision could have important implications for investing behavior and stock market trading. Because Bayesians cannot agree to disagree, classical finance theory has trouble explaining why, if stock market traders are fully rational Bayesians, there is so much trading in the stock market. Overprecision might be one answer. If market actors are too sure that their estimates of an asset's value are correct, they will be too willing to trade with others who have different information than they do.
Oskamp (1965) tested groups of clinical psychologists and psychology students on a multiple-choice task in which they drew conclusions from a case study. Along with their answers, subjects gave a confidence rating in the form of a percentage likelihood of being correct. This allowed confidence to be compared against accuracy. As the subjects were given more information about the case study, their confidence increased from 33% to 53%. However their accuracy did not significantly improve, staying under 30%. Hence this experiment demonstrated overconfidence which increased as the subjects had more information to base their judgment on.
Even if there is no general tendency toward overconfidence, social dynamics and adverse selection could conceivably promote it. For instance, those most likely to have the courage to start a new business are those who most overplace their abilities relative to those of other potential entrants. And if voters find confident leaders more credible, then contenders for leadership learn that they should express more confidence than their opponents in order to win election.
Overconfidence can be beneficial to individual self-esteem and can give a person the will to succeed in a desired goal. Simply believing in oneself may give one the will to take one's endeavours further than someone who lacks that belief would.
Very high levels of core self-evaluations, a stable personality trait composed of locus of control, neuroticism, self-efficacy, and self-esteem, may lead to the overconfidence effect. People who have high core self-evaluations will think positively of themselves and be confident in their own abilities, although extremely high levels of core self-evaluations may cause an individual to be more confident than is warranted.
- Pallier, Gerry; Wilkinson, Rebecca; Danthiir, Vanessa; Kleitman, Sabina; Knezevic, Goran; Stankov, Lazar; Roberts, Richard D. (2002). "The Role of Individual Differences in the Accuracy of Confidence Judgments". The Journal of General Psychology 129 (3): 257–299. doi:10.1080/00221300209602099.
- Moore, Don A.; Healy, Paul J. (2008). "The trouble with overconfidence.". Psychological Review 115 (2): 502–517. doi:10.1037/0033-295X.115.2.502.
- Adams, P. A.; Adams, J. K. (1960). "Confidence in the recognition and reproduction of words difficult to spell". The American journal of psychology 73 (4): 544–552. doi:10.2307/1419942. PMID 13681411.
- Lichtenstein, Sarah; Fischhoff, Baruch; Phillips, Lawrence D. (1982). "Calibration of probabilities: The state of the art to 1980". In Kahneman, Daniel; Slovic, Paul; Tversky, Amos. Judgment Under Uncertainty: Heuristics and Biases. Cambridge University Press. pp. 306–334. ISBN 978-0-521-28414-1.
- Langer, Ellen J. (1975). "The illusion of control". Journal of Personality and Social Psychology 32 (2): 311–328. doi:10.1037/0022-3514.32.2.311.
- Buehler, Roger; Griffin, Dale; Ross, Michael (1994). "Exploring the "planning fallacy": Why people underestimate their task completion times". Journal of Personality and Social Psychology 67 (3): 366–381. doi:10.1037/0022-3514.67.3.366.
- Krizan, Zlatan; Windschitl, Paul D. (2007). "The influence of outcome desirability on optimism" (PDF). Psychological Bulletin 133 (1): 95–121. doi:10.1037/0033-2909.133.1.95. PMID 17201572.
- Norem, Julie K.; Cantor, Nancy (1986). "Defensive pessimism: Harnessing anxiety as motivation". Journal of Personality and Social Psychology 51 (6): 1208–1217. doi:10.1037/0022-3514.51.6.1208.
- McGraw, A. Peter; Mellers, Barbara A.; Ritov, Ilana (2004). "The affective costs of overconfidence" (PDF). Journal of Behavioral Decision Making 17 (4): 281–295. doi:10.1002/bdm.472.
- Harvey, Nigel (1997). "Confidence in judgment". Trends in Cognitive Sciences 1 (2): 78–82. doi:10.1016/S1364-6613(97)01014-0.
- Hoffrage, Ulrich (2004). "Overconfidence". In Pohl, Rüdiger. Cognitive Illusions: a handbook on fallacies and biases in thinking, judgement and memory. Psychology Press. ISBN 978-1-84169-351-4.
- Gigerenzer, Gerd (1993). "The bounded rationality of probabilistic mental models". In Manktelow, K. I.; Over, D. E. Rationality: Psychological and philosophical perspectives. London: Routledge. pp. 127–171. ISBN 9780415069557.
- Alpert, Marc; Raiffa, Howard (1982). "A progress report on the training of probability assessors". In Kahneman, Daniel; Slovic, Paul; Tversky, Amos. Judgment Under Uncertainty: Heuristics and Biases. Cambridge University Press. pp. 294–305. ISBN 978-0-521-28414-1.
- Svenson, Ola (1981). "Are we all less risky and more skillful than our fellow drivers?". Acta Psychologica 47 (2): 143–148. doi:10.1016/0001-6918(81)90005-6.
- Cannell, John Jacob (1989). "How public educators cheat on standardized achievement tests: The "Lake Wobegon" report". Friends for Education (Albuquerque, NM).
- Dunning, David (2005). Self-Insight: Roadblocks and Detours on the Path to Knowing Thyself. Psychology Press. ISBN 978-1841690742.
- Kruger, Justin (1999). "Lake Wobegon be gone! The "below-average effect" and the egocentric nature of comparative ability judgments". Journal of Personality and Social Psychology 77 (2): 221–232. doi:10.1037/0022-3514.77.2.221. PMID 10474208.
- Weinstein, Neil D. (1980). "Unrealistic optimism about future life events". Journal of Personality and Social Psychology 39 (5): 806–820. doi:10.1037/0022-3514.39.5.806.
- Chambers, John R.; Windschitl, Paul D. (2004). "Biases in Social Comparative Judgments: The Role of Nonmotivated Factors in Above-Average and Comparative-Optimism Effects". Psychological Bulletin 130 (5): 813–838. doi:10.1037/0033-2909.130.5.813.
- Chambers, John R.; Windschitl, Paul D.; Suls, Jerry (2003). "Egocentrism, Event Frequency, and Comparative Optimism: When what Happens Frequently is "More Likely to Happen to Me"". Personality and Social Psychology Bulletin 29 (11): 1343–1356. doi:10.1177/0146167203256870.
- Kruger, Justin; Burrus, Jeremy (2004). "Egocentrism and focalism in unrealistic optimism (and pessimism)". Journal of Experimental Social Psychology 40 (3): 332–340. doi:10.1016/j.jesp.2003.06.002.
- Taylor, Shelley E.; Brown, Jonathon D. (1988). "Illusion and well-being: A social psychological perspective on mental health". Psychological Bulletin 103 (2): 193–210. doi:10.1037/0033-2909.103.2.193. PMID 3283814.
- Kahneman, Daniel (19 October 2011). "Don't Blink! The Hazards of Confidence". New York Times. Adapted from: Kahneman, Daniel (2011). Thinking, Fast and Slow. Farrar, Straus and Giroux. ISBN 978-1-4299-6935-2.
- Plous, Scott (1993). The Psychology of Judgment and Decision Making. McGraw-Hill Education. ISBN 978-0-07-050477-6.
- Thompson, Leigh; Loewenstein, George (1992). "Egocentric interpretations of fairness and interpersonal conflict" (PDF). Organizational Behavior and Human Decision Processes 51 (2): 176–197. doi:10.1016/0749-5978(92)90010-5.
- Babcock, Linda C.; Olson, Craig A. (1992). "The Causes of Impasses in Labor Disputes". Industrial Relations 31 (2): 348–360. doi:10.1111/j.1468-232X.1992.tb00313.x.
- Johnson, Dominic D. P. (2004). Overconfidence and War: The Havoc and Glory of Positive Illusions. Harvard University Press. ISBN 978-0-674-01576-0.
- Aumann, Robert J. (1976). "Agreeing to Disagree". The Annals of Statistics 4 (6): 1236–1239. doi:10.1214/aos/1176343654.
- Daniel, Kent; Hirshleifer, David; Subrahmanyam, Avanidhar (1998). "Investor Psychology and Security Market Under- and Overreactions". The Journal of Finance 53 (6): 1839–1885. doi:10.1111/0022-1082.00077.
- Oskamp, Stuart (1965). "Overconfidence in case-study judgments" (PDF). Journal of Consulting Psychology 29 (3): 261–265. doi:10.1037/h0022125. Reprinted in Kahneman, Daniel; Slovic, Paul; Tversky, Amos, eds. (1982). Judgment Under Uncertainty: Heuristics and Biases. Cambridge University Press. pp. 287–293. ISBN 978-0-521-28414-1.
- Radzevick, J. R.; Moore, D. A. (2009). "Competing To Be Certain (But Wrong): Social Pressure and Overprecision in Judgment" (PDF). Academy of Management Proceedings 2009 (1): 1–6. doi:10.5465/AMBPP.2009.44246308.
- Fowler, James H.; Johnson, Dominic D. P. (2011-01-07). "On Overconfidence". Seed Magazine. ISSN 1499-0679.
- Judge, Timothy A.; Locke, Edwin A.; Durham, Cathy C. (1997). "The dispositional causes of job satisfaction: A core evaluations approach". Research in Organizational Behavior 19. pp. 151–188. ISBN 978-0762301799.
- Larrick, Richard P.; Burson, Katherine A.; Soll, Jack B. (2007). "Social comparison and confidence: When thinking you're better than average predicts overconfidence (and when it does not)". Organizational Behavior and Human Decision Processes 102 (1): 76–94. doi:10.1016/j.obhdp.2006.10.002.
- Baron, Jonathan (1994). Thinking and Deciding. Cambridge University Press. pp. 219–224. ISBN 0-521-43732-6.
- Gilovich, Thomas; Griffin, Dale; Kahneman, Daniel (2002). Heuristics and Biases: The Psychology of Intuitive Judgment. Cambridge University Press. ISBN 978-0-521-79679-8.
- Sutherland, Stuart (2007). Irrationality. Pinter & Martin. pp. 172–178. ISBN 978-1-905177-07-3.
Simile Teacher Resources
Find Simile educational ideas and activities
Digital Life 102
Catch your pupils' attention by starting class with a quiz about digital media! After going over their answers with a partner, individuals compose similes about the role of digital media in their lives and share these with the class....
9th - 12th Social Studies & History CCSS: Designed
Where the Red Fern Grows Chapter 1 Worksheet
Break down Where the Red Fern Grows by Wilson Rawls into manageable chunks by focusing on plot points and literary elements in specific chapters. This resource is all about the first chapter, and asks pupils to use complete sentences to...
4th - 7th English Language Arts CCSS: Adaptable
Pictures in Words: Poems of Tennyson and Noyes
Young scholars examine how Tennyson and Noyes use words to paint vivid pictures. They read and analyze two poems, complete an online scavenger hunt, complete a worksheet, and write examples of alliteration, personification, metaphor,...
6th - 8th English Language Arts
Simile and Metaphor- Poetry Toolbox
Illustrate the connection between using figurative language (specifically similes and metaphors) and creating poetry. First this worksheet reviews the definition for each, and then writers create the endings to two examples provided....
4th - 6th English Language Arts
Play a figurative language game! Starting with a review of terms, this presentation quickly launches into a quiz game with hyper-linked answers. Simply click an answer to find out if it's wrong or right. The option to try again is always...
7th - 9th English Language Arts CCSS: Adaptable
Setting the Tone with Figurative Language
Explore figurative language with your secondary class. Extending a language arts unit, the lesson prompts middle schoolers to examine how an author's word choice establishes a story's tone, possibly using metaphors, similes,...
8th English Language Arts CCSS: Adaptable
Like Water for Chocolate Guided Reading Worksheet
Planning a novel unit on Like Water for Chocolate by Laura Esquivel? If so, this comprehensive guided reading worksheet might appeal to you. Steer readers through the entire novel with a routine of comprehension questions and analysis...
10th English Language Arts
Here's an Instant Activity for September 19, 2005
Young writers study similes and then complete a writing activity for similes. They complete a teacher-led activity for similes and then work independently to write sentences using the given similes. A solid lesson plan!
1st - 2nd English Language Arts
Narrative Practice: Similes, Metaphors, Personification
Transform boring sentences with figurative language. Class members employ metaphor, simile, and personification to rewrite a series of provided sentences. Pupils can prepare for narrative essays and creative assignments by completing the...
7th - 12th English Language Arts
Deciphering the Mechanics of Poetry
After a review of poetic terms, groups are given an object and they create a poem using a simile, a metaphor, internal rhyme, end rhyme, alliteration, and personification. Groups then exchange objects and repeat the process. Consider...
7th - 9th English Language Arts
Pneumonia may be suspected when the doctor examines the patient and hears coarse breathing or crackling sounds when listening to a portion of the chest with a stethoscope. There may be wheezing or the sounds of breathing may be faint in a particular area of the chest. A chest X-ray is usually ordered to confirm the diagnosis of pneumonia. The lungs have several segments referred to as lobes, usually two on the left and three on the right. When the pneumonia affects one of these lobes, it is often referred to as lobar pneumonia. Some pneumonias have a more patchy distribution that does not involve specific lobes. In the past, when both lungs were involved in the infection, the term "double pneumonia" was used. This term is rarely used today.
Sputum samples can be collected and examined under the microscope. Pneumonia caused by bacteria or fungi can be detected by this examination. A sample of the sputum can be grown in special incubators, and the offending organism can be subsequently identified. It is important to understand that the sputum specimen must contain little saliva from the mouth and be delivered to the laboratory fairly quickly. Otherwise, overgrowth of noninfecting bacteria from the mouth may predominate. As antibiotics have come to be used more broadly and with less control, more organisms are becoming resistant to those commonly prescribed. These types of cultures can help in directing more appropriate therapy.
A blood test that measures white blood cell count (WBC) may be performed. An individual's white blood cell count can often give a hint as to the severity of the pneumonia and whether it is caused by bacteria or a virus. An increased number of neutrophils, one type of WBC, is seen in most bacterial infections, whereas an increase in lymphocytes, another type of WBC, is seen in viral infections, fungal infections, and some bacterial infections (like tuberculosis).
Bronchoscopy is a procedure in which a thin, flexible, lighted viewing tube is inserted into the nose or mouth after a local anesthetic is administered. Using this device, the doctor can directly examine the breathing passages (trachea and bronchi). Simultaneously, samples of sputum or tissue from the infected part of the lung can be obtained.
Sometimes, fluid collects in the pleural space around the lung as a result of the inflammation from pneumonia. This fluid is called a pleural effusion. If a significant amount of fluid develops, it can be removed. After numbing the skin with local anesthetic a needle is inserted into the chest cavity and fluid can be withdrawn and examined under the microscope. This procedure is called a thoracentesis. Often ultrasound is used to prevent complications from this procedure. In some cases, this fluid can become severely inflamed (parapneumonic effusion) or infected (empyema) and may need to be removed by more aggressive surgical procedures. Today, most often, this involves surgery through a tube or thoracoscope. This is referred to as video-assisted thoracoscopic surgery or VATS.
This answer should not be considered medical advice and should not take the place of a doctor's visit. Please see the bottom of the page for more information or visit our Terms and Conditions.
Archived: March 20, 2014
History of Manchuria
Manchuria is a region in East Asia. Depending on the definition of its extent, Manchuria can refer either to a region falling entirely within China or to a larger region today divided between Northeast China and the Russian Far East. To differentiate between the two parts following the latter definition, the Russian part is also known as Outer Manchuria, while the Chinese part is known as Inner Manchuria. It is the homeland of the Manchu people, a designation introduced in 1636 for the Jurchen people, in origin a Tungusic people who took power in 17th-century China, establishing the Qing dynasty, which lasted until 1912. The population grew from about 1 million in 1750 to 5 million in 1850 and 14 million in 1900, largely because of the immigration of Chinese farmers.
Lying at the juncture of the Chinese, Japanese and Russian spheres of influence, Manchuria has been a cockpit of conflict since the late 19th century. The Russian Empire established control over the northern part of Manchuria in 1860 (Beijing Treaty); it built a railway to consolidate its hold. Disputes over Manchuria and Korea led to the Russo-Japanese War of 1904 to 1905. The Japanese invaded Manchuria in 1931, setting up the puppet state of Manchukuo which became a centerpiece of the fast-growing Japanese Empire. The Soviet invasion of Manchuria in 1945 led to the overnight collapse of Japanese rule. Manchuria was a base of operations for the Mao Zedong's People's Liberation Army in the Chinese Civil War, leading to the formation of the People's Republic of China. In the Korean War, Chinese forces used Manchuria as a base to assist North Korea against the UN forces. During the Cold War era, Manchuria became a matter of contention, escalating to the Sino–Soviet border conflict in 1969. The Sino-Russian border dispute was resolved diplomatically only in 2004. In recent years there has been extensive scholarship on Manchuria in the 20th century, while the earlier period is less studied.
At various times in antiquity, the Han, Cao Wei, Western Jin, and Tang dynasties, as well as some other minor Chinese kingdoms, established control over parts of Manchuria. Various kingdoms of mixed proto-Korean and Tungusic ethnicity, such as Gojoseon, Buyeo, Goguryeo and Balhae, were also established in parts of this area.
Manchuria was the homeland of several Tungusic tribes, including the Ulchs and Nani. Various ethnic groups and their respective kingdoms, including the Sushen, Donghu, Xianbei, Wuhuan, Mohe and Khitan have risen to power in Manchuria.
The Finnish linguist Juha Janhunen believes that a "Tungusic-speaking elite" likely ruled Goguryeo and Balhae, describing them as "protohistorical Manchurian states" whose populations were partly Tungusic. He holds that southern Manchuria was the homeland of the Tungusic peoples and has been inhabited continuously by them since ancient times, and he rejects opposing theories of Goguryeo's and Balhae's ethnic composition.
From 698 to 926, the kingdom of Balhae occupied the northern Korean peninsula and parts of Manchuria and Primorsky Krai. Its aristocratic class consisted of people of the recently fallen Goguryeo kingdom of Korea, while the Nanai, the Udege, the Evenks and other descendants of Tungus-speaking peoples formed a lower class. Balhae was an early feudal medieval state of Eastern Asia, which developed its industry, agriculture and animal husbandry, and had its own cultural traditions and art. The people of Balhae maintained political, economic and cultural contacts with the Tang dynasty of China to the south, as well as with Japan.
Primorsky Krai, settled at this time by Northern Mohe tribes, was incorporated into the Balhae Kingdom during King Seon's reign (818–830), bringing Balhae's territory to its greatest extent. After subduing first the Yulou Mohe (Hangul: 우루말갈 Hanja/Hanzi: 虞婁靺鞨 pinyin: Yúlóu Mòhé) and then the Yuexi Mohe (Hangul: 월희말갈 Hanja/Hanzi: 越喜靺鞨 pinyin: Yuèxǐ Mòhé), King Seon administered their territories by creating four prefectures: Solbin Prefecture, Jeongli Prefecture, Anbyeon Prefecture and Anwon Prefecture.
Manchuria under the Liao and Jin
With the Song dynasty to the south, the Khitan people of Western Manchuria, who probably spoke a language related to the Mongolic languages, created the Liao dynasty in the region, which went on to control adjacent parts of Northern China as well.
In the early 12th century the Tungusic Jurchen people (the ancestors of the later Manchu people), who originally lived in the forests in the eastern borderlands of the Liao Empire as Liao tributaries, overthrew the Liao and formed the Jin dynasty (1115–1234). They went on to control parts of Northern China and Mongolia after a series of successful military campaigns. Most of the surviving Khitan either assimilated into the bulk of the Han Chinese and Jurchen population or moved to Central Asia; however, the Daur people, still living in northern Manchuria, are thought to be descendants of the Khitans.
The first Jin capital, Shangjing, located on the Ashi River not far from modern Harbin, was originally not much more than a city of tents, but in 1124 the second Jin emperor, Wuqimai, started a major construction project, having his Chinese chief architect, Lu Yanlun, build a new city at this site, emulating, on a smaller scale, the Northern Song capital Bianjing (Kaifeng). When Bianjing fell to Jin troops in 1127, thousands of captured Song aristocrats (including the two Song emperors), scholars, craftsmen and entertainers, along with the treasures of the Song capital, were all taken to Shangjing (the Upper Capital) by the victors. Although the Jurchen ruler Wanyan Liang, spurred on by his aspirations to become the ruler of all China, moved the Jin capital from Shangjing to Yanjing (now Beijing) in 1153 and had the Shangjing palaces destroyed in 1157, the city regained a degree of significance under Wanyan Liang's successor, Emperor Shizong, who enjoyed visiting the region to get in touch with his Jurchen roots.
The capital of the Jin, Zhongdu, was captured by the Mongols in 1215 at the Battle of Zhongdu. The Jin then moved their capital to Kaifeng, which fell to the Mongols in 1233. In 1234, the Jin dynasty collapsed after the siege of Caizhou. The last emperor of the Jin, Emperor Modi, was killed while fighting the Mongols, who had breached the walls of the city. Days earlier, his predecessor, Emperor Aizong, had committed suicide because he was unable to escape the besieged city.
Manchuria under the Mongols and the Yuan dynasty
In 1211, after the conquest of Western Xia, Genghis Khan mobilized an army to conquer the Jin dynasty. His general Jebe and brother Qasar were ordered to reduce the Jurchen cities in Manchuria, and they successfully destroyed the Jin forts there. The Khitans under Yelü Liuge declared their allegiance to Genghis Khan and established a nominally autonomous state in Manchuria in 1213. However, the Jin dispatched a punitive expedition against them; Jebe returned, and the Mongols pushed the Jin out.
The Jin general Puxian Wannu rebelled against the Jin dynasty and founded the kingdom of Eastern Xia in Dongjing (Liaoyang) in 1215. He assumed the title Tianwang (天王; lit. Heavenly King) and the era name Tiantai (天泰). Puxian Wannu allied with the Mongols in order to secure his position, but revolted in 1222 and fled to an island while the Mongol army invaded Liaoxi, Liaodong, and Khorazm. As a result of internal strife, the Khitans failed to accept Yelü Liuge's rule and revolted against the Mongol Empire. Fearing Mongol pressure, these Khitans fled to Goryeo without permission, but they were defeated by the Mongol–Korean alliance. Genghis Khan (1206–1227) gave his brothers and Muqali Chinese districts in Manchuria.
Ögedei Khan's son Güyük crushed the Eastern Xia dynasty in 1233, pacifying southern Manchuria. Some time after 1234 Ögedei also subdued the Water Tatars in the northern part of the region and began to receive falcons, harems and furs as taxation. The Mongols suppressed a Water Tatar rebellion in 1237. In Manchuria and Siberia, the Mongols used dogsled relays for their yam (postal relay system). The capital city Karakorum directly controlled Manchuria until the 1260s.
During the Yuan dynasty (1271–1368), established by Kublai Khan by renaming his empire to "Great Yuan" in 1271, Manchuria was administered under the Liaoyang province. Descendants of Genghis Khan's brothers such as Belgutei and Hasar ruled the area under the Great Khans. The Mongols eagerly adopted new artillery and technologies. The world's earliest known cannon, dated 1282, was found in Mongol-held Manchuria.
After the expulsion of the Mongols from China, the Jurchen clans remained loyal to Toghan Temür, the last Yuan emperor. In 1375, Naghachu, a Mongol commander of the Mongolia-based Northern Yuan dynasty in Liaoyang province, invaded Liaodong with the aim of restoring the Mongols to power. Although he continued to hold southern Manchuria, Naghachu finally surrendered to the Ming dynasty in 1387. In order to protect its northern border areas and deal with the Yuan remnants there, the Ming decided to "pacify" the Jurchens, but it solidified control only under the Yongle Emperor (1402–1424).
Manchuria during the Ming dynasty
The Ming dynasty took control of Liaoning in 1371, just three years after the expulsion of the Mongols from Beijing. During the reign of the Yongle Emperor in the early 15th century, efforts were made to expand Chinese control throughout all of Manchuria by establishing the Nurgan Regional Military Commission. Mighty river fleets were built in Jilin City and, commanded by the eunuch Yishiha, sailed several times between 1409 and c. 1432 down the Songhua and the Amur all the way to the mouth of the Amur, getting the chieftains of the local tribes to swear allegiance to the Ming rulers.
Soon after the death of the Yongle Emperor, the expansion policy of the Ming was replaced with one of retrenchment in southern Manchuria (Liaodong). Around 1442, a defence wall was constructed to protect the northwestern frontier of Liaodong from a possible threat from the Jurchen-Mongol Oriyanghan. In 1467–68 the wall was expanded to protect the region from the northeast as well, against attacks from the Jianzhou Jurchens. Although similar in purpose to the Great Wall of China, this "Liaodong Wall" was of a simpler design. While stones and tiles were used in some parts, most of the wall was in fact simply an earthen dike with moats on both sides.
Chinese cultural and religious influences such as Chinese New Year, the "Chinese god" and motifs like the dragon, spirals and scrolls, along with material goods and practices like agriculture, husbandry, heating, iron cooking pots, silk and cotton, spread among the Amur natives such as the Udeghes, Ulchis, and Nanais.
Starting in the 1580s, the Jianzhou Jurchen chieftain Nurhaci (1558–1626), originally based in the Hurha River valley northeast of the Ming Liaodong Wall, started to unify the Jurchen tribes of the region. Over the next several decades, the Jurchen (later to be called Manchu) took control over most of Manchuria, the cities of Ming Liaodong falling to them one after another. In 1616, Nurhaci declared himself a khan and founded the Later Jin dynasty (which his successors renamed the Qing dynasty in 1636).
Manchuria during the Qing dynasty
The process of unification of the Jurchen people completed by Nurhaci was followed by his son Hong Taiji's energetic expansion into Outer Manchuria. The conquest of the Amur basin peoples was completed after the defeat of the Evenk chief Bombogor in 1640.
In 1644, the Manchus took Beijing, overthrowing the Ming dynasty, and soon established Qing dynasty rule (1644–1912) over all of China. The Manchus ruled all of China, but they gave their homeland of Manchuria a special status and ruled it separately. The "Banner" system, which in China proper involved military units, originated in Manchuria and was used there as a form of government.
During the Qing dynasty, the area of Manchuria was known as the "three eastern provinces" (東三省, dong san sheng) from 1683, when Jilin and Heilongjiang were separated, although it was not until 1907 that the late Qing government converted them into actual provinces.
For decades the Manchu rulers tried to prevent large-scale immigration of Han Chinese, but they failed and the southern parts developed agricultural and social patterns similar to those of North China. Manchuria's population grew from about 1 million in 1750 to 5 million in 1850 and 14 million in 1900, largely because of the immigration of Chinese farmers. The Manchus became a small element in their homeland, although they retained political control until 1900.
The region was separated from China proper by the Inner Willow Palisade, a ditch and embankment planted with willows intended to restrict the movement of the Han Chinese into Manchuria during the Qing dynasty, as the area was off-limits to the Han until the Qing started colonizing the area with them later on in the dynasty's rule. This movement of the Han Chinese to Manchuria is called Chuang Guandong. The Manchu area was still separated from modern-day Inner Mongolia by the Outer Willow Palisade, which kept the Manchu and the Mongols separate.
However, the Qing rule saw a massive increase of Han Chinese settlement, both legal and illegal, in Manchuria. As Manchu landlords needed the Han peasants to rent their land and grow grain, most Han migrants were not evicted. During the 18th century, Han peasants farmed 500,000 hectares of privately owned land in Manchuria and 203,583 hectares of lands which were part of courier stations, noble estates, and banner lands; in garrisons and towns in Manchuria, the Han Chinese made up 80% of the population. Han farmers were resettled from north China by the Qing to the area along the Liao River in order to restore the land to cultivation.
To the north, the boundary with Russian Siberia was fixed by the Treaty of Nerchinsk (1689) as running along the watershed of the Stanovoy Mountains. South of the Stanovoy Mountains, the basin of the Amur and its tributaries belonged to the Qing Empire. North of the Stanovoy Mountains, the Uda Valley and Siberia belonged to the Russian Empire. In 1858, a weakening Qing Empire was forced to cede Manchuria north of the Amur to Russia under the Treaty of Aigun; however, Qing subjects were allowed to continue to reside, under the Qing authority, in a small region on the now-Russian side of the river, known as the Sixty-Four Villages East of the River.
In 1860, at the Convention of Peking, the Russians managed to acquire a further large slice of Manchuria, east of the Ussuri River. As a result, Manchuria was divided into a Russian half known as "Outer Manchuria", and a remaining Chinese half known as "Inner Manchuria". In modern literature, "Manchuria" usually refers to Inner (Chinese) Manchuria. (cf. Inner and Outer Mongolia). As a result of the Treaties of Aigun and Peking, China lost access to the Sea of Japan. The Qing government began to actively encourage Han Chinese citizens to move into Manchuria since then.
The Manza War in 1868 was the first attempt by Russia to expel Chinese from territory it controlled. Hostilities broke out around Vladivostok when the Russians tried to shut off gold mining operations and expel Chinese workers there. The Chinese resisted a Russian attempt to take Askold Island and, in response, attacked two Russian military stations and three Russian towns, and the Russians failed to oust them. The Russians finally managed to expel the Chinese in 1892.
History after 1860
By the 19th century, Manchu rule had become increasingly sinicized and, along with other borderlands of the Qing Empire such as Mongolia and Tibet, came under the influence of Japan and the European powers as the Qing dynasty grew weaker and weaker.
Russian and Japanese encroachment
Inner Manchuria also came under strong Russian influence with the building of the Chinese Eastern Railway through Harbin to Vladivostok. Some poor Korean farmers moved there. In the Chuang Guandong migration, many Han farmers, mostly from the Shandong Peninsula, moved there as well, attracted by cheap farmland that was ideal for growing soybeans.
During the Boxer Rebellion in 1899–1900, Russian soldiers killed ten thousand Chinese (Manchu, Han Chinese and Daur people) living in Blagoveshchensk and the Sixty-Four Villages East of the River. In revenge, the Chinese Honghuzi conducted guerrilla warfare against the Russian occupation of Manchuria and sided with Japan against Russia during the Russo-Japanese War.
Japan replaced Russian influence in the southern half of Inner Manchuria as a result of the Russo-Japanese War of 1904–1905. Most of the southern branch of the Chinese Eastern Railway (the section from Changchun to Port Arthur (Japanese: Ryojun)) was transferred from Russia to Japan and became the South Manchurian Railway. Jiandao (in the region bordering Korea) was handed over to the Qing dynasty as compensation for the South Manchurian Railway.
From 1911 to 1931 Manchuria was nominally part of the Republic of China. In practice it was controlled by Japan, which worked through local warlords.
Japanese influence extended into Outer Manchuria in the wake of the Russian Revolution of 1917, when Japan took advantage of the disorder to occupy the region, but Soviet successes and American economic pressure forced a Japanese withdrawal, and Outer Manchuria had returned to Soviet control by 1925.
It was reported that in Aihun, Heilongjiang in the 1920s, Banner people, both Manchu and Chinese (Hanjun), would seldom marry Han civilians, but mostly intermarried with each other. Owen Lattimore reported that, during his January 1930 visit to Manchuria, he studied a community in Jilin (Kirin), where both Manchu and Chinese bannermen were settled in a town called Wulakai; eventually the Chinese bannermen there could not be differentiated from Manchus, since they had effectively become Manchufied. The Han civilian population was in the process of absorbing and mixing with them when Lattimore wrote his article.
Manchuria was (and still is) an important region for its rich mineral and coal reserves, and its soil is perfect for soy and barley production. For Japan, Manchuria became an essential source of raw materials.
1931 Japanese invasion and Manchukuo
Around the time of World War I, Zhang Zuolin, a former bandit (Honghuzi) established himself as a powerful warlord with influence over most of Manchuria. He was inclined to keep his Manchu army under his control and to keep Manchuria free of foreign influence. The Japanese tried and failed to assassinate him in 1916. They finally succeeded in June 1928.
Following the Mukden Incident in 1931 and the subsequent Japanese invasion of Manchuria, Inner Manchuria was proclaimed to be Manchukuo, a puppet state under the control of the Japanese army. The last Manchu emperor, Puyi, was then placed on the throne to lead a Japanese puppet government in the Wei Huang Gong, better known as the "Puppet Emperor's Palace". Inner Manchuria was thus detached from China by Japan to create a buffer zone against Russian expansion southward and, with Japanese investment and rich natural resources, became industrially dominant. Under Japanese control Manchuria was one of the most brutally run regions in the world, with a systematic campaign of terror and intimidation against the local Russian and Chinese populations including arrests, organised riots and other forms of subjugation. The Japanese also began a campaign of emigration to Manchukuo; the Japanese population there rose from 240,000 in 1931 to 837,000 in 1939 (the Japanese had a plan to bring in 5 million Japanese settlers). Hundreds of Manchu farmers were evicted and their farms given to Japanese immigrant families. Manchukuo was used as a base to invade the rest of China in 1937–40.
At the end of the 1930s, Manchuria was a trouble spot, with Japan clashing twice with the Soviet Union. These clashes, at Lake Khasan in 1938 and at Khalkhin Gol one year later, resulted in many Japanese casualties. The Soviet Union won both battles and a peace agreement was signed. However, the regional unrest endured.
After World War II
After the atomic bombing of Hiroshima in August 1945, the Soviet Union invaded Inner Manchuria from Soviet Outer Manchuria as part of its declaration of war against Japan. From 1945 to 1948, Inner Manchuria was a base area for the Chinese People's Liberation Army in the Chinese Civil War. With the encouragement of the Soviet Union, Manchuria was used as a staging ground for the Communist Party of China, which emerged victorious in 1949.
During the Korean War of the 1950s, 300,000 soldiers of the Chinese People's Liberation Army crossed the Sino-Korean border from Manchuria to repulse UN forces led by the United States from North Korea.
In the 1960s, Manchuria's border with the Soviet Union became the site of the most serious tension between the Soviet Union and China. The treaties of 1858 and 1860, which ceded territory north of the Amur, were ambiguous as to which course of the river was the boundary. This ambiguity led to dispute over the political status of several islands. This led to armed conflict in 1969, called the Sino-Soviet border conflict.
With the end of the Cold War, this boundary issue was addressed through negotiations. In 2004, Russia agreed to transfer Yinlong Island and one half of Heixiazi Island to China, ending an enduring border dispute. Both islands lie at the confluence of the Amur and Ussuri Rivers and had until then been administered by Russia and claimed by China. The event was meant to foster feelings of reconciliation and cooperation between the two countries, but it has also provoked differing degrees of dissent on both sides. Russians, especially the Cossack farmers of Khabarovsk who would lose their ploughlands on the islands, were unhappy about the apparent loss of territory. Meanwhile, some Chinese criticised the treaty as an official acknowledgement of the legitimacy of Russian rule over Outer Manchuria, which the Qing dynasty had ceded to Imperial Russia under a series of Unequal Treaties, including the Treaty of Aigun in 1858 and the Convention of Peking in 1860, allegedly in exchange for exclusive usage of Russia's rich oil resources. The transfer was carried out on October 14, 2008.
- Janhunen (2006), p. 109.
- Li (2001).
- Tao (1976), pp. 28–32.
- Tao (1976), p. 44.
- Tao (1976), p. 78–79.
- Franke (1994), p. 254.
- Franke (1994), pp. 264–265.
- Shanley (2008), p. 144.
- Atwood (2004), pp. 341–342.
- Berger (2003), p. 25.
- Kamal (2003), p. 76.
- Atwood (2004), p. 354.
- Tsai (1996), pp. 129–130.
- Edmonds (1985), pp. 38–40.
- Forsyth (1994), p. 214.
- Shao (2011), pp. 25-67.
- Clausen & Thøgersen (1995), p. 7.
- Isett (2007), p. 33.
- Richards 2003, p. 141.
- Anderson (2000), p. 504.
- Lomanov (2005), pp. 89–90:
Probably the first clash between the Russians and Chinese occurred in 1868. It was called the Manza War, Manzovskaia voina. "Manzy" was the Russian name for the Chinese population in those years. In 1868, the local Russian government decided to close down goldfields near Vladivostok, in the Gulf of Peter the Great, where 1,000 Chinese were employed. The Chinese decided that they did not want to go back, and resisted. The first clash occurred when the Chinese were removed from Askold Island, in the Gulf of Peter the Great. They organized themselves and raided three Russian villages and two military posts. For the first time, this attempt to drive the Chinese out was unsuccessful.
- "俄军惨屠海兰泡华民五千余人(1900年)". News.163.com. Retrieved 2010-05-18.
- "江东六十四屯" (2008-10-15). Blog.sina.com.cn. Retrieved 2010-05-18.
- Riechers (2001).
- Rhoads (2011), p. 263.
- Lattimore (1933), p. 272.
- Behr (1987), p. 202.
- Behr (1987), p. 168.
- Duara (2006).
- Behr (1987), p. 204.
- Battlefield – Manchuria
- "Handover of Russian islands to China seen as effective diplomacy | Top Russian news and analysis online | 'RIA Novosti' newswire". En.rian.ru. 2008-10-14. Retrieved 2010-05-18.
- Atwood, Christopher Pratt (2004), Encyclopedia of Mongolia and the Mongol Empire, ISBN 0816046719
- Behr, Edward (1987), The Last Emperor, Bantam Books, ISBN 0553344749
- Berger, Patricia Ann (2003), Empire of Emptiness: Buddhist Art and Political Authority in Qing China, University of Hawaii Press, ISBN 0824825632
- Bisher, Jamie (2006), White Terror: Cossack Warlords of the Trans-Siberian, Routledge, ISBN 1135765952
- Clausen, Søren; Thøgersen, Stig (1995), The Making of a Chinese City: History and Historiography in Harbin, M.E. Sharpe, ISBN 1563244764
- Duara, Prasenjit (2006), "The New Imperialism and the Post-Colonial Developmental State: Manchukuo in comparative perspective", The Asia-Pacific Journal: Japan Focus (Japanfocus.org)
- Dvořák, Rudolf (1895), Chinas Religionen, Aschendorff
- Du Halde, Jean-Baptiste (1735), Description géographique, historique, chronologique, politique et physique de l'empire de la Chine et de la Tartarie chinoise IV, Paris: P.G. Lemercier
- Edmonds, Richard Louis (1985), Northern Frontiers of Qing China and Tokugawa Japan: A Comparative Study of Frontier Policy, University of Chicago, Department of Geography, ISBN 0-89065-118-3
- Elliot, Mark C. (2000), "The Limits of Tartary: Manchuria in Imperial and National Geographies", The Journal of Asian Studies 59 (3): 603–646, doi:10.2307/2658945
- Forsyth, James (1994), A History of the Peoples of Siberia: Russia's North Asian Colony 1581-1990, Cambridge University Press, ISBN 0521477719
- Franke, Herbert (1994), "The Chin Dynasty", in Twitchett, Denis C.; Herbert, Franke; Fairbank, John K., The Cambridge History of China, 6, Alien Regimes and Border States, 710–1368, Cambridge University Press, pp. 215–320, ISBN 978-0-521-24331-5
- Garcia, Chad D. (2012), Horsemen from the Edge of Empire: The Rise of the Jurchen Coalition (PDF), University of Washington Press
- Giles, Herbert A. (1912), China and the Manchus
- Hauer, Erich; Corff, Oliver (2007), Handwörterbuch der Mandschusprache (in German), Otto Harrassowitz Verlag, ISBN 3447055286
- Isett, Christopher Mills (2007), State, Peasant, and Merchant in Qing Manchuria, 1644-1862, Stanford University Press, ISBN 0804752710
- Janhunen, Juha (2006), "From Manchuria to Amdo Qinghai: On the ethnic implications of the Tuyuhun Migration", in Pozzi, Alessandra; Janhunen, Juha Antero; Weiers, Michael, Tumen Jalafun Jecen Akū: Manchu Studies in Honor of Giovanni Stary, Otto Harrassowitz Verlag, pp. 107–120, ISBN 344705378X
- Kamal, Niraj (2003), Arise, Asia!: Respond to White Peril, Wordsmiths, ISBN 8187412089
- Kang, Hyeokhweon (2013), "Big Heads and Buddhist Demons: The Korean Military Revolution and Northern Expeditions of 1654 and 1658" (PDF), Emory Endeavors, 4: Transnational Encounters in Asia
- Kim, Loretta (2013), "Saints for Shamans? Culture, Religion and Borderland Politics in Amuria from the Seventeenth to Nineteenth Centuries", Central Asiatic Journal 56: 169–202
- Lattimore, Owen (1933), "Wulakai Tales from Manchuria", The Journal of American Folklore 46 (181): 272–286, doi:10.2307/535718
- Li, Linhua (2001), DNA Match Solves Ancient Mystery, China.org.cn, retrieved 2010-05-18
- Lomanov, Alexander V. (2005), "On the periphery of the 'Clash of Civilizations?' Discourse and geopolitics in Russo-Chinese Relations", in Nyíri, Pál; Breidenbach, Joana, China Inside Out: Contemporary Chinese Nationalism and Transnationalism, Central European University Press, pp. 77–98, ISBN 963-7326-14-6
- McCormack, Gavan (1977), Chang Tso-lin in Northeast China, 1911-1928: China, Japan, and the Manchurian Idea, Stanford University Press, ISBN 0804709459
- Miyawaki-Okada, Junko (2006), "What 'Manchu' was in the beginning and when it grows into a place-name", in Pozzi, Alessandra; Janhunen, Juha Antero; Weiers, Michael, Tumen Jalafun Jecen Akū: Manchu Studies in Honor of Giovanni Stary, Otto Harrassowitz Verlag, pp. 159–170, ISBN 344705378X
- P'an, Chao-ying (1938), American Diplomacy Concerning Manchuria, The Catholic University of America
- Reardon-Anderson, James (2000), "Land Use and Society in Manchuria and Inner Mongolia during the Qing Dynasty", Environmental History 5 (4): 503–276, doi:10.2307/3985584
- Rhoads, Edward J.M. (2011), Manchus and Han: Ethnic Relations and Political Power in Late Qing and Early Republican China, 1861–1928, University of Washington Press, ISBN 0295804122
- Riechers, Maggie (2001), "Fleeing Revolution: How White Russians, Academics, and Others Found an Unlikely Path to Freedom", Humanities (NEH.gov) 22 (3), retrieved 2015-06-06
- Scharping, Thomas (1998), "Minorities, Majorities and National Expansion: The History and Politics of Population Development in Manchuria 1610-1993", Cologne China Studies Online – Working Papers on Chinese Politics, Economy and Society (Kölner China-Studien Online – Arbeitspapiere zu Politik, Wirtschaft und Gesellschaft Chinas) (Modern China Studies, Chair for Politics, Economy and Society of Modern China, at the University of Cologne)
- Sewell, Bill (2003), "Postwar Japan and Manchuria", in Edgington, David W., Japan at the Millennium: Joining Past and Future, University of British Columbia Press, ISBN 0774808993
- Shanley, Tom (2008), Dominion: Dawn of the Mongol Empire, ISBN 978-0-615-25929-1
- Shao, Dan (2011), Remote Homeland, Recovered Borderland: Manchus, Manchoukuo, and Manchuria, 1907–1985, University of Hawaii Press, ISBN 0824834453
- Smith, Norman (2012), Intoxicating Manchuria: Alcohol, Opium, and Culture in China's Northeast, University of British Columbia Press, ISBN 077482431X
- Stephan, John J. (1996), The Russian Far East: A History, Stanford University Press, ISBN 0804727015
- Tamanoi, Mariko Asano (2000), "Knowledge, Power, and Racial Classification: The "Japanese" in "Manchuria"", The Journal of Asian Studies 59 (2): 248–276, doi:10.2307/2658656
- Tao, Jing-shen (1976), The Jurchen in Twelfth Century China, University of Washington Press, ISBN 0-295-95514-7
- Tatsuo, Nakami (2007), "The Great Game Revisited", in Wolff, David; Marks, Steven G.; Menning, Bruce W.; Schimmelpenninck van der Oye, David; Steinberg, John W.; Shinji, Yokote, The Russo-Japanese War in Global Perspective II, Brill, pp. 513–529, ISBN 9004154167
- Tsai, Shih-shan Henry (1996), The Eunuchs in the Ming Dynasty, SUNY Press, ISBN 0-7914-2687-4
- Wu, Shuhui (1995), Die Eroberung von Qinghai unter Berücksichtigung von Tibet und Khams, 1717-1727: Anhand der Throneingaben des Grossfeldherrn Nian Gengyao (in German), Otto Harrassowitz Verlag, ISBN 3447037563
- Zhao, Gang (2006), "Reinventing China: Imperial Qing Ideology and the Rise of Modern Chinese National Identity in the Early Twentieth Century", Modern China 36 (3): 3–30, doi:10.1177/0097700405282349
- Allsen, Thomas (1994). "The rise of the Mongolian empire and Mongolian rule in north China". In Denis C. Twitchett; Herbert Franke; John King Fairbank. The Cambridge History of China: Volume 6, Alien Regimes and Border States, 710–1368. Cambridge University Press. pp. 321–413. ISBN 978-0-521-24331-5.
- Crossley, Pamela Kyle. The Manchus (2002) excerpt and text search; review
- Im, Kaye Soon. "The Development of the Eight Banner System and its Social Structure," Journal of Social Sciences & Humanities (1991), Issue 69, pp 59–93
- Lattimore, Owen. Manchuria: Cradle of Conflict (1932).
- Matsusaka, Yoshihisa Tak. The Making of Japanese Manchuria, 1904-1932 (Harvard East Asian Monographs, 2003)
- Mitter, Rana. The Manchurian Myth: Nationalism, Resistance, and Collaboration in Modern China (2000).
- Sun, Kungtu C. The economic development of Manchuria in the first half of the twentieth century (Harvard U.P. 1969, 1973), 123pp
- Tamanoi, Mariko, ed. Crossed Histories: Manchuria in the Age of Empire (2005); p 213; specialized essays by scholars
- Yamamuro, Shin'ichi. Manchuria under Japanese Dominion (U. of Pennsylvania Press, 2006); 335 pages; translation of highly influential Japanese study; excerpt and text search
- review in The Journal of Japanese Studies 34.1 (2007) pp 109–114 online
- Young, Louise (1998). Japan's Total Empire: Manchuria and the Culture of Wartime Imperialism. U. of California Press. | https://en.wikipedia.org/wiki/History_of_Manchuria |
English number words include numerals and various words derived from them, as well as a large number of words borrowed from other languages.
| 1 | one | 11 | eleven | 10 | ten |
| 2 | two | 12 | twelve (a dozen) | 20 | twenty (a score) |
| 3 | three | 13 | thirteen | 30 | thirty |
| 4 | four | 14 | fourteen | 40 | forty (no "u") |
| 5 | five | 15 | fifteen (note "f", not "v") | 50 | fifty (note "f", not "v") |
| 6 | six | 16 | sixteen | 60 | sixty |
| 7 | seven | 17 | seventeen | 70 | seventy |
| 8 | eight | 18 | eighteen (only one "t") | 80 | eighty (only one "t") |
| 9 | nine | 19 | nineteen | 90 | ninety (note the "e") |
If a number is in the range 21 to 99, and the second digit is not zero, one typically writes the number as two words separated by a hyphen.
In English, the hundreds are perfectly regular, except that the word hundred remains in its singular form regardless of the number preceding it.
So too are the thousands, with the number of thousands followed by the word "thousand".
| 10,000 | ten thousand or (rarely used) a myriad |
| 100,000 | one hundred thousand or one lakh (Indian English) |
| 999,000 | nine hundred and ninety-nine thousand (British English, Irish English, Australian English, and New Zealand English); nine hundred ninety-nine thousand (American English) |
| 10,000,000 | ten million or one crore (Indian English) |
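These naming rules (hyphenated 21-99, singular "hundred" and "thousand", the optional British "and") are regular enough to sketch in code. The following is an illustrative sketch, not a library implementation; the function names are my own, and it covers only numbers below one million.

```python
ONES = ["zero", "one", "two", "three", "four", "five", "six", "seven",
        "eight", "nine", "ten", "eleven", "twelve", "thirteen", "fourteen",
        "fifteen", "sixteen", "seventeen", "eighteen", "nineteen"]
TENS = ["", "", "twenty", "thirty", "forty", "fifty",
        "sixty", "seventy", "eighty", "ninety"]

def to_words(n, british=True):
    """Name 0 <= n < 1,000,000; British style inserts "and" after "hundred"."""
    if n < 20:
        return ONES[n]
    if n < 100:
        tens, units = divmod(n, 10)
        # 21-99 with a non-zero units digit are hyphenated: "twenty-one".
        return TENS[tens] + ("-" + ONES[units] if units else "")
    if n < 1000:
        hundreds, rest = divmod(n, 100)
        head = ONES[hundreds] + " hundred"   # "hundred" stays singular
        if not rest:
            return head
        return head + (" and " if british else " ") + to_words(rest, british)
    thousands, rest = divmod(n, 1000)
    head = to_words(thousands, british) + " thousand"  # "thousand" stays singular
    if not rest:
        return head
    joiner = " and " if british and rest < 100 else " "
    return head + joiner + to_words(rest, british)
```

For example, `to_words(373)` gives the British reading and `to_words(373, british=False)` the American one.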
In American usage, four-digit numbers with non-zero hundreds are often named using multiples of "hundred" and combined with tens and ones: "One thousand one", "Eleven hundred three", "Twelve hundred twenty-five", "Four thousand forty-two", or "Ninety-nine hundred ninety-nine." In British usage, this style is common for multiples of 100 between 1,000 and 2,000 (e.g. 1,500 as "fifteen hundred") but not for higher numbers.
Americans may pronounce four-digit numbers with non-zero tens and ones as pairs of two-digit numbers without saying "hundred", inserting "oh" for zero tens: "twenty-six fifty-nine" or "forty-one oh five". This usage probably evolved from the distinctive reading of years ("nineteen eighty-one"), or from four-digit numbers in the American telephone numbering system, which were originally two letters followed by a number followed by a four-digit number, and later a three-digit number followed by the four-digit number. It is avoided for numbers less than 2500 if the context could cause confusion with a time of day: "ten ten" or "twelve oh four".
Intermediate numbers are read differently depending on their use. Their typical naming occurs when the numbers are used for counting. Another way is for when they are used as labels. The second column method is used much more often in American English than British English. The third column is used in British English but rarely in American English (although the use of the second and third columns is not necessarily directly interchangeable between the two regional variants). In other words, British English and American English can seemingly agree, but it depends on a specific situation (in this example, bus numbers).
| Number | Common British vernacular | Common American vernacular | Common British vernacular |
| | "How many marbles do you have?" | "What is your house number?" | "Which bus goes to the high street?" |
| 101 | "A hundred and one." | "One-oh-one." (here, "oh" is used for the digit zero) | "One-oh-one." |
| 109 | "A hundred and nine." | "One-oh-nine." | "One-oh-nine." |
| 110 | "A hundred and ten." | "One-ten." | "One-one-oh." |
| 117 | "A hundred and seventeen." | "One-seventeen." | "One-one-seven." |
| 120 | "A hundred and twenty." | "One-twenty." | "One-two-oh", "One-two-zero." |
| 152 | "A hundred and fifty-two." | "One-fifty-two." | "One-five-two." |
| 208 | "Two hundred and eight." | "Two-oh-eight." | "Two-oh-eight." |
| 334 | "Three hundred and thirty-four." | "Three-thirty-four." | "Three-three-four." |
Note: When writing a cheque (or check), the number 100 is always written "one hundred". It is never "a hundred".
In American English, many students are taught not to use the word and anywhere in the whole part of a number, so it is not used before the tens and ones. It is instead used as a verbal delimiter when dealing with compound numbers. Thus, instead of "three hundred and seventy-three", one would say "three hundred seventy-three". Despite this rule, the and is used by some Americans in reading numbers containing tens and ones as an alternative variant. For details, see American and British English differences.
For numbers above a million, there are two different systems for naming numbers in English (for the use of prefixes such as kilo- for a thousand, mega- for a million, milli- for a thousandth, etc. see SI units):
- the long scale (decreasingly used in British English) designates a system of numeric names in which a thousand million is called a "milliard" (but the latter usage is now rare), and "billion" is used for a million million.
- the short scale (always used in American English and increasingly in British English) designates a system of numeric names in which a thousand million is called a "billion", and the word "milliard" is not used.
| Number | Short scale | Long scale | Indian (or South Asian) English |
| 1,000,000 (10^6) | one million | one million | ten lakh |
| 1,000,000,000 (10^9) | one billion (a thousand million) | one milliard (a thousand million) | one hundred crore |
| 1,000,000,000,000 (10^12) | one trillion (a thousand billion) | one billion (a million million) | one lakh crore |
| 1,000,000,000,000,000 (10^15) | one quadrillion (a thousand trillion) | one billiard (a thousand billion) | ten crore crore |
| 1,000,000,000,000,000,000 (10^18) | one quintillion (a thousand quadrillion) | one trillion (a million billion) | ten thousand crore crore |
| 1,000,000,000,000,000,000,000 (10^21) | one sextillion (a thousand quintillion) | one trilliard (a thousand trillion) | one crore crore crore |
The numbers past a trillion in the short-scale system, in ascending powers of ten, are as follows: quadrillion, quintillion, sextillion, septillion, octillion, nonillion, decillion, undecillion, duodecillion, tredecillion, quattuordecillion, and quindecillion (that is 10 to the 48th, or a one followed by 48 zeros). The highest number listed on Robert Munafo's table is a milli-millillion, which is 10 to the 3,000,003rd.
Although British English has traditionally followed the long-scale numbering system, the short-scale usage has become increasingly common in recent years. For example, the UK Government and BBC websites use the newer short-scale values exclusively.
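The short-scale pattern is regular enough to compute: each successive "-illion" multiplies by a thousand, so the nth name after "thousand" labels 10^(3n+3), while on the long scale the same word names the square of its short-scale value below it, i.e. 10^(6n). A hedged sketch (function names my own, names taken from the lists above):

```python
# Short-scale names in order; index i names 10**(3*i + 3).
ILLIONS = ["thousand", "million", "billion", "trillion", "quadrillion",
           "quintillion", "sextillion", "septillion", "octillion",
           "nonillion", "decillion", "undecillion", "duodecillion",
           "tredecillion", "quattuordecillion", "quindecillion"]

def short_scale_name(exponent):
    """Name 10**exponent on the short scale, for exponent = 3, 6, ..., 48."""
    if exponent % 3 != 0 or not 3 <= exponent <= 3 * len(ILLIONS):
        raise ValueError("only multiples of 3 from 3 to 48 are covered here")
    return ILLIONS[exponent // 3 - 1]

def long_scale_exponent(short_name):
    """On the long scale the nth '-illion' word names 10**(6*n) instead."""
    n = ILLIONS.index(short_name)
    if n == 0:
        raise ValueError("'thousand' means 10**3 on both scales")
    return 6 * n
```

So `short_scale_name(9)` is "billion", while `long_scale_exponent("billion")` is 12, which is the ambiguity the two scales create.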
The terms arab, kharab, padm and shankh are more commonly found in older texts on Indian mathematics.
Here are some approximate composite large numbers in American English:
| 1,200,000 | 1.2 million | one point two million |
| 3,000,000 | 3 million | three million |
| 250,000,000 | 250 million | two hundred fifty million |
| 6,400,000,000 | 6.4 billion | six point four billion |
| 23,380,000,000 | 23.38 billion | twenty-three point three eight billion |
Often, large numbers are written with (preferably non-breaking) half-spaces or thin spaces separating the thousands (and, sometimes, with normal spaces or apostrophes) instead of commas—to ensure that confusion is not caused in countries where a decimal comma is used. Thus, a million is often written 1 000 000.
A few numbers have special names (in addition to their regular names):
- 0: has several other names, depending on context:
- zero: formal scientific usage
- naught / nought: mostly British usage
- aught: Mostly archaic but still occasionally used when a digit in mid-number is 0 (as in "thirty-aught-six", the .30-06 Springfield rifle cartridge and by association guns that fire it)
- oh: used when spelling numbers (like telephone, bank account, bus line [British: bus route])
- nil: in general sport scores, British usage ("The score is two–nil.")
- nothing: in general sport scores, American usage ("The score is two–nothing.")
- null: used technically to refer to an object or idea related to nothingness. The 0th aleph number (ℵ₀) is pronounced "aleph-null".
- love: in tennis, badminton, squash and similar sports (origin disputed, often said to come from French l'œuf, "egg"; but the Oxford English Dictionary mentions the phrase for love, meaning nothing is at risk)
- zilch, nada (from Spanish), zip: used informally when stressing nothingness; this is true especially in combination with one another ("You know nothing—zero, zip, nada, zilch!"); American usage
- nix: also used as a verb; mostly American usage
- cypher / cipher: archaic, from French chiffre, in turn from Arabic sifr, meaning zero
- goose egg (informal)
- duck (used in cricket when a batsman is dismissed without scoring)
- blank the half of a domino tile with no pips
- 1:
- ace in certain sports and games, as in tennis or golf, indicating success with one stroke, and the face of a die, playing card or domino half with one pip
- birdie in golf denotes one stroke less than par, and bogey, one stroke more than par
- linear the degree of a polynomial is 1; also for explicitly denoting the first power of a unit: linear meter
- unity in mathematics
- protagonist first actor in theater of Ancient Greece
- 2:
- brace, from Old French "arms" (the plural of arm), as in "what can be held in two arms".
- deuce the face of a die, playing card or domino half with two pips
- eagle in golf denotes two strokes less than par
- quadratic the degree of a polynomial is 2
- also square or squared for denoting the second power of a unit: square meter or meter squared
- penultimate, second from the end
- deuteragonist second actor in theater of Ancient Greece
- 3:
- trey the face of a die or playing card with three pips, a three-point field goal in basketball, nickname for the third carrier of the same personal name in a family
- trips: three-of-a-kind in a poker hand. A player has three cards with the same numerical value
- cubic the degree of a polynomial is 3
- also cube or cubed for denoting the third power of a unit: cubic meter or meter cubed
- albatross in golf denotes three strokes less than par. Sometimes called double eagle
- hat-trick or hat trick: achievement of three feats in sport or other contexts
- antepenultimate third from the end
- tritagonist third actor in theater of Ancient Greece
- turkey in bowling, three consecutive strikes
- 4:
- cater: (rare) the face of a die or playing card with four pips
- quartic or biquadratic the degree of a polynomial is 4
- quad (short for quadruple or the like) several specialized sets of four, such as four of a kind in poker, a carburetor with four inputs, etc.
- condor in golf denotes four strokes less than par
- preantepenultimate fourth from the end
- 5:
- cinque or cinq (rare) the face of a die or playing card with five pips
- nickel (informal American, from the value of the five-cent US nickel, but applied in non-monetary references)
- quintic the degree of a polynomial is 5
- quint (short for quintuplet or the like) several specialized sets of five, such as quintuplets, etc.
- 11: a banker's dozen
- 12: a dozen (first power of the duodecimal base), used mostly in commerce
- 13: a baker's dozen
- 20: a score (first power of the vigesimal base), nowadays archaic; famously used in the opening of the Gettysburg Address: "Four score and seven years ago..." The Number of the Beast in the King James Bible is rendered "Six hundred threescore and six". Also in The Book of Common Prayer, Psalm 90 as used in the Burial Service - "The days of our age are threescore years and ten; ...."
- 50: half a century, literally half of a hundred, usually used in cricket scores. Normally referred to as a 'half-century' without the 'a'.
- 60: a shock: historical commercial count, described as "three scores".
- 110: eleventy (as 11 × 10)
- 120:
- A great hundred or long hundred (twelve tens; as opposed to the small hundred, i.e. 100 or ten tens), also called small gross (ten dozens), both archaic
- Also sometimes referred to as duodecimal hundred, although that could literally also mean 144, which is twelve squared
- Twelfty or twelvety (as 12 × 10)
- 144: a gross (a dozen dozens, second power of the duodecimal base), used mostly in commerce
- 1,000:
- a grand, colloquially used especially when referring to money, also in fractions and multiples, e.g. half a grand, two grand, etc. Grand can also be shortened to "G" in many cases.
- K, originally from the abbreviation of kilo-, e.g. "He only makes $20K a year."
- 1728: a great gross (a dozen gross, third power of the duodecimal base), used mostly in commerce
- 10,000: a myriad (a hundred hundred), commonly used in the sense of an indefinite very high number
- 100,000: a lakh (a hundred thousand), loanword used mainly in Indian English
- 10,000,000: a crore (a hundred lakh), loanword used mainly in Indian English
- 10^100: googol (1 followed by 100 zeros), used in mathematics; not to be confused with the name of the company Google (which was originally a misspelling of googol)
- 10^googol: googolplex (1 followed by a googol of zeros)
- 10^googolplex: googolplexplex (1 followed by a googolplex of zeros)
Combinations of numbers in most sports scores are read as in the following examples:
- 1–0 British English: one-nil; American English: one-nothing, one-zip, or one-zero
- 0–0 British English: nil-nil, or more rarely nil all; American English: zero-zero or nothing-nothing, (occasionally scoreless or no score)
- 2–2 two-two or two all; American English also twos, two to two, even at two, or two up.
Naming conventions for tennis scores (and those of related sports) differ from other sports.
A few numbers have specialised multiplicative forms that express how many times some event happens (adverbs): once, twice, thrice.

Compare these with the specialist multiplicative numbers that express how many times some thing exists (adjectives): single, double, triple.
Other examples are given in the Specialist Numbers.
The name of a negative number is the name of the corresponding positive number preceded by "minus" or (American English) "negative". Thus −5.2 is "minus five point two" or "negative five point two". For temperatures, North Americans colloquially say "below" — short for "below zero" — so a temperature of −5° is "five below" (in contrast, for example, to "two above" for 2°, occasionally used for emphasis when referring to several temperatures or ranges both positive and negative. This is particularly common in Canada where the use of Celsius in weather forecasting means that temperatures can regularly drift above and below zero at certain times of year.)
Ordinal numbers refer to a position in a series. Common ordinals include:
| 0th | zeroth or noughth (see below) | 10th | tenth |
| 1st | first | 11th | eleventh |
| 2nd | second | 12th | twelfth (note "f", not "v") | 20th | twentieth |
| 3rd | third | 13th | thirteenth | 30th | thirtieth |
| 4th | fourth | 14th | fourteenth | 40th | fortieth |
| 5th | fifth | 15th | fifteenth | 50th | fiftieth |
| 6th | sixth | 16th | sixteenth | 60th | sixtieth |
| 7th | seventh | 17th | seventeenth | 70th | seventieth |
| 8th | eighth (only one "t") | 18th | eighteenth | 80th | eightieth |
| 9th | ninth (no "e") | 19th | nineteenth | 90th | ninetieth |
Ordinal numbers such as 21st, 33rd, etc., are formed by combining a cardinal ten with an ordinal unit.
Higher ordinals are not often written in words, unless they are round numbers (thousandth, millionth, billionth). They are written using digits and letters as described below. Here are some rules that should be borne in mind.
- The suffixes -th, -st, -nd and -rd are occasionally written superscript above the number itself.
- If the tens digit of a number is 1, then write "th" after the number. For example: 13th, 19th, 112th, 9,311th.
- If the tens digit is not equal to 1, then use the following table:
| If the units digit is: | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 |
| write this after the number: | th | st | nd | rd | th | th | th | th | th | th |
- For example: 2nd, 7th, 20th, 23rd, 52nd, 135th, 301st.
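These suffix rules are easy to mechanise. A small illustrative sketch (the function name is my own):

```python
def ordinal(n):
    """Attach the ordinal suffix for a non-negative integer n."""
    # Numbers whose tens digit is 1 always take "th": 11th, 13th, 112th, 9,311th.
    if n // 10 % 10 == 1:
        return f"{n}th"
    # Otherwise the units digit decides: 1 -> st, 2 -> nd, 3 -> rd, else th.
    return str(n) + {1: "st", 2: "nd", 3: "rd"}.get(n % 10, "th")
```

This reproduces every example above, including the teen exceptions.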
These ordinal abbreviations are actually hybrid contractions of a numeral and a word. 1st is "1" + "st" from "first". Similarly, "nd" is used for "second" and "rd" for "third". In the legal field and in some older publications, the ordinal abbreviation for "second" and "third" is simply "d".
- For example: 42d, 33d, 23d.
NB: The practice of using "d" to denote "second" and "third" is still often followed in the numeric designations of units in the US armed forces, for example, 533d Squadron.
Any ordinal name that doesn't end in "first", "second", or "third", ends in "th".
There are a number of ways to read years. The following table offers a list of valid pronunciations and alternate pronunciations for any given year of the Gregorian calendar.
| Year | Most common pronunciation method | Alternative methods |
| 1 BC | (The year) One Before Christ (BC) | 1 before the Common era (BCE) |
| 1 | (The year) One Anno Domini (AD) | 1 of the Common era (CE); In the year of Our Lord 1 |
| 235 | Two thirty-five | Two hundred (and) thirty-five |
| 911 | Nine eleven | Nine hundred (and) eleven |
| 999 | Nine ninety-nine | Nine hundred (and) ninety-nine |
| 1000 | One thousand | Ten hundred |
| 1004 | One thousand (and) four | Ten oh-four |
| 1010 | Ten ten | One thousand (and) ten |
| 1050 | Ten fifty | One thousand (and) fifty |
| 1225 | Twelve twenty-five | One thousand, two hundred (and) twenty-five |
| 1900 | Nineteen hundred | One thousand, nine hundred |
| 1901 | Nineteen oh-one | Nineteen hundred (and) one; One thousand, nine hundred (and) one; Nineteen aught one |
| 1919 | Nineteen nineteen | Nineteen hundred (and) nineteen; One thousand, nine hundred (and) nineteen |
| 1999 | Nineteen ninety-nine | Nineteen hundred (and) ninety-nine; One thousand, nine hundred (and) ninety-nine |
| 2000 | Two thousand | Twenty hundred |
| 2001 | Two thousand (and) one | Twenty oh-one; Twenty hundred (and) one |
| 2009 | Two thousand (and) nine | Twenty oh-nine; Twenty hundred (and) nine |
| 2010 | Two thousand (and) ten | Twenty hundred (and) ten |
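Most of the common readings above follow a "split the year into two-digit pairs" rule. The sketch below (function names my own) covers four-digit years, reading round thousands as "n thousand"; note that 2001-2009 are more often read "two thousand (and) n" in practice, whereas this sketch yields the "twenty oh-n" alternative for them.

```python
ONES = ["zero", "one", "two", "three", "four", "five", "six", "seven",
        "eight", "nine", "ten", "eleven", "twelve", "thirteen", "fourteen",
        "fifteen", "sixteen", "seventeen", "eighteen", "nineteen"]
TENS = ["", "", "twenty", "thirty", "forty", "fifty",
        "sixty", "seventy", "eighty", "ninety"]

def two_digits(n):
    # Name 0-99 (helper repeated here so the sketch stands alone).
    if n < 20:
        return ONES[n]
    tens, units = divmod(n, 10)
    return TENS[tens] + ("-" + ONES[units] if units else "")

def say_year(y):
    """Read a year 1000-9999 in the common 'pairs' style."""
    hi, lo = divmod(y, 100)
    if y % 1000 == 0:                 # 1000, 2000: "one thousand", "two thousand"
        return ONES[y // 1000] + " thousand"
    if lo == 0:                       # 1900: "nineteen hundred"
        return two_digits(hi) + " hundred"
    if lo < 10:                       # 1901: "nineteen oh-one"
        return two_digits(hi) + " oh-" + ONES[lo]
    return two_digits(hi) + " " + two_digits(lo)
```

For example, `say_year(1981)` gives "nineteen eighty-one" and `say_year(1050)` gives "ten fifty".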
Fractions and decimals
In spoken English, ordinal numbers are also used to quantify the denominator of a fraction. Thus "fifth" can mean the element between fourth and sixth, or the fraction created by dividing the unit into five pieces. In this usage, the ordinal numbers can be pluralized: one seventh, two sevenths. The sole exception to this rule is division by two. The ordinal term "second" can only refer to location in a series; for fractions English speakers use the term 'half' (plural "halves").
| 1/10 or 0.1 | one tenth |
| 2/10 or 0.2 | two tenths |
| 1/4 | one quarter or (mainly American English) one fourth |
| 3/10 or 0.3 | three tenths |
| 4/10 or 0.4 | four tenths |
| 1/2 or 0.5 | one half |
| 6/10 or 0.6 | six tenths |
| 7/10 or 0.7 | seven tenths |
| 3/4 | three quarters or three fourths |
| 8/10 or 0.8 | eight tenths |
| 9/10 or 0.9 | nine tenths |
Alternatively, and for greater numbers, one may say for 1/2 "one over two", for 5/8 "five over eight", and so on. This "over" form is also widely used in mathematics.
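The ordinal-denominator rule, including the "half/halves" exception, can be sketched as follows. This is illustrative only (the names are my own) and covers just the small denominators named above; British "quarter" is used where American English would prefer "fourth".

```python
NUMS = ["zero", "one", "two", "three", "four", "five",
        "six", "seven", "eight", "nine", "ten"]
# Denominator names; American English prefers "fourth" where British has "quarter".
DENOMS = {2: "half", 3: "third", 4: "quarter", 5: "fifth", 6: "sixth",
          7: "seventh", 8: "eighth", 9: "ninth", 10: "tenth"}

def say_fraction(num, den):
    """Name num/den with an ordinal denominator, pluralised when num > 1."""
    unit = DENOMS[den]                     # sketch: denominators 2-10 only
    if num == 1:
        return "one " + unit
    plural = "halves" if den == 2 else unit + "s"   # the sole irregular plural
    return NUMS[num] + " " + plural
```

So `say_fraction(5, 8)` gives "five eighths" and `say_fraction(2, 2)` gives "two halves".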
Numbers with a decimal point may be read as a cardinal number, then "and", then another cardinal number followed by an indication of the significance of the second cardinal number (mainly U.S.); or as a cardinal number, followed by "point", and then by the digits of the fractional part. The indication of significance takes the form of the denominator of the fraction indicating division by the smallest power of ten larger than the second cardinal. This is modified when the first cardinal is zero, in which case neither the zero nor the "and" is pronounced, but the zero is optional in the "point" form of the fraction.
- 0.002 is "two thousandths" (mainly U.S.); or "point zero zero two", "point oh oh two", "nought point zero zero two", etc.
- 3.1416 is "three point one four one six"
- 99.3 is "ninety-nine and three tenths" (mainly U.S.); or "ninety-nine point three".
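The "point" style is purely digit-by-digit after the decimal point, so it is easy to sketch. The illustrative function below (my own naming) handles only single-digit whole parts, enough for the examples above.

```python
DIGITS = ["zero", "one", "two", "three", "four",
          "five", "six", "seven", "eight", "nine"]

def say_decimal(s):
    """Read a decimal string in the 'point' style; whole parts 0-9 only."""
    whole, _, frac = s.partition(".")
    # Each digit after the point is read out individually.
    return DIGITS[int(whole)] + " point " + " ".join(DIGITS[int(d)] for d in frac)
```

For example, `say_decimal("3.1416")` yields "three point one four one six".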
In English the decimal point was originally printed in the center of the line (0·002), but with the advent of the typewriter it was placed at the bottom of the line, so that a single key could be used as a full stop/period and as a decimal point. In many non-English languages a full-stop/period at the bottom of the line is used as a thousands separator with a comma being used as the decimal point.
Fractions together with an integer are read as follows:
- 1 1/2 is "one and a half"
- 6 1/4 is "six and a quarter"
- 7 5/8 is "seven and five eighths"
A space is required between the whole number and the fraction; however, if a special fraction character is used like "½", then the space can be dispensed with, e.g.
- 9 1/2
Whether to use digits or words
With very little deviation, most grammatical texts rule that the numbers zero to nine inclusive should be "written out" – meaning instead of "1" and "2", one would write "one" and "two".
- Example: "I have two apples." (Preferred)
- Example: "I have 2 apples."
After "nine", one can head straight back into the 10, 11, 12, etc., although some write out the numbers until "twelve".
- Example: "I have 28 grapes." (Preferred)
- Example: "I have twenty-eight grapes."
Another common usage is to write out any number that can be expressed as one or two words, and use figures otherwise.
- "There are six million dogs." (Preferred)
- "There are 6,000,000 dogs."
- "That is one hundred and twenty-five oranges." (British English)
- "That is one hundred twenty-five oranges." (US-American English)
- "That is 125 oranges." (Preferred)
Numbers at the beginning of a sentence should also be written out.
The above rules are not always used. In literature, larger numbers might be spelled out. On the other hand, digits might be more commonly used in technical or financial articles, where many figures are discussed. In particular, the two different forms should not be used for figures that serve the same purpose; for example, it is inelegant to write, "Between day twelve and day 15 of the study, the population doubled."
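The simplest of these conventions, spell out zero to nine and use digits from 10 up, can be stated as a one-liner. A rough sketch (function name my own):

```python
SMALL = ["zero", "one", "two", "three", "four",
         "five", "six", "seven", "eight", "nine"]

def style_number(n):
    """Spell out 0-9; render larger numbers as grouped digits."""
    return SMALL[n] if 0 <= n <= 9 else f"{n:,}"
```

This gives "two" for 2 but "28" and "6,000,000" for the larger examples; the one-or-two-word rule and the sentence-start rule above still need a human judgment call.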
Colloquial English has a small vocabulary of empty numbers that can be employed when there is uncertainty as to the precise number to use, but it is desirable to define a general range: specifically, the terms "umpteen", "umpty", and "zillion". These are derived etymologically from the range affixes:
- "-teen" (designating the range as being between 10 and 20)
- "-ty" (designating the range as being in one of the decades between 20 and 100)
- "-illion" (designating the range as being above 1,000,000; or, more generally, as being extremely large).
The prefix "ump-" is added to the first two suffixes to produce the empty numbers "umpteen" and "umpty": it is of uncertain origin. There is a noticeable absence of an empty number in the hundreds range.
Usage of empty numbers:
- The word "umpteen" may be used as an adjective, as in "I had to go to umpteen stores to find shoes that fit." It can also be used to modify a larger number, usually "million", as in "Umpteen million people watched the show; but they still cancelled it."
- "Umpty" is not in common usage. It can appear in the form "umpty-one" (parallelling the usage in such numbers as "twenty-one"), as in "There are umpty-one ways to do it wrong." "Umpty-ump" is also heard, though "ump" is never used by itself.
- The word "zillion" may be used as an adjective, modifying a noun. The noun phrase normally contains the indefinite article "a", as in "There must be a zillion sites on the World Wide Web."
- The plural "zillions" designates a number indefinitely larger than "millions" or "billions". In this case, the construction is parallel to the one for "millions" or "billions", with the number used as a plural count noun, followed by a prepositional phrase with "of", as in "Out in the countryside, the night sky is filled with zillions of stars."
- Empty numbers are sometimes made up, with obvious meaning: "squillions" is obviously an empty, but very large, number; a "squintillionth" would be a very small number.
- Some empty numbers may be modified by actual numbers, such as "four zillion", and are used for jest, exaggeration, or to relate abstractly to actual numbers.
- Empty numbers are colloquial, and primarily used in oral speech or informal contexts. They are inappropriate in formal or scholarly usage.
See also Placeholder name.
- Indefinite and fictitious numbers
- List of numbers
- Long and short scales
- Names of large numbers
- Number prefixes and their derivatives
- Natural number
- "Hat trick, n.". Oxford English Dictionary. Oxford University Press. Retrieved 26 December 2014.
- "Shock, n.2". Oxford English Dictionary. Oxford University Press. Retrieved 26 December 2014.
- What is a partitive numeral?
- Gary Blake and Robert W. Bly, The Elements of Technical Writing, pg. 22. New York: Macmillan Publishers, 1993. ISBN 0020130856
- English Numbers - explanations, exercises and number generator (cardinal and ordinal numbers)

Source: https://en.wikipedia.org/wiki/Names_of_numbers_in_English
South Carolina in the American Civil War
State of South Carolina

- Admission to Confederacy: February 4, 1861 (1st)
- Population: 301,302 free; 402,406 slave
- Forces supplied: 23% of white population
- Major garrisons/armories: Fort Sumter, Charleston Harbor
- Governor: Francis Pickens (1860-1862)
- Senators: Robert Woodward Barnwell, James Lawrence Orr
- Restored to the Union: July 9, 1868
South Carolina was a site of a major political and military importance for the Confederacy during the American Civil War. The white population of the state strongly supported the institution of slavery long before the war, since the 18th century. Political leaders such as Democrats John Calhoun and Preston Brooks had inflamed regional and national passions in support of the institution, and for years before the eventual start of the Civil War in 1861, pro-slavery voices cried for secession.
The Civil War began in South Carolina. On December 20, 1860, South Carolina, having the highest percentage of slaves of any U.S. state at 57% of its population enslaved and 46% of its families owning at least one slave, became the first state to declare that it had secessed from the Union. The first shots of the Civil War (January 9, 1861) were fired in Charleston by its Citadel cadets upon a U.S. civilian merchant ship, Star of the West, bringing supplies to the beleaguered U.S. garrison at Fort Sumter. The April 1861 bombardment of Fort Sumter by South Carolinian forces under the command of General Beauregard—the Confederacy did not yet have a functioning army—is commonly regarded as the beginning of the war.
South Carolina was a source of troops for the Confederate army, and as the war progressed, also for the Union, as thousands of ex-slaves flocked to join the Union forces. The state also provided uniforms, textiles, food, and war material, as well as trained soldiers and leaders from The Citadel and other military schools. In contrast to most other Confederate states, South Carolina had a well-developed rail network linking all of its major cities without a break of gauge. Relatively free from Union occupation until the very end of the war, South Carolina hosted a number of prisoner of war camps. South Carolina also was the only Confederate state not to harbor pockets of anti-secessionist fervor strong enough to send large amounts of white men to fight for the Union, as every other state in the Confederacy did.
Among the leading generals from the Palmetto State were Wade Hampton III, one of the Confederacy's leading cavalrymen, Maxcy Gregg, killed in action at Fredericksburg, Joseph B. Kershaw, whose South Carolina infantry brigade saw some of the hardest fighting of the Army of Northern Virginia and James Longstreet who served in that army under Robert E. Lee and in the Army of Tennessee under Gen. Braxton Bragg.
For decades, South Carolinian political leaders had promoted regional passions with threats of nullification and secession in the name of southern states rights and protection of the interests of the slave power.
Alfred P. Aldrich, a South Carolinian politician from Barnwell, stated that declaring secession would be necessary if a Republican candidate were to win the 1860 U.S. presidential election, stating that it was the only way for the state to preserve slavery and diminish the influence of the anti-slavery Republican Party, which, were its goals of abolition realized, would result in the "destruction of the South":
If the Republican party with its platform of principles, the main feature of which is the abolition of slavery and, therefore, the destruction of the South, carries the country at the next Presidential election, shall we remain in the Union, or form a separate Confederacy? This is the great, grave issue. It is not who shall be President, it is not which party shall rule – it is a question of political and social existence.— Alfred P. Aldrich,
In a January 1860 speech, South Carolinian congressman Laurence Massillon Keitt, summed up this view in an oratory condemning the Republican Party for its anti-slavery views, claiming that slavery was not morally wrong, but rather, justified:
Later that year, in December, Keitt would state that South Carolina's declaring of secession was the direct result of slavery:
On November 9, 1860 the South Carolina General Assembly passed a "Resolution to Call the Election of Abraham Lincoln as U.S. President a Hostile Act" and stated its intention to declare secession from the United States.
In December 1860, amid the secession crisis, former South Carolinian congressman John McQueen wrote to a group of civic leaders in Richmond, Virginia, regarding the reasons as to why South Carolina was contemplating secession from the Union. In the letter, McQueen claimed that U.S. president-elect Abraham Lincoln supported equality and civil rights for African Americans as well as the abolition of slavery, and thus South Carolina, being opposed to such measures, was compelled to secede:
I have never doubted what Virginia would do when the alternatives present themselves to her intelligent and gallant people, to choose between an association with her sisters and the dominion of a people, who have chosen their leader upon the single idea that the African is equal to the Anglo-Saxon, and with the purpose of placing our slaves on equality with ourselves and our friends of every condition! and if we of South Carolina have aided in your deliverance from tyranny and degradation, as you suppose, it will only the more assure us that we have performed our duty to ourselves and our sisters in taking the first decided step to preserve an inheritance left us by an ancestry whose spirit would forbid its being tarnished by assassins. We, of South Carolina, hope soon to greet you in a Southern Confederacy, where white men shall rule our destinies, and from which we may transmit to our posterity the rights, privileges and honor left us by our ancestors.
South Carolinian religious leader James Henley Thornwell also espoused a similar view to McQueen's, stating that slavery was justified under the Christian religion, and thus, those who viewed slavery as being immoral were opposed to Christianity:
The parties in the conflict are not merely abolitionists and slaveholders. They are atheists, socialists, communists, red republicans, Jacobins on the one side, and friends of order and regulated freedom on the other. In one word, the world is the battleground – Christianity and Atheism the combatants; and the progress of humanity at stake.
On November 10, 1860 the S.C. General Assembly called for a "Convention of the People of South Carolina" to consider secession. Delegates were to be elected on December 6. The secession convention convened in Columbia on December 17 and voted unanimously, 169-0, to declare secession from the United States. The convention then adjourned to Charleston to draft an ordinance of secession. When the ordinance was adopted on December 20, 1860, South Carolina became the first slave state in the south to declare that it had seceded from the United States. James Buchanan, the United States president, declared the ordinance illegal but did not act to stop it.
A committee of the convention also drafted a Declaration of the Immediate Causes Which Induce and Justify the Secession of South Carolina which was adopted on December 24. The secession declaration stated the primary reasoning behind South Carolina's declaring of secession from the Union, which was described as:
...increasing hostility on the part of the non-slaveholding States to the Institution of Slavery...

— Declaration of the Immediate Causes Which Induce and Justify the Secession of South Carolina (December 24, 1860)
The declaration also claims that secession was declared as a result of the refusal of free states to enforce the Fugitive Slave Acts. Although the declaration does argue that secession is justified on the grounds of U.S. "encroachments upon the reserved rights of the States," the grievances that the declaration goes on to list are mainly concerned with the property rights of slaveholders. Broadly speaking, the declaration argues that the U.S. Constitution was framed to establish each State "as an equal" in the Union, with "separate control over its own institutions", such as "the right of property in slaves."
We affirm that these ends for which this Government was instituted have been defeated, and the Government itself has been made destructive of them by the action of the non-slaveholding States. Those States have assumed the right of deciding upon the propriety of our domestic institutions; and have denied the rights of property established in fifteen of the States and recognized by the Constitution; they have denounced as sinful the institution of Slavery; they have permitted the open establishment among them of societies, whose avowed object is to disturb the peace and to eloign the property of the citizens of other States. They have encouraged and assisted thousands of our slaves to leave their homes; and those who remain, have been incited by emissaries, books and pictures to servile insurrection.
A repeated concern is runaway slaves. The declaration argues that parts of the U.S. Constitution were specifically written to ensure the return of slaves who had escaped to other states, and quotes the 4th Article: "No person held to service or labor in one State, under the laws thereof, escaping into another, shall, in consequence of any law or regulation therein, be discharged from such service or labor, but shall be delivered up, on claim of the party to whom such service or labor may be due." The declaration goes on to state that this stipulation of the Constitution was so important to the original signers, "that without it that compact [the Constitution] would not have been made." Laws from the "General Government" upheld this stipulation "for many years," the declaration says, but "an increasing hostility on the part of the non-slaveholding States to the Institution of Slavery has led to a disregard of their obligations." Because the constitutional agreement had been "deliberately broken and disregarded by the non-slaveholding States," the consequence was that "South Carolina is released from her obligation" to be part of the Union.
A further concern was the recent election of Lincoln to the presidency; the declaration claimed he desired to see slavery on "the course of ultimate extinction":
A geographical line has been drawn across the Union, and all the States north of that line have united in the election of a man to the high office of President of the United States whose opinions and purposes are hostile to slavery. He is to be entrusted with the administration of the Common Government, because he has declared that that "Government cannot endure permanently half slave, half free," and that the public mind must rest in the belief that Slavery is in the course of ultimate extinction.
The South Carolinian secession declaration of December 1860 also channeled some elements from the U.S. Declaration of Independence from July 1776. However, the South Carolinian version omitted the phrases that "all men are created equal" and "that they are endowed by their Creator with certain unalienable Rights". Professor and historian Harry V. Jaffa noted these omissions as significant in his 2000 book, A New Birth of Freedom: Abraham Lincoln and the Coming of the Civil War:
South Carolina cites, loosely, but with substantial accuracy, some of the language of the original Declaration. That Declaration does say that it is the right of the people to abolish any form of government that becomes destructive of the ends for which it was established. But South Carolina does not repeat the preceding language in the earlier document: 'We hold these truths to be self-evident, that all men are created equal'...
The following day, on December 25, a South Carolinian convention delivered an "Address to the Slaveholding States":
We prefer, however, our system of industry, by which labor and capital are identified in interest, and capital, therefore, protects labor–by which our population doubles every twenty years–by which starvation is unknown, and abundance crowns the land–by which order is preserved by unpaid police, and the most fertile regions of the world, where the white man cannot labor, are brought into usefulness by the labor of the African, and the whole world is blessed by our own productions. ... We ask you to join us, in forming a Confederacy of Slaveholding States.— Convention of South Carolina, Address of the people of South Carolina to the people of the Slaveholding States, (December 25, 1860)
"Slavery, not states' rights, birthed the Civil War," argues sociologist James W. Loewen. Of South Carolina's Declaration of Secession, he writes that
South Carolina was further upset that New York no longer allowed "slavery transit." In the past, if Charleston gentry wanted to spend August in the Hamptons, they could bring their cook along. No longer — and South Carolina's delegates were outraged. In addition, they objected that New England states let black men vote and tolerated abolitionist societies. According to South Carolina, states should not have the right to let their citizens assemble and speak freely when what they said threatened slavery.
Other seceding states echoed South Carolina. "Our position is thoroughly identified with the institution of slavery — the greatest material interest of the world," proclaimed Mississippi in its own secession declaration, passed Jan. 9, 1861. "Its labor supplies the product which constitutes by far the largest and most important portions of the commerce of the earth. . . . A blow at slavery is a blow at commerce and civilization."
The state adopted the palmetto flag as its banner, a slightly modified version of which is used as its current state flag. South Carolina after secession was frequently called the "Palmetto Republic".
After South Carolina declared its secession, former congressman James L. Petigru famously remarked, "South Carolina is too small for a republic and too large for an insane asylum." Soon afterwards, South Carolina began preparing for a presumed U.S. military response while working to convince other southern states to secede as well and join in a confederacy of southern states.
On February 4, 1861, in Montgomery, Alabama, a convention consisting of delegates from South Carolina, Florida, Alabama, Mississippi, Georgia, and Louisiana met to form a new constitution and government modeled on that of the United States. On February 8, 1861, South Carolina officially joined the Confederacy. According to one South Carolinian newspaper editor:
The South is now in the formation of a Slave Republic...— L.W. Spratt, The Philosophy of Secession: A Southern View, (February 13, 1861).
South Carolina's declaring of secession was supported by the state's religious figures, who claimed that it was consistent with their religion:
The triumphs of Christianity rest this very hour upon slavery; and slavery depends on the triumphs of the South... This war is the servant of slavery.— John T. Wightman, The Glory of God, the Defence of the South, (1861).
American Civil War
Six days after secession, on the day after Christmas, Major Robert Anderson, commander of the U.S. troops in Charleston, withdrew his men to the island fortress of Fort Sumter in Charleston Harbor. South Carolina militia swarmed over the abandoned mainland batteries and trained their guns on the island. Sumter was the key position for preventing a naval attack upon Charleston, so secessionists were determined not to allow U.S. forces to remain there indefinitely. More importantly, South Carolina's claim of independence would look empty if U.S. forces controlled its largest harbor. On January 9, 1861, the U.S. ship Star of the West approached to resupply the fort. Cadets from The Citadel, The Military College of South Carolina, fired upon the Star of the West, striking the ship three times and causing it to retreat to New York.
Mississippi declared its secession several weeks after South Carolina, and five other states of the lower South soon followed. Both the outgoing Buchanan administration and President-elect Lincoln denied that any state had a right to secede. On February 4, a congress of the seven seceding states met in Montgomery, Alabama, and approved a new constitution for the Confederate States of America. South Carolina entered the Confederacy on February 8, 1861, fewer than six weeks after declaring itself the independent State of South Carolina.
Upper Southern slave states such as Virginia and North Carolina, which had initially voted against secession, called a peace conference, to little effect. Meanwhile, Virginian orator Roger Pryor barreled into Charleston and proclaimed that the only way to get his state to join the Confederacy was for South Carolina to instigate war with the United States. The obvious place to start was right in the midst of Charleston Harbor.
On April 10, the Mercury reprinted stories from New York papers that told of a naval expedition that had been sent southward toward Charleston. Lincoln advised the governor of South Carolina that the ships were sent to resupply the fort, not to reinforce it. The Carolinians could no longer wait if they hoped to take the fort before the U.S. Navy arrived. About 6,000 men were stationed around the rim of the harbor, ready to take on the 60 men in Fort Sumter. At 4:30 a.m. on April 12, after two days of intense negotiations, and with Union ships approaching the harbor, the firing began. Students from The Citadel were among those firing the first shots of the war, though Edmund Ruffin is usually credited with firing the first shot. Thirty-four hours later, Anderson's men raised the white flag and were allowed to leave the fort with colors flying and drums beating, saluting the U.S. flag with a 50-gun salute before taking it down. During this salute, one of the guns exploded, killing a young soldier—the only casualty of the bombardment and the first casualty of the war.
In December 1861, South Carolina received $100,000 from Georgia after a disastrous fire in Charleston.
The war ends
The Confederacy was at a disadvantage in numbers, weaponry, and maritime skills, as few Southerners had been sailors before the war. Union ships sailed south and blockaded one port after another. As early as November, Union troops occupied the Sea Islands in the Beaufort area, establishing an important base for the men and ships who would obstruct the ports at Charleston and Savannah. When the plantation owners, many of whom had already gone off with the Confederate army elsewhere, fled the area, the Sea Island slaves became the first "freedmen" of the war, and the Sea Islands became the laboratory for Union plans to educate African Americans for their eventual role as full American citizens. Despite South Carolina's important role in the start of the war, and a long unsuccessful attempt to take Charleston from 1863 onward, few military engagements occurred within the state's borders until 1865, when Sherman's army, having already completed its March to the Sea in Savannah, marched to Columbia and leveled most of the town, as well as a number of towns along the way and afterward. South Carolina lost 12,922 men to the war, 23% of its white male population of fighting age, the highest percentage of any state in the nation. Sherman's 1865 march through the Carolinas resulted in the burning of Columbia and numerous other towns, and the destruction his troops wrought upon South Carolina was even worse than in Georgia, because many of his men bore a particular grudge against the state and its citizens, whom they blamed for starting the war. One of Sherman's men declared, "Here is where treason began and, by God, here is where it shall end!" Poverty would mark the state for generations to come.
In January 1865, the Charleston Courier newspaper condemned suggestions that the Confederacy abandon slavery were it to help in gaining independence, stating that such suggestions were "folly":
To talk of maintaining our independence while we abolish slavery is simply to talk folly.— Courier, (January 24, 1865)
On February 21, 1865, with the Confederate forces finally evacuated from Charleston, the black 54th Massachusetts Regiment marched through the city. At a ceremony at which the U.S. flag was once again raised over Fort Sumter, former fort commander Robert Anderson was joined on the platform by two men: African American Union hero Robert Smalls and the son of Denmark Vesey.
Battles in South Carolina
- Battle of Fort Sumter
- Battle of Port Royal
- Battle of Secessionville
- Battle of Simmon's Bluff
- First Battle of Charleston Harbor
- Second Battle of Charleston Harbor
- Second Battle of Fort Sumter
- First Battle of Fort Wagner
- Battle of Grimball's Landing
- Second Battle of Fort Wagner (Morris Island)
- Battle of Honey Hill
- Battle of Tulifinny
- Battle of Rivers' Bridge
- Battle of Anderson County
- Battle of Brattonsville
- Battle of Broxton's Bridge
- Battle of Cheraw
- Battle of Gamble's Hotel (The Columns)
- Battle of Aiken
- Charleston, South Carolina in the American Civil War
- Confederate States of America - animated map of state secession and confederacy
- List of South Carolina Confederate Civil War units
- List of South Carolina Union Civil War units
- Military history of African Americans in the American Civil War
- Origins of the American Civil War
- Slaves and the American Civil War
At a Glance - Quadratic Inequalities
Remember back when we looked at linear inequalities, when we said, "we promise we'll try to make your brain hurt more later"? Well, grab an ice pack and strap in, because now we're going to look at quadratic inequalities.
Solve x^2 – 5x < -4.
When we solve an inequality, what we want are all of the values of x that make the statement true. So our answers won't be single values, but large, sweeping regions of number space. You can then fence off those regions and raise cows on them.
We here at Shmoop love the equal sign. It's a good thing that the first step of solving an inequality is to pretend that the inequality is an equal sign. Set the equation "equal" to zero, and then solve to find the roots of the equation. They'll come in handy in a moment.
x^2 – 5x < -4

x^2 – 5x + 4 < 0

(x – 1)(x – 4) < 0
Okay, our roots are x = 1 and x = 4. So what? Take a look at the graph of this equation.
A parabola is a smooth, continuous curve. The only places that it can possibly change sign (from above zero to below, or vice versa) are at the roots. We'll use this to help us find our solutions.
Hey, waitjustaminutehere! Couldn't we just graph the equation and solve it visually? We could, but there are two good reasons not to. First, it will often be just as or more difficult to graph the equation than it will be to solve it the other way (see: the next sample problem). Second, we can also use this technique to solve all kinds of polynomial inequalities, not just quadratic ones (see: the sample problem after that).
Anyway, back to solving. We'll now set up our roots on a number line, like so.
We now have three regions fenced off. We need to pick a point from each region to check whether the expression is positive or negative within that region. Those regions that are negative will be our solutions. Afterwards we'll put our cows in the positive regions, to boost cow morale.
All the values of x between 1 and 4 will cause the equation to be negative. So our solutions are 1 < x < 4. If you look back at the graph of the equation, you'll see that this is the region where it dips down below zero.
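The sign-testing step above is easy to check with a few lines of Python (our own sketch, not part of the lesson): evaluate x^2 – 5x + 4 at one point in each of the three regions.

```python
# Sketch: sign-test x^2 - 5x + 4 < 0 using one point per region.
def f(x):
    return x**2 - 5*x + 4

# Roots at x = 1 and x = 4 cut the number line into three regions.
for x in [0, 2, 5]:
    print(x, "negative" if f(x) < 0 else "positive")
# Only x = 2 lands in a negative region, so the solution is 1 < x < 4.
```

Any other test points from the same regions would give the same signs, since a parabola can only change sign at its roots.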
Solve -2x^2 ≤ 6x + 1.
We again start off by getting all our stuff on one side of the equation, leaving a big fat zero on the other side of the inequality.
2x^2 + 6x + 1 ≥ 0
Last time we had a nice, factorable equation to work with. Not this time, bucko. Now we need to use the quadratic formula, x = (-b ± √(b^2 – 4ac))/(2a), to find our roots.

Our calculator tells us that these are x = -0.177 and x = -2.823. Now let's set up the number line and check the signs of each region.
We want the regions that are greater than zero, so the solutions are
-∞ < x ≤ -2.823 and -0.177 ≤ x < ∞
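As a sanity check on those decimals, here's a short Python sketch (ours, not Shmoop's) that recomputes the roots with the quadratic formula and then sign-tests each region:

```python
import math

# Sketch: quadratic-formula roots of 2x^2 + 6x + 1 = 0, then a
# sign check of 2x^2 + 6x + 1 >= 0 in each region.
a, b, c = 2, 6, 1
disc = b**2 - 4*a*c
r1 = (-b - math.sqrt(disc)) / (2*a)
r2 = (-b + math.sqrt(disc)) / (2*a)
print(round(r1, 3), round(r2, 3))  # -2.823 -0.177

def g(x):
    return a*x**2 + b*x + c

# Test points: left of r1, between the roots, right of r2.
print(g(-4) >= 0, g(-1) >= 0, g(0) >= 0)  # True False True
```

The outer regions come back non-negative and the middle region negative, matching the solution above.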
So, is your brain starting to hurt? This next one is the last problem here, so stick with it a little longer.
Solve (x + 3)^2(3x^2 – 6) < 0.
This equation definitely isn't quadratic, but the method for finding the solutions is the same. It is even factored already, making things easier than they could have been. The roots are x = -3, x = -√2, and x = √2, and they split the number line into four regions.
The equation is less than zero when -√2 < x < √2. When working with polynomials larger than the quadratics there can be more than two roots, and we need to check the sign of every region. For every inequality, the sign won't necessarily follow a predictable pattern from one root to the next. It's as random as a corn syrup Huckleberry sauce. …How did you like that? Is your brain throbbing with knowledge? That doesn't sound pleasant, but at least you're a bit smarter from the experience.
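As with the quadratic examples, a quick Python sketch (ours, not part of the lesson) can sign-test each region of (x + 3)^2(3x^2 – 6) < 0:

```python
import math

# Sketch: sign-test (x + 3)^2 * (3x^2 - 6) < 0 in each region.
def p(x):
    return (x + 3)**2 * (3*x**2 - 6)

roots = sorted([-3.0, -math.sqrt(2), math.sqrt(2)])
print([round(r, 3) for r in roots])  # [-3.0, -1.414, 1.414]
for x in [-4, -2, 0, 2]:             # one test point per region
    print(x, p(x) < 0)
# Only x = 0 gives True, so the expression is negative only
# between -sqrt(2) and sqrt(2).
```

Notice the sign does not alternate at x = -3: the squared factor (x + 3)^2 touches zero there without crossing it, which is exactly why every region has to be checked.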
Solve the inequality 3x^2 – 8x + 4 > 0.
Solve the inequality -x^2 – 4x > 3.
Solve the inequality -2x^2 + 8x + 8 ≤ 0.
We live in a galaxy known as the Milky Way – a vast conglomeration of 300 billion stars, planets whizzing around them, and clouds of gas and dust floating in between.
Though it has long been known that the Milky Way and its orbiting companion Andromeda are the dominant members of a small group of galaxies, the Local Group, which is about 3 million light years across, much less was known about our immediate neighbourhood in the universe.
Now, a new paper by York University Physics & Astronomy Professor Marshall McCall, published today in the Monthly Notices of the Royal Astronomical Society, maps out bright galaxies within 35-million light years of the Earth, offering up an expanded picture of what lies beyond our doorstep.
"All bright galaxies within 20 million light years, including us, are organized in a 'Local Sheet' 34-million light years across and only 1.5-million light years thick," says McCall. "The Milky Way and Andromeda are encircled by twelve large galaxies arranged in a ring about 24-million light years across – this 'Council of Giants' stands in gravitational judgment of the Local Group by restricting its range of influence."
McCall says twelve of the fourteen giants in the Local Sheet, including the Milky Way and Andromeda, are "spiral galaxies" which have highly flattened disks in which stars are forming. The remaining two are more puffy "elliptical galaxies", whose stellar bulks were laid down long ago. Intriguingly, the two ellipticals sit on opposite sides of the Council. Winds expelled in the earliest phases of their development might have shepherded gas towards the Local Group, thereby helping to build the disks of the Milky Way and Andromeda.
McCall also examined how galaxies in the Council are spinning. He comments: "Thinking of a galaxy as a screw in a piece of wood, the direction of spin can be described as the direction the screw would move (in or out) if it were turned the same way as the galaxy rotates. Unexpectedly, the spin directions of Council giants are arranged around a small circle on the sky. This unusual alignment might have been set up by gravitational torques imposed by the Milky Way and Andromeda when the universe was smaller."
The boundary defined by the Council has led to insights about the conditions which led to the formation of the Milky Way. Most important, only a very small enhancement in the density of matter in the universe appears to have been required to produce the Local Group. To arrive at such an orderly arrangement as the Local Sheet and its Council, it seems that nearby galaxies must have developed within a pre-existing sheet-like foundation comprised primarily of dark matter.
"Recent surveys of the more distant universe have revealed that galaxies lie in sheets and filaments with large regions of empty space called voids in between," says McCall. "The geometry is like that of a sponge. What the new map reveals is that structure akin to that seen on large scales extends down to the smallest."
York University is helping to shape the global thinkers and thinking that will define tomorrow. York U's unwavering commitment to excellence reflects a rich diversity of perspectives and a strong sense of social responsibility that sets us apart. A York U degree empowers graduates to thrive in the world and achieve their life goals through a rigorous academic foundation balanced by real-world experiential education. As a globally recognized research centre, York U is fully engaged in the critical discussions that lead to innovative solutions to the most pressing local and global social challenges. York U's 11 faculties and 27 research centres are thinking bigger, broader and more globally, partnering with 288 leading universities worldwide. York U's community is strong − 55,000 students, 7,000 faculty and staff, and more than 250,000 alumni.
Media Contact: Robin Heron, Media Relations, York University, 416 736 2100 x22097/ [email protected]
Robin Heron | EurekAlert!
The bands of color on a resistor are a code that indicates the magnitude of the resistance of the resistor. There are four color bands identified by letter: A, B, C, and D, with a gap between the C and D bands so that you know which end is A. This particular resistor has a red A band, blue B band, green C band, and gold D band, but the bands can be different colors on different resistors. Based on the colors of the bands, it is possible to identify the value of the resistor. The A and B bands represent significant digits; red is 2 and blue is 6. The C band indicates the multiplier, and green indicates 10^5. These three together indicate that this particular resistor is a 26 × 10^5 Ω, or 2,600,000 Ω, resistor. Finally, the D band indicates the tolerance, in this case 5%, as shown by the gold band. These terms will be explained over the course of this lesson.
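The decoding rule just described can be sketched in a few lines of Python. This is our own illustration, not part of the lesson; the dictionaries follow the standard 4-band color code, under which a green multiplier band means 10^5.

```python
# Sketch (not from the lesson): decode a 4-band resistor using the
# standard color code. Dictionary and function names are our own.
DIGITS = {"black": 0, "brown": 1, "red": 2, "orange": 3, "yellow": 4,
          "green": 5, "blue": 6, "violet": 7, "grey": 8, "white": 9}
MULTIPLIERS = {color: 10**d for color, d in DIGITS.items()}
TOLERANCES = {"gold": 5.0, "silver": 10.0}  # percent

def decode(a, b, c, d):
    """Return (resistance in ohms, tolerance in percent)."""
    ohms = (10 * DIGITS[a] + DIGITS[b]) * MULTIPLIERS[c]
    return ohms, TOLERANCES[d]

# Red-blue-green-gold: 26 x 10^5 = 2,600,000 ohms, +/- 5%.
print(decode("red", "blue", "green", "gold"))
# A 26,000-ohm resistor would instead use an orange (10^3) multiplier:
print(decode("red", "blue", "orange", "gold"))
```

The second call shows how the same two significant digits pair with a different multiplier band to give a different magnitude.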
Resistance and Ohm’s Law
When a potential difference is placed across a metal wire, a large current will flow through the wire. If the same potential difference is placed across a glass rod, almost no current will flow. The property that determines how much current will flow is called the resistance. Resistance is measured by finding the ratio of potential difference, V, to current flow, I: R = V/I.
When given in the form V=IR, this formula is known as Ohm's Law, after the man who discovered the relationship. The units of resistance can be determined using the units of the other terms in the equation, namely that the potential difference is in volts (J/C) and current is in amperes (C/s), so resistance is measured in volts per ampere.
The units for resistance have been given the name ohms and the abbreviation is the Greek letter omega, Ω. 1.00 Ω is the resistance that will allow 1.00 ampere of current to flow through the resistor when the potential difference is 1.00 volt. Most conductors have a constant resistance regardless of the potential difference; these are said to obey Ohm's Law.
There are two ways to control the current in a circuit. Since the current is directly proportional to the potential difference and inversely proportional to the resistance, you can increase the current in a circuit by increasing the potential or by decreasing the resistance.
Example Problem: A 50.0 V battery maintains current through a 20.0 Ω resistor. What is the current through the resistor?
Solution: I = V/R = 50.0 V / 20.0 Ω = 2.50 A
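As a quick check, the example can be computed directly; the function name here is only for illustration.

```python
def current(voltage, resistance):
    """Ohm's Law solved for current: I = V / R."""
    return voltage / resistance

print(current(50.0, 20.0))  # 2.5 (amperes)
```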
- Resistance is the property that determines the amount of current flow through a particular material.
- V=IR is known as Ohm’s Law.
- The unit for resistance is the ohm, and it has the abbreviation Ω.
The following video covers Ohm's Law. Use this resource to answer the questions that follow.
- What happens to current flow when voltage is increased?
- What happens to current flow when resistance is increased?
This website contains instruction and guided practice for Ohm’s Law.
- If the potential stays the same and the resistance decreases, what happens to the current?
- increase
- decrease
- stay the same
- If the resistance stays the same and the potential increases, what happens to the current?
- increase
- decrease
- stay the same
- How much current can be pushed through a 30.0 Ω resistor by a 12.0 V battery?
- What voltage is required to push 4.00 A of current through a 32.0 Ω resistor?
- If a 6.00 volt battery will produce 0.300 A of current in a circuit, what is the resistance in the circuit? | http://www.ck12.org/book/CK-12-Physics-Concepts---Intermediate/r19/section/18.2/ |
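Each of the three practice problems above uses one rearrangement of V = IR; a minimal sketch (function names are illustrative):

```python
def current(V, R):
    """I = V / R"""
    return V / R

def voltage(I, R):
    """V = I * R"""
    return I * R

def resistance(V, I):
    """R = V / I"""
    return V / I

print(current(12.0, 30.0))      # 0.4 A through the 30.0 Ω resistor
print(voltage(4.00, 32.0))      # 128.0 V across the 32.0 Ω resistor
print(resistance(6.00, 0.300))  # 20.0 Ω in the 6.00 V circuit
```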
4.0625 |
A communist party is a political party that advocates the application of the social and economic principles of communism through state policy. The name originates from the 1848 tract Manifesto of the Communist Party by Karl Marx and Friedrich Engels. According to Leninist theory, a Communist party is the vanguard party of the working class (Proletariat), whether ruling or non-ruling, but when such a party is in power in a specific country, the party is said to be the highest authority of the dictatorship of the proletariat. Vladimir Lenin's theories on the role of a Communist party were developed as the early 20th-century Russian social democracy divided into Bolshevik (meaning "of the majority") and Menshevik (meaning "of the minority") factions. Lenin, leader of the Bolsheviks, argued that a revolutionary party should be a small vanguard party with a centralized political command and a strict cadre policy; the Menshevik faction, however, argued that the party should be a broad-based mass movement. The Bolshevik party, which eventually became the Communist Party of the Soviet Union, took power in Russia after the October Revolution in 1917. With the creation of the Communist International, the Leninist concept of party building was copied by emerging Communist parties worldwide.
As the membership of a Communist party was to be limited to active cadres in Lenin's theory, there was a need for networks of separate organizations to mobilize mass support for the party. Typically, Communist parties have built up various front organizations whose membership is often open to non-Communists. In many countries the single most important front organization of the Communist parties has been its youth wing. During the time of the Communist International, the youth leagues were explicit Communist organizations, using the name 'Young Communist League'. Later the youth league concept was broadened in many countries, and names like 'Democratic Youth League' were adopted.
Some trade unions and student, women's, peasants', and cultural organizations have been connected to Communist parties. Traditionally, these mass organizations were often politically subordinated to the political leadership of the party. However, in many contemporary cases mass organizations founded by communists have acquired a certain degree of independence. In some cases mass organizations have outlived the Communist parties in question.
At the international level, the Communist International organized various international front organizations (linking national mass organizations with each other), such as the Young Communist International, Profintern, Krestintern, International Red Aid, Sportintern, etc.. These organizations were dissolved in the process of deconstruction of the Communist International. After the Second World War new international coordination bodies were created, such as the World Federation of Democratic Youth, International Union of Students, World Federation of Trade Unions, Women's International Democratic Federation and the World Peace Council.
Historically, in countries where Communist Parties were struggling to attain state power, the formation of wartime alliances with non-Communist parties and wartime groups was enacted (such as the National Liberation Front of Albania). Upon attaining state power these Fronts were often transformed into nominal (and usually electoral) "National" or "Fatherland" Fronts in which non-communist parties and organizations were given token representation (a practice known as Blockpartei), the most popular examples of these being the National Front of East Germany (as a historical example) and the United Front of the People's Republic of China (as a modern-day example). Other times the formation of such Fronts were undertaken without the participation of other parties, such as the Socialist Alliance of Working People of Yugoslavia and the National Front of Afghanistan, though the purpose was the same: to promote the Communist Party line to generally non-communist audiences and to mobilize them to carry out tasks within the country under the aegis of the Front.
A uniform naming scheme for Communist parties was adopted by the Communist International. All parties were required to use the name 'Communist Party of (name of country)', resulting in separate communist parties in some countries operating using (largely) homonymous party names (e.g. in India). Today, there are plenty of cases where the old sections of the Communist International have retained those names. In other cases names have been changed. Common causes for the shift in naming were either moves to avoid state repression or as measures to indicate a broader political acceptance.
A typical example of the latter was the renaming of various East European Communist parties after the Second World War, as a result of mergers with the local Social Democratic parties. New names in the post-war era included "Socialist Party", "Socialist Unity Party", "Popular Party", "Workers' Party" and "Party of Labour".
The naming conventions of Communist parties became more diverse as the international Communist movement was fragmented due to the Sino-Soviet split in the 1960s. Those who sided with China and Albania in their criticism of the Soviet leadership, often added words like 'Revolutionary' or 'Marxist-Leninist' to distinguish themselves from the pro-Soviet parties.
- One such example is the Swiss Party of Labour, which was founded in 1944 to replace the banned Communist Party of Switzerland.
- Such mergers occurred in East Germany (Socialist Unity Party of Germany), Hungary (Hungarian Working People's Party), Poland (Polish United Workers Party) and Romania (Romanian Workers Party). | https://en.wikipedia.org/wiki/Communist_party |
4.09375 | This sentence diagramming worksheet focuses on adjectives, adverbs and articles.
Diagramming Sentences Worksheets
A sentence diagram is a way to graphically represent the structure of a sentence, showing how words in a sentence function and relate to each other. The printable practice worksheets below provide supplemental help in learning the basic concepts of sentence diagramming. Feel free to print them off and duplicate for home or classroom use.
It’s all about conjunctions in this diagramming sentences worksheet!
Time to diagram sentences with direct and indirect objects!
In this diagramming sentences worksheet, your student will practice with prepositional phrases.
There are a lot of compounds in this sentence diagram worksheet!
A helpful sentence diagramming guide for students to use at home or in the classroom.
Now it’s time to practice diagramming sentences!
Here’s a practice worksheet for your beginning sentence diagrammer that covers the subject and predicate.
If you’re looking for a basic sentence diagramming worksheet, this is it!
This worksheet focuses on diagramming complex sentences.
Compound predicates are featured in this worksheet on diagramming sentences.
Let’s diagram some compound sentences!
In this worksheet your student will diagram sentences with compound subjects.
This worksheet helps your student understand how to diagram helping verbs in a sentence.
This activity provides students practice diagramming infinitives.
This activity provides students practice diagramming intensive pronouns.
This printable activity provides students practice diagramming interjections.
What could be better than a sentence diagram worksheet on interrogatives?
Object complements are the main attraction in this sentence diagramming worksheet.
This activity provides students practice diagramming reflexive pronouns. | http://www.k12reader.com/subject/grammar/sentence-structure/diagramming-sentences/ |
4.21875 | What types of glaciers are there?
Mountain glaciers
These glaciers develop in high mountainous regions, often flowing out of icefields that span several peaks or even a mountain range. The largest mountain glaciers are found in Arctic Canada, Alaska, the Andes in South America, and the Himalaya in Asia.
Valley glaciers
Commonly originating from mountain glaciers or icefields, these glaciers spill down valleys, looking much like giant tongues. Valley glaciers may be very long, often flowing down beyond the snow line, sometimes reaching sea level.
Tidewater glaciers
As the name implies, these are valley glaciers that flow far enough to reach out into the sea. Tidewater glaciers are responsible for calving numerous small icebergs, which although not as imposing as Antarctic icebergs, can still pose problems for shipping lanes.
Piedmont glaciers
Piedmont glaciers occur when steep valley glaciers spill into relatively flat plains, where they spread out into bulb-like lobes. Malaspina Glacier in Alaska is one of the most famous examples of this type of glacier, and is the largest piedmont glacier in the world. Spilling out of the Seward Icefield, Malaspina Glacier covers about 3,900 square kilometers (1,500 square miles) as it spreads across the coastal plain.
Hanging glaciers
When a major valley glacier system retreats and thins, sometimes the tributary glaciers are left in smaller valleys high above the shrunken central glacier surface. These are called hanging glaciers. If the entire system has melted and disappeared, the empty high valleys are called hanging valleys.
Ice aprons
These small, steep glaciers cling to high mountainsides. Like cirque glaciers, they are often wider than they are long. Ice aprons are common in the Alps and in New Zealand, where they often cause avalanches due to the steep inclines they occupy.
Rock glaciers
Rock glaciers sometimes form when slow-moving glacial ice is covered by debris. They are often found in steep-sided valleys, where rocks and soil fall from the valley walls onto the ice. Rock glaciers may also form when frozen ground creeps downslope.
Ice shelves
Ice shelves occur when ice sheets extend over the sea and float on the water. They range from a few hundred meters to over 1 kilometer (0.62 mile) in thickness. Ice shelves surround most of the Antarctic continent.
Ice caps
Ice caps are miniature ice sheets, covering less than 50,000 square kilometers (19,305 square miles). They form primarily in polar and sub-polar regions that are relatively flat and high in elevation.
Ice streams
Ice streams are large ribbon-like glaciers set within an ice sheet—they are bordered by ice that is flowing more slowly, rather than by rock outcrop or mountain ranges. These huge masses of flowing ice are often very sensitive to changes such as the loss of ice shelves at their terminus or changing amounts of water flowing beneath them. The Antarctic ice sheet has many ice streams.
Ice sheets
Found now only in Antarctica and Greenland, ice sheets are enormous continental masses of glacial ice and snow expanding over 50,000 square kilometers (19,305 square miles). The ice sheet on Antarctica is over 4.7 kilometers (3 miles) thick in some areas, covering nearly all of the land features except the Transantarctic Mountains, which protrude above the ice. Another example is the Greenland Ice Sheet. In the past ice ages, huge ice sheets also covered most of Canada (the Laurentide Ice Sheet) and Scandinavia (the Scandinavian Ice Sheet), but these have now disappeared, leaving only a few ice caps and mountain glaciers behind.
NSIDC's Glacier Glossary - Search and browse terms related to glaciers in NSIDC's comprehensive cryospheric glossary.
NSIDC Glacier Photograph Collection - NSIDC archives a Glacier Photograph Collection of historical photos, which includes both aerial and terrestrial photos for the 1880s to 1975. The photos are primarily of Alaskan glaciers, but coverage also includes the Pacific Northwest and Europe. | https://nsidc.org/cryosphere/glaciers/questions/types.html |
4.0625 | The PhET Project and Trish Loeblein
This PhET "Gold Star Winner" is an instructional unit on the topic of Waves, created by a high school teacher. It was designed to be used with interactive simulations developed by PhET, the Physics Education Technology project. Included are detailed lessons for integrating labs, simulations, demonstrations, and concept questions to introduce students to properties and behaviors of waves. Specific topics include frequency and wavelength, sound, the wave nature of light, geometric optics, resonance, wave interference, Doppler Effect, refraction, thin lenses, wave addition, and more. Activities are aligned to AAAS Benchmarks.
Editor's Note: This could be a very useful resource for teachers in grades 8-12, allowing them to quickly customize a module which meets new content standards on Waves, outlined in the NextGen Science Framework. In its entirety, the unit is 4 weeks in duration. However, teachers of Grades 8-9 physical science could easily pull out 4-5 lessons addressing fundamental wave properties and the basics of refraction/reflection. All lessons include objectives and teaching tips, plus "clicker" or warm-up questions, worksheets, and unit tests with answer keys.
Metadata instance created
April 17, 2008
by Caroline Hall
October 1, 2012
by Caroline Hall
Last Update when Cataloged:
March 31, 2008
AAAS Benchmark Alignments (2008 Version)
4. The Physical Setting
6-8: 4F/M2. Something can be "seen" when light waves emitted or reflected by it enter the eye—just as something can be "heard" when sound waves from it enter the ear.
6-8: 4F/M4. Vibrations in materials set up wavelike disturbances that spread away from the source. Sound and earthquake waves are examples. These and other waves move at different speeds in different materials.
6-8: 4F/M5. Human eyes respond to only a narrow range of wavelengths of electromagnetic waves-visible light. Differences of wavelength within that range are perceived as differences of color.
6-8: 4F/M6. Light acts like a wave in many ways. And waves can explain how light behaves.
6-8: 4F/M7. Wave behavior can be described in terms of how fast the disturbance spreads, and in terms of the distance between successive peaks of the disturbance (the wavelength).
9-12: 4F/H5ab. The observed wavelength of a wave depends upon the relative motion of the source and the observer. If either is moving toward the other, the observed wavelength is shorter; if either is moving away, the wavelength is longer.
9-12: 4F/H6ab. Waves can superpose on one another, bend around corners, reflect off surfaces, be absorbed by materials they enter, and change direction when entering a new material. All these effects vary with wavelength.
9-12: 4F/H6c. The energy of waves (like any form of energy) can be changed into other forms of energy.
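The wave relationships referenced in benchmarks 4F/M7 and 4F/H5ab (wave speed, wavelength, and the Doppler shift) can be illustrated numerically. The sound-speed value and function names here are illustrative choices, not part of the unit itself.

```python
def wavelength(speed, frequency):
    """lambda = v / f: distance between successive peaks (4F/M7)."""
    return speed / frequency

def doppler_wavelength(speed, frequency, source_speed, approaching=True):
    """Observed wavelength for a moving source and a stationary observer:
    shorter when the source approaches, longer when it recedes (4F/H5ab)."""
    v_s = -source_speed if approaching else source_speed
    return (speed + v_s) / frequency

rest = wavelength(343.0, 440.0)  # a 440 Hz tone in air, ~0.78 m
toward = doppler_wavelength(343.0, 440.0, 30.0, approaching=True)
away = doppler_wavelength(343.0, 440.0, 30.0, approaching=False)
print(toward < rest < away)  # True
```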
11. Common Themes
6-8: 11B/M4. Simulations are often useful in modeling events and processes.
9-12: 11B/H3. The usefulness of a model can be tested by comparing its predictions to actual observations in the real world. But a close match does not necessarily mean that other models would not work equally well or better.
6-8: 11D/M3. Natural phenomena often involve sizes, durations, and speeds that are extremely small or extremely large. These phenomena may be difficult to appreciate because they involve magnitudes far outside human experience.
Common Core State Standards for Mathematics Alignments
High School — Functions (9-12)
Interpreting Functions (9-12)
F-IF.4 For a function that models a relationship between two quantities, interpret key features of graphs and tables in terms of the quantities, and sketch graphs showing key features given a verbal description of the relationship.
F-IF.5 Relate the domain of a function to its graph and, where applicable, to the quantitative relationship it describes.
F-IF.6 Calculate and interpret the average rate of change of a function (presented symbolically or as a table) over a specified interval. Estimate the rate of change from a graph.
F-IF.7.a Graph linear and quadratic functions and show intercepts, maxima, and minima.
Building Functions (9-12)
F-BF.3 Identify the effect on the graph of replacing f(x) by f(x) + k, k f(x), f(kx), and f(x + k) for specific values of k (both positive and negative); find the value of k given the graphs. Experiment with cases and illustrate an explanation of the effects on the graph using technology. Include recognizing even and odd functions from their graphs and algebraic expressions for them.
Trigonometric Functions (9-12)
F-TF.5 Choose trigonometric functions to model periodic phenomena with specified amplitude, frequency, and midline.
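Standard F-TF.5 above asks for trigonometric models with a specified amplitude, frequency, and midline; a minimal sketch of such a model (the parameter values are arbitrary):

```python
import math

def wave(t, amplitude, frequency, midline, phase=0.0):
    """y(t) = A*sin(2*pi*f*t + phase) + k, with amplitude A and midline k."""
    return amplitude * math.sin(2 * math.pi * frequency * t + phase) + midline

# A 2 Hz wave with amplitude 3 oscillating about midline 1:
print(wave(0.0, 3, 2, 1))    # 1.0 -- starts on the midline
print(wave(0.125, 3, 2, 1))  # 4.0 -- a quarter period later: midline + amplitude
```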
This resource is part of 2 Physics Front Topical Units.
Topic: Wave Energy Unit Title: Teaching About Waves and Wave Energy
This is a unique, standards-based unit of instruction on Waves created by a high school teacher to be used with PhET interactive simulations on wave motion. It includes comprehensive lesson plans, lecture presentations, and assessments with answer keys. Be sure not to miss the "Clicker Questions" -- great introductory material.
Topic: Wave Energy Unit Title: Wave Properties: Frequency, Amplitude, Period, Phase
This exemplary unit of instruction was developed by a high school physics teacher to be used with PhET simulations. It includes six complete lesson plans that explore wave properties, the physics of sound, Fourier analysis, and wave phenomena such as reflection and superposition. Most of the lessons require that the simulation be open on a browser while students work. Don't miss the Clicker Questions, which can be readily downloaded for classroom use. Entire unit will take 2-3 weeks, but components may be pulled out separately. Can be used in a Physics First course, with teacher adaptation.
This is the full collection of teacher-created lesson plans and labs designed to be used with specific PhET simulations. Each resource has been approved by the PhET project, and may be freely downloaded. | http://www.compadre.org/Precollege/items/detail.cfm?ID=6883 |
4.15625 | A biological template ramps up electrode performance and scales down size.
More than half the weight and size of today’s batteries comes from supporting materials that contribute nothing to storing energy. Now researchers have demonstrated that genetically engineered viruses can assemble active battery materials into a compact, regular structure, to make an ultra-thin, transparent battery electrode that stores nearly three times as much energy as those in today’s lithium-ion batteries. It is the first step toward high-capacity, self-assembling batteries.
Applications could include high-energy batteries laminated invisibly to flat screens in cell phones and laptops or conformed to fit hearing aids. The same assembly technique could also lead to more effective catalysts and solar panels, according to the MIT researchers who developed the technology, by making it possible to finely control the positions of inorganic materials.
“Most of it was done through genetic manipulation – giving an organism that wouldn’t normally make battery electrodes the information to make a battery electrode, and to assemble it into a device,” says Angela Belcher, a researcher on the project and an MIT professor of materials science and engineering and biological engineering. “My dream is to have a DNA sequence that codes for the synthesis of materials, and then out of a beaker to pull out a device. And I think this is a big step along that path.”
The researchers, in work reported online this week in Science, used M13 viruses to make the positive electrode of a lithium-ion battery, which they tested with a conventional negative electrode. The virus is made of proteins, most of which coil to form a long, thin cylinder. By adding sequences of nucleotides to the virus’ DNA, the researchers directed these proteins to form with an additional amino acid that binds to cobalt ions. The viruses with these new proteins then coat themselves with cobalt ions in a solution, which eventually leads, after reactions with water, to cobalt oxide, an advanced battery material with much higher storage capacity than the carbon-based materials now used in lithium-ion batteries.
To make an electrode, the researchers first dip a polymer electrolyte into a solution of engineered viruses. The viruses assemble into a uniform coating on the electrolyte. This coated electrolyte is then dipped into a solution containing battery materials. The viruses arrange these materials into an ordered crystal structure good for high-density batteries.
These electrodes proved to have twice the capacity of carbon-based ones. To improve this further, the researchers again turned to genetic engineering. While keeping the genetic code for the cobalt assembly, they added an additional strand of DNA that produces virus proteins that bind to gold. The viruses then assembled as nanowires composed of both cobalt oxide and gold particles – and the resulting electrodes stored 30 percent more energy.
Using viruses to assemble inorganic materials has several advantages, says Daniel Morse, professor of molecular genetics and biochemistry at the University of California, Santa Barbara. First, the placement of the proteins, and the cobalt and gold that bind to them, is precise. The virus can also reproduce quickly, providing plenty of starting material, suggesting that this is manufacturing technique that could quickly scale up. And this assembly method does not require the costly processes now used to make battery materials.
“You could do this at the industrial level really quickly,” says Brent Iverson, professor of organic chemistry and biochemistry at the University of Texas at Austin. “I can’t imagine a way to template or scaffold nanoparticles any cheaper.”
Yet-Ming Chiang, materials science and engineering professor at MIT and one of Belcher’s collaborators, says that, while small batteries designed for specific applications could be made using this process within a couple of years, much work remains to be done. For example, cobalt oxide might not be the best material, so the researchers will be engineering viruses to bind to other materials.
One of the ways they have done this in the past is using a process called “directed evolution.” They combine collections of viruses with millions of random variations in a vial containing a piece of the material they want the virus to bind to. Some of the viruses happen to have proteins that bind to the material. Isolating these viruses is a simple process of washing off the piece of material –only those viruses bound to the material remain. These can then be allowed to reproduce. After a few rounds of binding and washing, only viruses with the highest affinity for the material remain.
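The bind-wash-amplify loop described above can be caricatured in a few lines. This toy model represents each virus by a single binding-affinity number and ignores all real phage-display chemistry; it only shows why repeated rounds of selection enrich high-affinity binders.

```python
import random

random.seed(1)  # deterministic toy run

def selection_round(pool, stringency):
    """One round: washing removes weak binders, then survivors reproduce."""
    bound = [v for v in pool if v > stringency]   # only strong binders stay attached
    return [v for v in bound for _ in range(2)]   # each survivor replicates

pool = [random.random() for _ in range(1000)]     # random "affinities" in [0, 1)
for _ in range(3):
    pool = selection_round(pool, stringency=0.5)

print(all(v > 0.5 for v in pool))  # True: only high-affinity viruses remain
```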
The researchers also want to make viruses that assemble the negative electrode as well. They would then grow the positive and negative electrodes on opposite sides of a self-assembling polymer electrolyte developed by Paula Hammond*, another major contributor to the project. This would create self-assembled batteries, not just electrodes. Another goal is to make “interdigitated” batteries in which negative and positive electrode materials alternate, like the tines of two combs pushed together – this could pack in more energy and lead to batteries that deliver that energy in more powerful bursts.
And batteries could be just the beginning. Since the viruses have different proteins at different locations – one protein in the center and others at the ends – the researchers can create viruses that bind to one material in the middle and different materials on the ends. Already, Belcher’s group has produced viruses that coat themselves with semiconductors and then attach themselves at the ends to gold electrodes, which could lead to working transistors.
“If you can make batteries that truly are effective this way, it’s just mind-boggling what the applications could be,” Iverson says.
*Correction: The virus-battery work was the result of a collaboration between researchers at MIT. The original article mentions Angela Belcher and Yet-Ming Chiang. An important part of this work was the development of a self-assembling polymer electrolyte by Paula Hammond, MIT chemical engineering professor.
Home page image courtesy of Angela Belcher, MIT. | https://www.technologyreview.com/s/405635/virus-assembled-batteries/ |