4
The answer to any math problem depends upon the question being asked. In most math problems, one needs to determine a missing variable. For instance, if a problem reads as 2+3 = , one needs to figure out what the number after the equals sign should be. In other cases, one may see a number after the equals sign but a letter or two letters in the middle of the problem. That looks like the following: 2+a=5. In questions like this one, the answer is the number that accurately replaces the letter. Many math problems are story problems. In order to solve a story problem, one must create an equation based on the information presented in the story and then solve for the variable in that equation. The numbers to add in an addition problem are called addends, summands or terms, while the answer to the problem is the sum. In the number sentence a+b=c, a and b are addends, while c is the sum. A net change in math is the total of all of the changes completed throughout the solving of a problem. The net change is reflected in a numerical amount and can be positive, negative or zero. One funny math problem is: "I am an odd number. Take away one letter and I become even. What number am I?" The answer is seven. The humor in this problem becomes apparent when the solver recognizes that the mathematics of odd and even numbers has been mixed with wordplay. The math answers to all problems cannot be contained in a single succinct answer. Each math problem or equation has its own answer and must be individually solved. There are numerous online resources to assist with solving any given problem.
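To make the two kinds of problem concrete, here is a minimal Python sketch (my own illustration, not from the source; the sympy import is an assumption):

```python
# Hedged sketch: solving the two problem types described above.
from sympy import Eq, solve, symbols

# "2 + 3 =" -- the missing value is just the evaluated left-hand side.
print(2 + 3)  # 5

# "2 + a = 5" -- the answer is the number that accurately replaces the letter.
a = symbols("a")
print(solve(Eq(2 + a, 5), a))  # [3]
```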
http://www.ask.com/math/answer-math-problem-4a4b7fc070e47712
4.03125
How to define circumscribed and inscribed circles and polygons relative to each other.
How to calculate the measure of an inscribed angle.
How to define a polygon, how to distinguish between concave and convex polygons, how to name polygons.
How to define the apothem and center of a polygon; how to divide a regular polygon into congruent triangles.
Naming polygons, classifying triangles, and classifying quadrilaterals.
How to determine the number of diagonals in a polygon.
How to find the sum of the exterior angles in a polygon and find the measure of one exterior angle in an equiangular polygon.
How to derive the formula to find the sum of angles in any polygon.
How to derive the formula to calculate the area of a regular polygon.
How to find the measure of one angle in any equiangular or regular polygon.
How to convert between length and area ratios of similar polygons.
How to calculate the surface area of any pyramid, emphasizing regular polygons as bases.
How to prove that an angle inscribed in a semicircle is a right angle; how to solve for arcs and angles formed by a chord drawn to a point of tangency.
How to identify if two figures are similar.
How to find the length of tangent segments drawn to a circle from the same point.
How to calculate the area between a square and an inscribed circle.
How to find the volume of any pyramid.
How to prove that opposite angles in a cyclic quadrilateral are congruent; how to prove that parallel lines create congruent arcs in a circle.
How to determine if two triangles in a circle are similar and how to prove that three similar triangles exist in a right triangle with an altitude.
How the vocabulary word vertex applies to different objects in Geometry.
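These lessons derive several formulas that can be stated compactly. The Python sketch below is my own illustration of the standard results (not code from the source): the interior-angle sum, one angle of an equiangular polygon, the diagonal count, and the area of a regular polygon as half the apothem times the perimeter.

```python
import math

def interior_angle_sum(n: int) -> float:
    """Sum of the interior angles of an n-sided polygon: (n - 2) * 180 degrees."""
    return (n - 2) * 180

def equiangular_angle(n: int) -> float:
    """Measure of one angle in an equiangular (or regular) n-gon."""
    return interior_angle_sum(n) / n

def diagonal_count(n: int) -> int:
    """Number of diagonals in an n-sided polygon: n * (n - 3) / 2."""
    return n * (n - 3) // 2

def regular_polygon_area(n: int, side: float) -> float:
    """Area of a regular n-gon: (1/2) * apothem * perimeter."""
    apothem = side / (2 * math.tan(math.pi / n))
    return 0.5 * apothem * n * side

# Example: a regular hexagon with side length 2.
print(interior_angle_sum(6))                 # 720
print(equiangular_angle(6))                  # 120.0
print(diagonal_count(6))                     # 9
print(round(regular_polygon_area(6, 2), 3))  # 10.392
```

(The sum of the exterior angles is always 360°, so one exterior angle of an equiangular polygon is simply 360/n.)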
https://www.brightstorm.com/tag/inscribed-polygon/
4
This is a series of 10 short videos, hosted by the National Science Foundation, each featuring scientists, research, and green technologies. The overall goal of this series is to encourage people to ask questions and look beyond fossil fuels for innovative solutions to our ever-growing energy needs.

This narrated slide show gives a brief overview of coral biology and how coral reefs are in danger from pollution, ocean temperature change, ocean acidification, and climate change. In addition, scientists discuss how taking cores from corals yields information on past changes in ocean temperature.

This video shows where and how ice cores are extracted from the West Antarctic Ice Sheet (WAIS), cut, packaged, flown to the ice core storage facility in Denver, further sliced into samples, and shipped to labs all over the world where scientists use them to study indicators of climate change from the past.

C-Learn is a simplified version of the C-ROADS simulator. Its primary purpose is to help users understand the long-term climate effects (CO2 concentrations, global temperature, sea level rise) of various customized actions to reduce fossil fuel CO2 emissions, reduce deforestation, and grow more trees. Students can ask multiple, customized what-if questions and understand why the system reacts as it does.

In this activity, students use Google Earth and information from several websites to investigate some of the consequences of climate change in polar regions, including the shrinking of the ice cap at the North Pole, disintegration of ice shelves, melting of Greenland, opening of shipping routes, effects on polar bears, and possible secondary effects on climate in other regions due to changes in ocean currents. Students learn to use satellite and aerial imagery, maps, graphs, and statistics to interpret trends accompanying changes in the Earth system.
https://www.climate.gov/teaching/resources/education/high-school-9-12?keywords=&page=8
4.125
Findings by Scripps scientists cast new light on undersea volcanoes
Study in Science may help change the broad understanding of how they are formed

Researchers at Scripps Institution of Oceanography at the University of California, San Diego, have produced new findings that may help alter commonly held beliefs about how chains of undersea mountains formed by volcanoes, or "seamounts," are created. Such mountains can rise thousands of feet off the ocean floor in chains that span thousands of miles across the ocean. Since the mid-20th century, the belief that the earth's surface is covered by large, shifting plates, a concept known as plate tectonics, has shaped conventional thinking on how seamount chains develop. Textbooks have taught students that seamount patterns are shaped by changes in the direction and motion of the plates. As a plate moves, stationary "hot spots" below the plate produce magma that forms a series of volcanoes in the direction of the plate motion. Now, Anthony Koppers and Hubert Staudigel of Scripps have published a study that counters the idea that hot spots exist in fixed positions. The paper in the Feb. 11 issue of Science shows that hot spot chains can change direction as a result of processes unrelated to plate motion. The new research adds further to current scientific debates on hot spots and provides information for a better understanding of the dynamics of the earth's interior. To investigate this phenomenon, Staudigel led a research cruise in 1999 aboard the Scripps research vessel Melville to the Pacific Ocean's Gilbert Ridge and Tokelau Seamounts near the international date line, a few hundred miles north of American Samoa and just south of the Marshall Islands. Gilbert and Tokelau are the only seamount trails in the Pacific that bend in sharp, 60-degree angles, comparable in appearance to hockey sticks, similar to the bending pattern of the Hawaii-Emperor seamount chain (which includes the Hawaiian Islands). Assuming that these three chains were created by fixed hot spots, the bends in the Gilbert Ridge and Tokelau Seamounts should have been created at roughly the same time period as the bend in the Hawaii-Emperor chain, the conventional theory holds. Koppers, Staudigel and a team of student researchers aboard the Melville spent six weeks exploring the ocean floor at Gilbert and Tokelau. They used deep-sea dredges to collect volcanic rock samples from the area. For the next several years, Koppers used laboratory instruments to analyze the composition of the rock samples and calculate their ages. "It was quite a surprise that we found the Gilbert and Tokelau seamount bends to have completely different ages than we expected," said Koppers, a researcher at the Cecil H. and Ida M. Green Institute of Geophysics and Planetary Physics at Scripps. "We certainly didn't expect that they were 10 and 20 million years older than previously thought." Instead of forming 47 million years ago, as did the Hawaiian-Emperor bend, the Gilbert chain was found to be 67 million years old and the Tokelau 57 million years old. "I think this really hammers it in that the origin of the alignment of these seamount chains may be much more complicated than we previously believed, or the alignment may not have anything to do with plate motion changes," said Staudigel. Although they do not have positive proof as yet, Koppers and Staudigel speculate that local stretching of the plate may allow magma to rise to the surface or that hot spots themselves might move.
Together with plate motion, these alternate processes may be responsible for the resulting pattern of seamounts. Koppers and Staudigel will go to sea again next year to seek additional clues to the hot spot and seamount mysteries. "Seamount trails are thousands of kilometers long and even if we are out collecting for several weeks, we still only cover a limited area," said Koppers. "One of the things holding us back in developing a new theory is that the oceans are humongous and our database is currently very small. We are trying to understand a very big concept."
http://psychcentral.com/news/archives/2005-02/uoc--fbs021005.html
4.25
Orbit of the Moon - Not to be confused with Lunar orbit (the orbit of an object around the Moon). [Diagram of the Earth–Moon system] The Moon orbits Earth in the prograde direction and completes one revolution relative to the stars in approximately 27.322 days (a sidereal month). Earth and the Moon orbit about their barycentre (common center of mass), which lies about 4600 km from Earth's center (about three quarters of the radius of Earth). On average, the Moon is at a distance of about 385000 km from Earth's center, which corresponds to about 60 Earth radii. With a mean orbital velocity of 1.022 km/s, the Moon moves relative to the stars each hour by an amount roughly equal to its angular diameter, or by about 0.5°. The Moon differs from most satellites of other planets in that its orbit is close to the plane of the ecliptic, and not to Earth's equatorial plane. The plane of the lunar orbit is inclined to the ecliptic by about 5.1°, whereas the Moon's spin axis is inclined by only 1.5°. The orbit of the Moon is distinctly elliptical, with an average eccentricity of 0.0549. The non-circular form of the lunar orbit causes variations in the Moon's angular speed and apparent size as it moves towards and away from an observer on Earth. The mean angular movement relative to an imaginary observer at the barycentre is 13.176° per day to the east (Julian Day 2000.0 rate). The Moon's elongation is its angular distance east of the Sun at any time. At new moon, it is zero and the Moon is said to be in conjunction. At full moon, the elongation is 180° and it is said to be in opposition. In both cases, the Moon is in syzygy, that is, the Sun, Moon and Earth are nearly aligned. When elongation is either 90° or 270°, the Moon is said to be in quadrature. The orientation of the orbit is not fixed in space, but rotates over time. This orbital precession is also called apsidal precession and is the rotation of the Moon's orbit within the orbital plane, i.e. the axes of the ellipse change direction. The Moon's major axis (the longest diameter of the orbit, joining its nearest and farthest points, the perigee and apogee, respectively) makes one complete revolution every 8.85 Earth years, or 3,232.6054 days, as it rotates slowly in the same direction as the Moon itself (direct motion). The Moon's apsidal precession is distinct from, and should not be confused with, its axial precession. The mean inclination of the lunar orbit to the ecliptic plane is 5.145°. The rotational axis of the Moon is also not perpendicular to its orbital plane, so the lunar equator is not in the plane of its orbit, but is inclined to it by a constant value of 6.688° (this is the obliquity). One might expect that, as a result of the precession of the Moon's orbital plane, the angle between the lunar equator and the ecliptic would vary between the sum (11.833°) and difference (1.543°) of these two angles; however, as was discovered by Jacques Cassini in 1722, the rotational axis of the Moon precesses at the same rate as its orbital plane but is 180° out of phase (see Cassini's Laws). Therefore, the angle between the ecliptic and the lunar equator is always 1.543°, even though the rotational axis of the Moon is not fixed with respect to the stars. Because of the inclination of the moon's orbit, the moon is above the horizon at the North and South Pole for almost two weeks every month, even though the sun is below the horizon for six months at a time.
The period from moonrise to moonrise at the poles is quite close to the sidereal period, or 27.3 days. When the sun is at its furthest below the horizon (midwinter), the moon will be full when it is at its highest point. The moon's light is used by zooplankton in the Arctic when the sun is below the horizon for months, and it must have been helpful to the animals that lived in Arctic and Antarctic regions when the climate was warmer. The nodes are points at which the Moon's orbit crosses the ecliptic. The Moon crosses the same node every 27.2122 days, an interval called the draconic or draconitic month. The line of nodes, the intersection between the two respective planes, has a retrograde motion: for an observer on Earth, it rotates westward along the ecliptic with a period of 18.60 years, or 19.3549° per year. When viewed from celestial north, the nodes move clockwise around Earth, opposite to Earth's own spin and its revolution around the Sun. Lunar and solar eclipses can occur when the nodes align with the Sun, roughly every 173.3 days. Lunar orbit inclination also determines eclipses; shadows cross when nodes coincide with full and new moon, when the Sun, Earth, and Moon align in three dimensions. Every 18.6 years, the angle between the moon's orbit and the earth's equator reaches a maximum of 28°36′ (the sum of the Earth's inclination 23°27′ and the Moon's inclination 5°09′). This is called a major lunar standstill. Around this time, the moon's declination will vary from −28°36′ to +28°36′. 9.3 years later, the angle between the moon's orbit and the earth's equator reaches its minimum, 18°20′. This is called a minor lunar standstill. When the inclination of the moon's orbit to the earth's equator is at its minimum of 18°20′, the centre of the moon's disk will be above the horizon every day as far north and as far south as 90° − 18°20′, or 71°40′ latitude, whereas when the inclination is at its maximum of 28°36′, the centre of the moon's disk will only be above the horizon every day for latitudes less than 90° − 28°36′, or 61°24′. At higher latitudes, there will be a period of at least a day each month when the moon does not rise, but there will also be a period of at least a day each month during which the moon does not set. This is similar to the behavior of the sun, but with a period of 27.3 days instead of 365 days. Note that a point on the moon can actually be visible when it is below the horizon by about 34 arc minutes, due to refraction (see Sunrise).

[Scale model of the Earth–Moon system: each pixel represents 500 km (310 mi). Sizes and distances are to scale.]

History of observations and measurements

About 3,000 years ago, the Babylonians were the first human civilization to keep a consistent record of lunar observations. Clay tablets from that period, which have been found over the territory of present-day Iraq, are inscribed with cuneiform writing recording the times and dates of moonrises and moonsets, the stars that the Moon passed close by, and the time differences between rising and setting of both the Sun and the Moon around the time of the full moon. Babylonian astronomy discovered the three main periods of the Moon's motion and used data analysis to build lunar calendars that extended well into the future. This use of detailed, systematic observations to make predictions based on experimental data may be classified as the first scientific study in human history.
However, the Babylonians seem to have lacked any geometrical or physical interpretation of their data, and they could not predict future lunar eclipses (although "warnings" were issued before likely eclipse times). Ancient Greek astronomers were the first to introduce and analyze mathematical models of the motion of objects in the sky. Ptolemy described lunar motion by using a well-defined geometric model of epicycles and evection.

Sidereal month: 27.321662 days, with respect to the distant stars (13.36874634 passes per solar orbit)
Synodic month: 29.530589 days, with respect to the Sun (phases of the Moon, 12.36874634 passes per solar orbit)
Tropical month: 27.321582 days, with respect to the vernal point (precesses in ~26,000 years)
Anomalistic month: 27.554550 days, with respect to the perigee (precesses in 3232.6054 days = 8.850578 years)
Draconic month: 27.212221 days, with respect to the ascending node (precesses in 6793.4765 days = 18.5996 years)

There are several different periods associated with the lunar orbit. The sidereal month is the time it takes to make one complete orbit around Earth with respect to the fixed stars. It is about 27.32 days. The synodic month is the time it takes the Moon to reach the same visual phase. This varies notably throughout the year, but averages around 29.53 days. The synodic period is longer than the sidereal period because the Earth–Moon system moves in its orbit around the Sun during each sidereal month, hence a longer period is required to achieve a similar alignment of Earth, the Sun, and the Moon. The anomalistic month is the time between perigees and is about 27.55 days. The Earth–Moon separation determines the strength of the lunar tide-raising force. The draconic month is the time from ascending node to ascending node. The time between two successive passes of the same ecliptic longitude is called the tropical month. The latter three periods are slightly different from the sidereal month. The average length of a calendar month (a twelfth of a year) is about 30.4 days. This is not a lunar period, though the calendar month is historically related to the visible lunar phase. The gravitational attraction that the Moon exerts on Earth is the cause of tides in the sea; the Sun has a lesser tidal influence. If Earth had a global ocean of uniform depth, the Moon would act to deform both the solid Earth (by a small amount) and the ocean in the shape of an ellipsoid with the high points roughly beneath the Moon and on the opposite side of Earth. However, because of the presence of the continents, Earth's much faster rotation and varying ocean depths, this simplistic visualisation does not happen. Although the tidal flow period is generally synchronized to the Moon's orbit around Earth, its relative timing varies greatly. In some places on Earth, there is only one high tide per day, whereas others have four, though this is somewhat rare. The notional tidal bulges are carried ahead of the Earth–Moon axis by the continents as a result of Earth's rotation. The eccentric mass of each bulge exerts a small amount of gravitational attraction on the Moon, with the bulge on the side of Earth closest to the Moon pulling in a direction slightly forward along the Moon's orbit (because Earth's rotation has carried the bulge forward). The bulge on the side furthest from the Moon has the opposite effect, but because the gravitational attraction varies inversely with the square of distance, the effect is stronger for the near-side bulge.
As a result, some of Earth's angular (or rotational) momentum is gradually being transferred to the rotation of the Earth–Moon pair around their mutual centre of mass, called the barycentre. This slightly faster rotation causes the Earth–Moon distance to increase at approximately 38 millimetres per year. Conservation of angular momentum means that Earth's axial rotation is gradually slowing, and because of this its day lengthens by approximately 23 microseconds every year (excluding glacial rebound). Both figures are valid only for the current configuration of the continents. Tidal rhythmites from 620 million years ago show that, over hundreds of millions of years, the Moon receded at an average rate of 22 millimetres per year and the day lengthened at an average rate of 12 microseconds per year, both about half of their current values. See tidal acceleration for a more detailed description and references. The Moon is gradually receding from Earth into a higher orbit, and calculations suggest that this would continue for about fifty billion years. By that time, Earth and the Moon would be in a mutual spin–orbit resonance or tidal locking, in which the Moon will orbit Earth in about 47 days (currently 27 days), and both the Moon and Earth would rotate around their axes in the same time, always facing each other with the same side. This has already happened to the Moon (the same side always faces Earth) and is also slowly happening to Earth. However, the slowdown of Earth's rotation is not occurring fast enough for the rotation to lengthen to a month before other effects change the situation: approximately 2.3 billion years from now, the increase of the Sun's radiation will have caused Earth's oceans to evaporate, removing the bulk of the tidal friction and acceleration. The Moon is in synchronous rotation, meaning that it keeps the same face toward Earth at all times. This synchronous rotation is only true on average, because the Moon's orbit has a definite eccentricity. As a result, the angular velocity of the Moon varies as it orbits Earth and hence is not always equal to the Moon's rotational velocity. When the Moon is at its perigee, its rotation is slower than its orbital motion, and this allows us to see up to eight degrees of longitude of its eastern (right) far side. Conversely, when the Moon reaches its apogee, its rotation is faster than its orbital motion and this reveals eight degrees of longitude of its western (left) far side. This is referred to as longitudinal libration. Because the lunar orbit is also inclined to Earth's ecliptic plane by 5.1°, the rotational axis of the Moon seems to rotate towards and away from Earth during one complete orbit. This is referred to as latitudinal libration, which allows one to see almost 7° of latitude beyond the pole on the far side. Finally, because the Moon is only about 60 Earth radii away from Earth's centre of mass, an observer at the equator who observes the Moon throughout the night moves laterally by one Earth diameter. This gives rise to a diurnal libration, which allows one to view an additional one degree's worth of lunar longitude. For the same reason, observers at both of Earth's geographical poles would be able to see one additional degree's worth of libration in latitude.

Path of Earth and Moon around Sun

When viewed from the north celestial pole, i.e. from the star Polaris, the Moon orbits Earth anticlockwise and Earth orbits the Sun anticlockwise, and the Moon and Earth rotate on their own axes anticlockwise.
The right-hand rule can be used to indicate the direction of the angular velocity. If the thumb of the right hand points to the north celestial pole, its fingers curl in the direction that the Moon orbits Earth, Earth orbits the Sun, and the Moon and Earth rotate on their own axes. In representations of the Solar System, it is common to draw the trajectory of Earth from the point of view of the Sun, and the trajectory of the Moon from the point of view of Earth. This could give the impression that the Moon orbits Earth in such a way that sometimes it goes backwards when viewed from the Sun's perspective. Because the orbital velocity of the Moon around Earth (1 km/s) is small compared to the orbital velocity of Earth about the Sun (30 km/s), this never happens. There are no rearward loops in the Moon's solar orbit. Considering the Earth–Moon system as a binary planet, its centre of gravity is within Earth, about 4,624 km from its centre or 72.6% of its radius. This centre of gravity remains in-line towards the Moon as Earth completes its diurnal rotation. It is this mutual centre of gravity that defines the path of the Earth–Moon system in the solar orbit. Consequently, Earth's centre veers inside and outside the orbital path during each synodic month as the Moon moves in the opposite direction. Unlike most moons in the Solar System, the trajectory of the Moon around the Sun is very similar to that of Earth. The Sun's gravitational effect on the Moon is more than twice that of Earth's on the Moon; consequently, the Moon's trajectory is always convex (as seen when looking Sunward at the entire Sun–Earth–Moon system from a great distance outside Earth–Moon solar orbit), and is nowhere concave (from the same perspective) or looped. That is, the region enclosed by the Moon's orbit of the Sun is a convex set.

Notes and references:
- The geometric mean distance in the orbit (of ELP): M. Chapront-Touzé; J. Chapront (1983). "The lunar ephemeris ELP-2000". Astronomy & Astrophysics 124: 54. Bibcode:1983A&A...124...50C.
- The constant in the ELP expressions for the distance, which is the mean distance averaged over time: M. Chapront-Touzé; J. Chapront (1988). "ELP2000-85: a semi-analytical lunar ephemeris adequate for historical times". Astronomy & Astrophysics 190: 351. Bibcode:1988A&A...190..342C.
- This often quoted value for the mean distance is actually the inverse of the mean of the inverse of the distance, which is not the same as the mean distance itself.
- Jean Meeus, Mathematical Astronomy Morsels (Richmond, VA: Willmann-Bell, 1997), 11–12.
- Lang, Kenneth R. (2011). The Cambridge Guide to the Solar System, 2nd ed. Cambridge University Press.
- "Moon Fact Sheet". NASA. Retrieved 2014-01-08.
- Martin C. Gutzwiller (1998). "Moon-Earth-Sun: The oldest three-body problem". Reviews of Modern Physics 70 (2): 589–639. Bibcode:1998RvMP...70..589G. doi:10.1103/RevModPhys.70.589.
- "Moonlight helps plankton escape predators during Arctic winters". New Scientist. 16 Jan 2016.
- The periods are calculated from orbital elements, using the rate of change of quantities at the instant J2000. The J2000 rate of change equals the coefficient of the first-degree term of VSOP polynomials. In the original VSOP87 elements, the units are arcseconds (″) and Julian centuries. There are 1,296,000″ in a circle and 36,525 days in a Julian century. The sidereal month is the time of a revolution of longitude λ with respect to the fixed J2000 equinox. VSOP87 gives 1732559343.7306″, or 1336.8513455 revolutions in 36,525 days, i.e. 27.321661547 days per revolution. The tropical month is similar, but the longitude for the equinox of date is used. For the anomalistic year, the mean anomaly (λ − ω) is used (the equinox does not matter). For the draconic month, (λ − Ω) is used. For the synodic month, the sidereal periods of the mean Sun (or Earth) and the Moon are compared: the period is 1/(1/m − 1/e). VSOP elements from Simon, J.L.; Bretagnon, P.; Chapront, J.; Chapront-Touzé, M.; Francou, G.; Laskar, J. (February 1994). "Numerical expressions for precession formulae and mean elements for the Moon and planets". Astronomy and Astrophysics 282 (2): 669. Bibcode:1994A&A...282..663S.
- Jean Meeus, Astronomical Algorithms (Richmond, VA: Willmann-Bell, 1998), p. 354. From 1900–2100, the shortest time from one new moon to the next is 29 days, 6 hours, and 35 min, and the longest 29 days, 19 hours, and 55 min.
- C.D. Murray; S.F. Dermott (1999). Solar System Dynamics. Cambridge University Press. p. 184.
- Dickinson, Terence (1993). From the Big Bang to Planet X. Camden East, Ontario: Camden House. pp. 79–81. ISBN 0-921820-71-2.
- "Caltech Scientists Predict Greater Longevity for Planets with Life".
- The reference by H. L. Vacher (2001) (details separately cited in this list) describes this as 'convex outward', whereas older references, such as Turner, A. B. (1912). "The Moon's Orbit Around the Sun". Journal of the Royal Astronomical Society of Canada 6: 117, and H. Godfray, Elementary Treatise on the Lunar Theory, describe the same geometry by the words "concave to the sun".
- Aslaksen, Helmer (2010). "The Orbit of the Moon around the Sun is Convex!". Retrieved 2006-04-21.
- "The Moon Always Veers Toward the Sun". MathPages.
- Vacher, H.L. (November 2001). "Computational Geology 18 – Definition and the Concept of Set" (PDF). Journal of Geoscience Education 49 (5): 470–479. Retrieved 2006-04-21.
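The footnote above gives the synodic period as 1/(1/m − 1/e), where m is the sidereal month and e the sidereal year. Here is a short Python sketch of that relation; the sidereal-year figure is a standard value I am supplying, not one quoted in the article:

```python
# Synodic month from the sidereal month and the sidereal year: 1 / (1/m - 1/e).
sidereal_month = 27.321661547  # days, from the VSOP87-derived footnote above
sidereal_year = 365.256363     # days (standard value; an assumption here)

synodic_month = 1 / (1 / sidereal_month - 1 / sidereal_year)
print(round(synodic_month, 6))  # ~29.530589 days, matching the table

# The mean eastward motion quoted earlier also falls out directly:
print(round(360 / sidereal_month, 3))  # ~13.176 degrees per day
```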
https://en.wikipedia.org/wiki/Moon_orbit
4.09375
Nuclear fusion and nuclear fission are different types of reactions that release energy due to the presence of high-powered atomic bonds between particles found within a nucleus. In fission, an atom is split into two or more smaller, lighter atoms. Fusion, in contrast, occurs when two or more smaller atoms fuse together, creating a larger, heavier atom.

Definition: Fission is the splitting of a large atom into two or more smaller ones. Fusion is the fusing of two or more lighter atoms into a larger one.
Natural occurrence of the process: Fission does not normally occur in nature. Fusion occurs in stars, such as the sun.
Byproducts of the reaction: Fission produces many highly radioactive particles. Few radioactive particles are produced by fusion, but if a fission "trigger" is used, radioactive particles will result from that.
Conditions: Fission requires critical mass of the substance and high-speed neutrons. Fusion requires a high-density, high-temperature environment.
Energy requirement: It takes little energy to split an atom in a fission reaction. Extremely high energy is required to bring two or more protons close enough that nuclear forces overcome their electrostatic repulsion.
Energy released: The energy released by fission is a million times greater than that released in chemical reactions, but lower than the energy released by nuclear fusion. The energy released by fusion is three to four times greater than the energy released by fission.
Nuclear weapon: One class of nuclear weapon is a fission bomb, also known as an atomic bomb or atom bomb. Another class is the hydrogen bomb, which uses a fission reaction to "trigger" a fusion reaction.
Energy production: Fission is used in nuclear power plants. Fusion is an experimental technology for producing power.
Fuel: Uranium is the primary fuel used in fission power plants. Hydrogen isotopes (deuterium and tritium) are the primary fuel used in experimental fusion power plants.

Nuclear fusion is the reaction in which two or more nuclei combine, forming a new element with a higher atomic number (more protons in the nucleus). The energy released in fusion is related to E = mc² (Einstein's famous energy-mass equation). On Earth, the most likely fusion reaction is the deuterium-tritium reaction. Deuterium and tritium are isotopes of hydrogen.

²₁D + ³₁T → ⁴₂He + ¹₀n + 17.6 MeV

Nuclear fission is the splitting of a massive nucleus into smaller nuclei, accompanied by photons in the form of gamma rays, free neutrons, and other subatomic particles. In a typical nuclear reaction involving ²³⁵U and a neutron:

²³⁵₉₂U + ¹₀n → ²³⁶₉₂U
²³⁶₉₂U → ¹⁴⁴₅₆Ba + ⁸⁹₃₆Kr + 3 ¹₀n + 177 MeV

Fission vs. Fusion Physics

Atoms are held together by two of the four fundamental forces of nature: the weak and strong nuclear forces. The total amount of energy held within the bonds of atoms is called binding energy. The more binding energy held within the bonds, the more stable the atom. Moreover, atoms try to become more stable by increasing their binding energy. The nucleus of an iron atom is the most stable nucleus found in nature, and it neither fuses nor splits. This is why iron is at the top of the binding energy curve. For atomic nuclei lighter than iron and nickel, energy can be extracted by combining lighter nuclei together through nuclear fusion. In contrast, for atomic nuclei heavier than iron or nickel, energy can be released by splitting the heavy nuclei through nuclear fission.
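The 17.6 MeV released by the D-T reaction above can be checked against E = mc² via the mass defect. A minimal Python sketch; the isotope masses are standard published values that I am supplying, since the article itself does not list them:

```python
# Energy of D-T fusion from the mass defect, E = (delta m) * c^2, expressed
# in atomic mass units (u) and MeV.
U_TO_MEV = 931.494  # energy equivalent of 1 u in MeV (standard value)

m_deuterium = 2.014102  # u
m_tritium   = 3.016049  # u
m_helium4   = 4.002602  # u
m_neutron   = 1.008665  # u

mass_defect = (m_deuterium + m_tritium) - (m_helium4 + m_neutron)
print(round(mass_defect * U_TO_MEV, 1))  # ~17.6 MeV, matching the reaction above
```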
The notion of splitting the atom arose from New Zealand-born British physicist Ernest Rutherford's work, which also led to the discovery of the proton.

Conditions for Fission and Fusion

Fission can only occur in large isotopes that contain more neutrons than protons in their nuclei, which makes the nucleus only marginally stable. Although scientists don't yet fully understand why this instability is so helpful for fission, the general theory is that the large number of protons creates a strong repulsive force between them and that too few or too many neutrons create "gaps" that weaken the nuclear bond, leading to decay (radiation). These large nuclei with more "gaps" can be "split" by the impact of thermal neutrons, so-called "slow" neutrons. Conditions must be right for a fission reaction to occur. For fission to be self-sustaining, the substance must reach critical mass, the minimum amount of mass required; falling short of critical mass limits reaction length to mere microseconds. If critical mass is reached too quickly, meaning too many neutrons are released in nanoseconds, the reaction becomes purely explosive rather than a controlled, usable release of energy. Nuclear reactors are mostly controlled fission systems that maintain a roughly 1:1 ratio of neutron release, meaning one neutron emerges from the impact of each neutron. Because the actual number varies statistically around this average, neutron-absorbing control rods must be used to slow down or speed up neutron activity and keep the reactor in balance. Fusion happens when two lighter elements are forced together by enormous energy (pressure and heat) until they fuse into another isotope and release energy. The energy needed to start a fusion reaction is so large that it takes an atomic explosion to produce this reaction. Still, once fusion begins, it can theoretically continue to produce energy as long as it is controlled and the basic fusing isotopes are supplied. The form of fusion most pursued for power production is called "D-T fusion," referring to two hydrogen isotopes: deuterium and tritium. Deuterium has one neutron and tritium has two, in addition to the single proton of hydrogen. This makes the fusion process easier, as only the repulsion between the two single protons needs to be overcome: the neutrons add mass without adding positive charge (protons have a positive charge; neutrons have none). Achieving this requires a temperature, for an instant, of close to 81 million degrees Fahrenheit for D-T fusion (45 million kelvin, or slightly less in Celsius). For comparison, the sun's core temperature is roughly 27 million F (15 million C). Once this temperature is reached, the resulting plasma, one of the four states of matter, has to be contained long enough for fusion to proceed. The result of such containment is a release of energy from the D-T reaction, producing helium (a noble gas, inert to every reaction) and spare neutrons that can "seed" hydrogen for more fusion reactions. At present, there are no secure ways to induce the initial fusion temperature or contain the fusing reaction to achieve a steady plasma state, but efforts are ongoing. A third type of reactor is called a breeder reactor. It works by using fission to create plutonium that can seed or serve as fuel for other reactors.
Breeder reactors are used extensively in France, but are prohibitively expensive and require significant security measures, as the output of these reactors can be used for making nuclear weapons as well. Fission and fusion nuclear reactions are chain reactions, meaning that one nuclear event causes at least one other nuclear reaction, and typically more. The result is an increasing cycle of reactions that can quickly become uncontrolled. This type of nuclear reaction can be multiple splits of heavy isotopes (e.g., ²³⁵U) or the merging of light isotopes (e.g., ²H and ³H). Fission chain reactions happen when neutrons bombard unstable isotopes. This type of "impact and scatter" process is difficult to control, but the initial conditions are relatively simple to achieve. A fusion chain reaction develops only under extreme pressure and temperature conditions that are kept stable by the energy released in the fusion process. Both the initial conditions and stabilizing fields are very difficult to achieve with current technology. Fusion reactions release 3-4 times more energy than fission reactions. Although there are no Earth-based fusion systems, the sun's output is typical of fusion energy production in that it constantly converts hydrogen isotopes into helium, emitting spectra of light and heat. Fission generates its energy by breaking down one nuclear force (the strong one) and releasing tremendous amounts of heat that are used to heat water (in a reactor) to then generate energy (electricity). Fusion overcomes two nuclear forces (strong and weak), and the energy released can be used directly to power a generator; so not only is more energy released, it can also be harnessed for more direct application.

Nuclear Energy Use

The first experimental nuclear reactor for energy production began operating in Chalk River, Ontario, in 1947. The first nuclear energy facility in the U.S., the Experimental Breeder Reactor-1, was launched shortly thereafter, in 1951; it could light 4 bulbs. Three years later, in 1954, the U.S. launched its first nuclear submarine, the U.S.S. Nautilus, while the U.S.S.R. launched the world's first nuclear reactor for large-scale power generation, in Obninsk. The U.S. inaugurated its nuclear power production facility a year later, lighting up Arco, Idaho (pop. 1,000). The first commercial facility for energy production using nuclear reactors was the Calder Hall Plant, in Windscale (now Sellafield), Great Britain. It was also the site of the first nuclear-related accident, in 1957, when a fire broke out and caused radiation leaks. The first large-scale U.S. nuclear plant opened in Shippingport, Pennsylvania, in 1957. Between 1956 and 1973, nearly 40 power-production nuclear reactors were launched in the U.S., the largest being Unit One of the Zion Nuclear Power Station in Illinois, with a capacity of 1,155 megawatts. No reactors ordered after 1973 have come online in the U.S., though reactors ordered earlier continued to launch after that year. The French launched their first nuclear reactor, the Phénix, capable of producing 250 megawatts of power, in 1973. The most powerful energy-producing reactor in the U.S. (1,315 MW) opened in 1976, at the Trojan Power Plant in Oregon. By 1977, the U.S. had 63 nuclear plants in operation, providing 3% of the nation's energy needs. Another 70 were scheduled to come online by 1990. In 1979, Unit Two at Three Mile Island suffered a partial meltdown, releasing inert gases (xenon and krypton) into the environment. The anti-nuclear movement gained strength from the fears the incident caused.
Fears were fueled even more in 1986, when Unit 4 at the Chernobyl plant in Ukraine suffered a runaway nuclear reaction that exploded the facility, spreading radioactive material throughout the area and a large part of Europe. During the 1990s, Germany and especially France expanded their nuclear plants, focusing on smaller and thus more controllable reactors. China launched its first 2 nuclear facilities in 2007, producing a total of 1,866 MW. Although nuclear energy ranks third behind coal and hydropower in global wattage produced, the push to close nuclear plants, coupled with the increasing costs to build and operate such facilities, has created a pull-back on the use of nuclear energy for power. France leads the world in percentage of electricity produced by nuclear reactors, but in Germany, solar has overtaken nuclear as an energy producer. The U.S. still has over 60 nuclear facilities in operation, but ballot initiatives and reactor ages have closed plants in Oregon and Washington, while dozens more are targeted by protesters and environmental protection groups. At present, only China appears to be expanding its number of nuclear plants, as it seeks to reduce its heavy dependence on coal (the major factor in its extremely high pollution rate) and to seek an alternative to importing oil. The fear of nuclear energy comes from its extremes, as both a weapon and power source. Fission from a reactor creates waste material that is inherently dangerous (see more below) and could be suitable for dirty bombs. Though several countries, such as Germany and France, have excellent track records with their nuclear facilities, other less positive examples, such as those at Three Mile Island, Chernobyl, and Fukushima, have made many reluctant to accept nuclear energy, even though it is much safer than fossil fuels. Fusion reactors could one day be the affordable, plentiful energy source that is needed, but only if the extreme conditions needed for creating and managing fusion can be achieved. The byproduct of fission is radioactive waste that takes thousands of years to lose its dangerous levels of radiation. This means that nuclear fission reactors must also have safeguards for this waste and its transport to uninhabited storage or dump sites. For more information on this, read about the management of radioactive waste. In nature, fusion occurs in stars, such as the sun. On Earth, nuclear fusion was first achieved in the creation of the hydrogen bomb. Fusion has also been used in different experimental devices, often with the hope of producing energy in a controlled fashion. On the other hand, fission is a nuclear process that does not normally occur in nature, as it requires a large mass and an incident neutron. Even so, there have been examples of nuclear fission in natural reactors. This was discovered in 1972, when uranium deposits from a mine in Oklo, Gabon, were found to have sustained a natural fission reaction some 2 billion years ago. In brief, if a fission reaction gets out of control, either it explodes or the reactor generating it melts down into a large pile of radioactive slag. Such explosions or meltdowns release tons of radioactive particles into the air and any neighboring surface (land or water), contaminating it every minute the reaction continues. In contrast, a fusion reaction that loses control (becomes unbalanced) slows down and drops in temperature until it stops.
This is what happens to stars as they burn their hydrogen into helium and lose these elements over thousands of centuries of expulsion. Fusion produces little radioactive waste. If there is any damage, it will happen to the immediate surroundings of the fusion reactor and little else. It is far safer to use fusion to produce power, but fission is used because it takes less energy to split an atom than it does to fuse two atoms. Also, the technical challenges involved in controlling fusion reactions have not been overcome yet.

Use of Nuclear Weapons

All nuclear weapons require a nuclear fission reaction to work, but "pure" fission bombs, those that use a fission reaction alone, are known as atomic, or atom, bombs. Atom bombs were first tested in New Mexico in 1945, during the height of World War II. In the same year, the United States used them as weapons against Hiroshima and Nagasaki, Japan. Since the atom bomb, most of the nuclear weapons that have been proposed and/or engineered have enhanced fission reaction(s) in one way or another (e.g., see boosted fission weapon, radiological bombs, and neutron bombs). Thermonuclear weaponry, which uses both fission and hydrogen-based fusion, is one of the better-known weapon advancements. Though the notion of a thermonuclear weapon was proposed as early as 1941, it was not until the early 1950s that the hydrogen bomb (H-bomb) was first tested. Unlike atom bombs, hydrogen bombs have not been used in warfare, only tested (e.g., see Tsar Bomba). To date, no nuclear weapon makes use of nuclear fusion alone, though governmental defense programs have put considerable research into such a possibility. Fission is a powerful form of energy production, but it comes with built-in inefficiencies. The nuclear fuel, usually uranium-235, is expensive to mine and purify. The fission reaction creates heat that is used to boil water for steam to turn a turbine that generates electricity. This transformation from heat energy to electrical energy is cumbersome and expensive. A third source of inefficiency is that clean-up and storage of nuclear waste is very expensive. Waste is radioactive, requiring proper disposal, and security must be tight to ensure public safety. For fusion to occur, the atoms must be confined in a magnetic field and raised to a temperature of 100 million kelvin or more. It takes an enormous amount of energy to initiate fusion (atom bombs and lasers are thought to provide that "spark"), and the plasma field must also be properly contained for long-term energy production. Researchers are still trying to overcome these challenges, because fusion is a safer and more powerful energy production system than fission, meaning it would ultimately cost less than fission.
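As a rough illustration of the chain-reaction behaviour described earlier (one nuclear event causing at least one other), here is a toy Python model of neutron multiplication per generation. The model and the multiplication factor k are my own simplification, not anything from the source:

```python
# Toy chain-reaction model: each generation produces k neutrons per neutron.
# k < 1 dies out, k = 1 is the controlled steady state a reactor aims for,
# and k > 1 grows without bound (the runaway case).
def neutron_population(k: float, generations: int, start: float = 1.0) -> list:
    population = [start]
    for _ in range(generations):
        population.append(population[-1] * k)
    return population

print(neutron_population(0.9, 5))  # subcritical: dies out
print(neutron_population(1.0, 5))  # critical: steady
print(neutron_population(1.5, 5))  # supercritical: runaway growth
```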
http://www.diffen.com/difference/Nuclear_Fission_vs_Nuclear_Fusion
4.09375
1. From birth to 19 years of age, children and young people tend to follow a broad developmental plan. Although children and young people are different, the way they grow and develop is often quite similar. This means we can work out a pattern for development, and from this we can pinpoint particular skills or milestones that most children can do at different age ranges. Milestones describe when particular skills are achieved, such as walking, usually achieved by 18 months. These milestones have been drawn up by researchers looking at children's development and working out an average from their recordings. However, as children grow older, the variations between individuals grow larger. This is especially true when it comes to learning skills such as reading or mathematics, but it is also true in terms of their emotional maturity, which makes it harder to draw up a pattern of development.

Babies at Birth

Most babies are born around the 40th week of pregnancy. Only 3% of babies arrive exactly on time. Some are a week early or a week late. Babies who are born earlier than the 37th week are known as premature. Premature babies are likely to need more time to reach the same developmental targets as babies born around the 40th week. Many people think that babies are helpless when they are born, but in reality they are born with the ability to do quite a few things. They can recognise their mother's voice and smell. They are able to cry to let everyone know they need help. They also actively learn about their new world through their senses, particularly touch, taste and sound. These are things you may expect to observe in a newborn:

Reflexes: babies are born with many reflexes, which are actions they do without thinking. Many reflexes are linked to survival. Here are some examples of these reflexes:
• Swallowing and sucking reflexes: these ensure that the baby can feed and swallow milk.
• Rooting reflex: the baby will move its head to look for a nipple or teat.
• Grasp reflex: the baby will automatically put its fingers around an object that has touched the palm of its hand.
• Startle reflex: when babies hear a sudden sound or see a bright light, they will react by moving their arms outwards and clenching their fists.
• Walking and standing reflex: when babies are held upright with their feet on a firm surface, they usually make stepping movements.
• Falling reflex: this is known as the 'Moro reflex'. Babies will stretch out their arms suddenly and then clasp inwards in any situation in which they feel as if they are falling.

Communication and Intellectual Development: Babies at birth cry in order to communicate their needs. They also begin to look around and react to sounds.

Social, Emotional and Behavioural Development: Babies and their primary carers, usually their mothers, begin to develop a strong, close bond from very early on. At times you might see the baby stare at the mother, while the mother is very aware of her baby.

Babies at One Month

In a short month, babies have changed already. They may appear less curled up and more relaxed. Babies at one month have usually settled into a pattern. They sleep quite a lot of the time, but will gradually start to spend longer periods awake. They cry to communicate their needs, and their parents may be starting to understand the different types of cries. Babies too are learning about their parents or carers. They may stop crying when they hear soothing voices. They also try hard to focus on the face of whoever is holding them.
These are things you may expect to observe in a baby at one month: Some reflexes are not as strong as at birth. Communication and Intellectual Development...
http://www.studymode.com/essays/Main-Stages-Of-Child-Development-From-796019.html
4.3125
Registers could potentially be the most important part of a computer. A register temporarily stores a value during the operation of a computer. The 8-bit computer described in this Instructable has two registers attached to its ALU, a register to store the current instruction and a register for the output of the computer. Depending on the chip, a register will have 2 or 3 control pins. The registers that we will be using have two control pins: output enable and input enable (both active when low). When the output enable pin is connected to ground, the currently stored binary word is sent out across the output pins. When the input enable pin is connected to ground, the binary word present on the input pins is loaded into the register. An example of the use of a register on a computer is the accumulator on the ALU (arithmetic logic unit, which performs mathematical operations). The accumulator is like the scratchpad for the computer that stores the output of the ALU. The accumulator is also the first input for the ALU. The B register is the second input. For an addition operation, the first value is loaded into the accumulator. After that, the second value to be added to the first value is loaded into the B register. The outputs of the accumulator and B register are permanently enabled and constantly feed into the ALU. The final step for addition is to transfer the output of the operation into the accumulator. Registers all operate on a shared data line called the bus. The bus is a group of wires equal in number to the bit width of the CPU's architecture. This is really putting the cart before the horse, considering bus width is the defining measurement for CPU architecture. Since a digital 1 means positive voltage and a 0 means grounding, it would be impossible to have all registers share the same bus without giving them the ability to selectively connect and disconnect from the bus. Luckily for us, there is a third state besides 1 and 0, a high-impedance state that is indifferent to its input, and it works great for this. Enter the tri-state buffer: a chip that allows you to selectively connect groups of wires to a bus. Using some of these tri-state buffers, you can have every register and chip on the entire computer that needs to communicate share the same wires as a bus. In the case of my computer, it was an 8-wire wide band of breadboard slots that spanned the bottom pins of the breadboard. Experiment around with buses; since they carry all of the information from piece to piece in the computer, a faulty bus could mean erroneous data that ripples down the line. (A software sketch of this register-and-bus behaviour follows the parts list below.) The great thing about building an 8-bit computer is that most parts will cost you less than a dollar apiece if you buy them from the correct place. I purchased 90% of my parts from Jameco Electronics and I have been completely satisfied with their services. The only parts I have really bought from anywhere else are the breadboards and breadboard wires (and the Numitron tubes). These can be found considerably cheaper on sites like Amazon. Always make sure that the parts you are ordering are the correct ones. Every part that you buy should have a datasheet available online that explains all of the functions and limitations of the item that you are buying. Make sure to keep these organized, as you will be using many datasheets in the construction of your computer.
To help you with your computer I will list the parts that I used for mine:
4-bit counter: 74161 - http://www.jameco.com/webapp/wcs/stores/servlet/ProductDisplay?freeText=74161&langId=-1&storeId=10001&productId=49664&search_type=jamecoall&catalogId=10001&ddkey=http:StoreCatalogDrillDownView
4-bit register (I use two for each 8-bit register): 74LS173 - http://www.jameco.com/webapp/wcs/stores/servlet/ProductDisplay?freeText=74LS173&langId=-1&storeId=10001&productId=46922&search_type=jamecoall&catalogId=10001&ddkey=http:StoreCatalogDrillDownView
2-to-1 multiplexer: 74LS157 - http://www.jameco.com/webapp/wcs/stores/servlet/Product_10001_10001_46771_-1
16x8 RAM (output needs to be inverted): 74189 - http://www.jameco.com/webapp/wcs/stores/servlet/ProductDisplay?freeText=74189&langId=-1&storeId=10001&productId=49883&search_type=jamecoall&catalogId=10001&ddkey=http:StoreCatalogDrillDownView
4-bit full adder: 74LS283 - http://www.jameco.com/webapp/wcs/stores/servlet/ProductDisplay?freeText=74LS283&langId=-1&storeId=10001&productId=47423&search_type=all&catalogId=10001&ddkey=http:StoreCatalogDrillDownView
Tri-state buffer: 74S244 - http://www.jameco.com/webapp/wcs/stores/servlet/Product_10001_10001_910750_-1
XOR gate: 74LS86 - http://www.jameco.com/webapp/wcs/stores/servlet/Product_10001_10001_295751_-1
AND gate: 74LS08 - http://www.jameco.com/webapp/wcs/stores/servlet/Product_10001_10001_295401_-1
NOR gate: 74LS02 - http://www.jameco.com/webapp/wcs/stores/servlet/Product_10001_10001_283741_-1
Inverter: 74LS04 - http://www.jameco.com/webapp/wcs/stores/servlet/Product_10001_10001_283792_-1
Up/down counter: CD4029 - http://www.jameco.com/webapp/wcs/stores/servlet/ProductDisplay?freeText=4029&langId=-1&storeId=10001&productId=12925&search_type=jamecoall&catalogId=10001&ddkey=http:StoreCatalogDrillDownView
NAND gate: 74LS10 - http://www.jameco.com/webapp/wcs/stores/servlet/Product_10001_10001_295427_-1
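To make the register behaviour described above concrete, here is a small software sketch of an 8-bit register with active-low input and output enables sharing a bus. It is an illustration in Python only, not a substitute for the 74LS173 datasheet:

```python
# Hedged sketch of a register on a shared bus. Both control inputs are
# active-low, matching the text: pull input_enable low to latch the bus
# value, pull output_enable low to drive the stored word onto the bus.
class Register:
    def __init__(self, width=8):
        self.width = width
        self.value = 0

    def clock(self, bus, input_enable, output_enable):
        """Latch from and/or drive the bus; returns the driven value, or
        None when the tri-state outputs are disconnected (high impedance)."""
        if not input_enable:                       # active low: load from bus
            self.value = bus & ((1 << self.width) - 1)
        if not output_enable:                      # active low: drive the bus
            return self.value
        return None                                # high-Z: off the bus

# Example: load 0b00101010 into the accumulator, then read it back.
acc = Register()
acc.clock(bus=0b00101010, input_enable=False, output_enable=True)
print(bin(acc.clock(bus=0, input_enable=True, output_enable=False)))  # 0b101010
```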
http://www.instructables.com/id/How-to-Build-an-8-Bit-Computer/?ALLSTEPS
4.46875
Reptiles originated approximately 300 million years ago during the Carboniferous period. One of the oldest-known amniotes is Casineria, which had both amphibian and reptilian characteristics. One of the earliest undisputed reptiles was Hylonomus. Soon after the first amniotes appeared, they diverged into three groups (synapsids, anapsids, and diapsids) during the Permian period. The Permian period also saw a second major divergence of diapsid reptiles into archosaurs (predecessors of crocodilians and dinosaurs) and lepidosaurs (predecessors of snakes and lizards). These groups remained inconspicuous until the Triassic period, when the archosaurs became the dominant terrestrial group due to the extinction of large-bodied anapsids and synapsids during the Permian-Triassic extinction. About 250 million years ago, archosaurs radiated into the dinosaurs and the pterosaurs. Although they are sometimes mistakenly called dinosaurs, the pterosaurs were distinct from true dinosaurs. Pterosaurs had a number of adaptations that allowed for flight, including hollow bones (birds also exhibit hollow bones, a case of convergent evolution). Their wings were formed by membranes of skin that attached to the long, fourth finger of each arm and extended along the body to the legs. The dinosaurs were a diverse group of terrestrial reptiles with more than 1,000 species identified to date. Paleontologists continue to discover new species of dinosaurs. Some dinosaurs were quadrupeds; others were bipeds. Some were carnivorous, whereas others were herbivorous. Dinosaurs laid eggs; a number of nests containing fossilized eggs have been found. It is not known whether dinosaurs were endotherms or ectotherms. However, given that modern birds are endothermic, the dinosaurs that served as ancestors to birds were probably endothermic as well. Some fossil evidence exists for dinosaurian parental care, and comparative biology supports this hypothesis, since the archosaur birds and crocodilians display parental care. Dinosaurs dominated the Mesozoic Era, which was known as the "Age of Reptiles." The dominance of dinosaurs lasted until the end of the Cretaceous period, the end of the Mesozoic Era. The Cretaceous-Tertiary extinction resulted in the loss of most of the large-bodied animals of the Mesozoic Era. Birds are the only living descendants of one of the major clades of dinosaurs.
https://www.boundless.com/biology/textbooks/boundless-biology-textbook/vertebrates-29/reptiles-174/evolution-of-reptiles-673-11895/
4.75
After the sun spun to light, the planets of the solar system began to form. But it took another hundred million years for Earth's moon to spring into existence. There are three theories as to how our planet's satellite could have been created: the giant impact hypothesis, the co-formation theory and the capture theory.

Giant impact hypothesis

This is the prevailing theory supported by the scientific community. Like the other planets, the Earth formed from the leftover cloud of dust and gas orbiting the young sun. The early solar system was a violent place, and a number of bodies were created that never made it to full planetary status. According to the giant impact hypothesis, one of these crashed into Earth not long after the young planet was created.

Known as Theia, the Mars-size body collided with Earth, throwing vaporized chunks of the young planet's crust into space. Gravity bound the ejected particles together, creating a moon that is the largest in the solar system in relation to its host planet. This sort of formation would explain why the moon is made up predominantly of lighter elements, making it less dense than Earth — the material that formed it came from the crust, while leaving the planet's rocky core untouched. As the material drew together around what was left of Theia's core, it would have centered near Earth's ecliptic plane, the path the sun travels through the sky, which is where the moon orbits today.

Co-formation theory

Moons can also form at the same time as their parent planet. Under such an explanation, gravity would have caused material in the early solar system to draw together at the same time as gravity bound particles together to form Earth. Such a moon would have a very similar composition to the planet, and would explain the moon's present location. However, although Earth and the moon share much of the same material, the moon is much less dense than our planet, which would likely not be the case if both started with the same heavy elements at their core.

Capture theory

Perhaps Earth's gravity snagged a passing body, as happened with other moons in the solar system, such as the Martian moons of Phobos and Deimos. Under the capture theory, a rocky body formed elsewhere in the solar system could have been drawn into orbit around the Earth. The capture theory would explain the differences in the composition of the Earth and its moon. However, such orbiters are often oddly shaped, rather than being spherical bodies like the moon. Their paths don't tend to line up with the ecliptic of their parent planet, also unlike the moon.

Although the co-formation theory and the capture theory both explain some elements of the existence of the moon, they leave many questions unanswered. At present, the giant impact hypothesis seems to cover many of these questions, making it the best model to fit the scientific evidence for how the moon was created.
http://www.space.com/19275-moon-formation.html
4.09375
The Rankine cycle is the fundamental operating cycle of all power plants where an operating fluid is continuously evaporated and condensed. The selection of operating fluid depends mainly on the available temperature range. Figure 1 shows the idealized Rankine cycle. The pressure-enthalpy (p-h) and temperature-entropy (T-s) diagrams of this cycle are given in Figure 2. The Rankine cycle operates in the following steps:

1-2-3 Isobaric Heat Transfer. High pressure liquid enters the boiler from the feed pump (1) and is heated to the saturation temperature (2). Further addition of energy causes evaporation of the liquid until it is fully converted to saturated steam (3).

3-4 Isentropic Expansion. The vapor is expanded in the turbine, thus producing work which may be converted to electricity. In practice, the expansion is limited by the temperature of the cooling medium and by the erosion of the turbine blades by liquid entrainment in the vapor stream as the process moves further into the two-phase region. Exit vapor qualities should be greater than 90%.

4-5 Isobaric Heat Rejection. The vapor-liquid mixture leaving the turbine (4) is condensed at low pressure, usually in a surface condenser using cooling water. In well designed and maintained condensers, the pressure of the vapor is well below atmospheric pressure, approaching the saturation pressure of the operating fluid at the cooling water temperature.

5-1 Isentropic Compression. The pressure of the condensate is raised in the feed pump. Because of the low specific volume of liquids, the pump work is relatively small and often neglected in thermodynamic calculations.

The efficiency of power cycles is defined as

$$\eta = \frac{W_{turbine} - W_{pump}}{Q_{in}}$$

where $Q_{in}$ is the heat added in the boiler. Values of heat and work can be determined by applying the First Law of Thermodynamics to each step. The steam quality $x$ at the turbine outlet is determined from the assumption of isentropic expansion, i.e.,

$$s_3 = x\,s_4'' + (1 - x)\,s_4'$$

where $s_4''$ is the entropy of vapor and $s_4'$ the entropy of liquid.

The efficiency of the ideal Rankine cycle as described in the previous section is close to the Carnot efficiency (see Carnot Cycle). In real plants, each stage of the Rankine cycle is associated with irreversible processes, reducing the overall efficiency. Turbine and pump irreversibilities can be included in the calculation of the overall cycle efficiency by defining a turbine efficiency according to Figure 3,

$$\eta_t = \frac{h_3 - h_{4,act}}{h_3 - h_{4,is}}$$

where subscript act indicates actual values and subscript is indicates isentropic values, and a pump efficiency

$$\eta_p = \frac{h_{1,is} - h_5}{h_{1,act} - h_5}$$

If $\eta_t$ and $\eta_p$ are known, the actual enthalpy after the compression and expansion steps can be determined from the values for the isentropic processes. The turbine efficiency directly reduces the work produced in the turbine and, therefore, the overall efficiency. The inefficiency of the pump increases the enthalpy of the liquid leaving the pump and, therefore, reduces the amount of energy required to evaporate the liquid. However, the energy to drive the pump is usually more expensive than the energy to feed the boiler.

Even the most sophisticated boilers transform only 40% of the fuel energy into useable steam energy. There are two main reasons for this wastage:

- The combustion gas temperatures are between 1000°C and 2000°C, which is considerably higher than the highest vapor temperatures. The transfer of heat across a large temperature difference increases the entropy.
- Combustion (oxidation) at technically feasible temperatures is highly irreversible.
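To make the definitions above concrete, here is a short numerical sketch (mine, not the article's). The state values are rough textbook-style figures for a steam cycle between about 8 MPa and 10 kPa, standing in for proper steam-table lookups.

```python
# Ideal Rankine cycle, state numbering as in the article:
# 1 pump exit, 3 turbine inlet (saturated vapor), 4 turbine exit,
# 5 condenser exit (saturated liquid). Enthalpies in kJ/kg,
# entropies in kJ/(kg K). Values are illustrative, not from the article.

h3, s3 = 2758.0, 5.7432   # saturated vapor leaving the boiler (~8 MPa)
sf, sg = 0.6493, 8.1502   # saturated liquid/vapor entropy at ~10 kPa
hf, hfg = 191.8, 2392.8   # saturated liquid enthalpy and h_fg at ~10 kPa
w_pump = 8.1              # pump work, roughly v * delta-p

# Isentropic expansion: s4 = s3 = x*s'' + (1 - x)*s'  =>  solve for quality x
x = (s3 - sf) / (sg - sf)
h4 = hf + x * hfg         # enthalpy of the vapor-liquid mix at turbine exit

h1 = hf + w_pump          # liquid leaving the feed pump
q_in = h3 - h1            # isobaric heat added in the boiler
w_turbine = h3 - h4       # isentropic turbine work

eta = (w_turbine - w_pump) / q_in
print(f"exit quality x = {x:.3f}, cycle efficiency = {eta:.3f}")
# -> x ~ 0.679, eta ~ 0.365. The exit quality is far below the 90% target
#    mentioned above, which is one motivation for superheating.
```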
Since the heat transfer surface in the condenser has a finite value, the condensation will occur at a temperature higher than the temperature of the cooling medium. Again, heat transfer occurs across a temperature difference, causing the generation of entropy. The deposition of dirt in condensers during operation with cooling water reduces the efficiency.

The net work produced in the Rankine cycle is represented by the area of the cycle process in Figure 2. Obviously, this area can be increased by increasing the pressure in the boiler and reducing the pressure in the condenser.

The irreversibility of any process is reduced if it is performed as close as possible to the temperatures of the high temperature and low temperature reservoirs. This is achieved by operating the condenser at subatmospheric pressure. The temperature in the boiler is limited by the saturation pressure. Further increase in temperature is possible by superheating the saturated vapor, see Figure 4. This has the additional advantage that the vapor quality after the turbine is increased and, therefore, the erosion of the turbine blades is reduced. It is quite common to reheat the vapor after expansion in the high pressure turbine and expand the reheated vapor in a second, low pressure turbine.

The cold liquid leaving the feed pump is mixed with the saturated liquid in the boiler and/or re-heated to the boiling temperature. The resulting irreversibility reduces the efficiency of the boiler. According to the Carnot process, the highest efficiency is reached if heat transfer occurs isothermally. To preheat the feed liquid to its saturation temperature, bleed vapor from various positions of the turbine is passed through external heat exchangers (regenerators), as shown in Figure 5. Ideally, the temperature of the bleed steam should be as close as possible to the temperature of the feed liquid.

The high combustion temperature of the fuel is better utilized if a gas turbine or Brayton engine is used as "topping cycle" in conjunction with a Rankine cycle. In this case, the hot gas leaving the turbine is used to provide the energy input to the boiler. In co-generation systems, the energy rejected by the Rankine cycle is used for space heating, process steam or other low temperature applications.
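Following the article's definitions of $\eta_t$ and $\eta_p$, a small extension of the ideal-cycle sketch earlier (again illustrative, with assumed component efficiencies) shows how turbine and pump irreversibilities pull the overall efficiency down:

```python
# Non-ideal cycle: apply assumed isentropic efficiencies to the ideal
# results from the previous sketch (values repeated here so this runs
# on its own). Enthalpies in kJ/kg.

h3, h5 = 2758.0, 191.8        # turbine inlet, condenser exit
w_turbine_is = 941.3          # ideal turbine work from the previous sketch
w_pump_is = 8.1               # ideal pump work
eta_t, eta_p = 0.85, 0.80     # assumed turbine and pump efficiencies

w_turbine = eta_t * w_turbine_is    # eta_t = w_act / w_is for the turbine
w_pump = w_pump_is / eta_p          # eta_p = w_is / w_act for the pump

h1 = h5 + w_pump     # the inefficient pump leaves the feed liquid warmer,
q_in = h3 - h1       # so slightly less boiler heat is needed, as noted above

eta = (w_turbine - w_pump) / q_in
print(f"actual cycle efficiency = {eta:.3f}")   # ~0.309 vs ~0.365 ideal
```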
http://www.thermopedia.com/content/1072/
4.09375
A black American writer, J. Saunders Redding, describes the arrival of a ship in North America in the year 1619:

Sails furled, flag drooping at her rounded stern, she rode the tide in from the sea. She was a strange ship, indeed, by all accounts, a frightening ship, a ship of mystery. Whether she was trader, privateer, or man-of-war no one knows. Through her bulwarks black-mouthed cannon yawned. The flag she flew was Dutch; her crew a motley. Her port of call, an English settlement, Jamestown, in the colony of Virginia. She came, she traded, and shortly afterwards was gone. Probably no ship in modern history has carried a more portentous freight. Her cargo? Twenty slaves.

There is not a country in world history in which racism has been more important, for so long a time, as the United States. And the problem of "the color line," as W. E. B. Du Bois put it, is still with us. So it is more than a purely historical question to ask: How does it start?—and an even more urgent question: How might it end? Or, to put it differently: Is it possible for whites and blacks to live together without hatred?

If history can help answer these questions, then the beginnings of slavery in North America—a continent where we can trace the coming of the first whites and the first blacks—might supply at least a few clues.

Some historians think those first blacks in Virginia were considered as servants, like the white indentured servants brought from Europe. But the strong probability is that, even if they were listed as "servants" (a more familiar category to the English), they were viewed as being different from white servants, were treated differently, and in fact were slaves. In any case, slavery developed quickly into a regular institution, into the normal labor relation of blacks to whites in the New World. With it developed that special racial feeling—whether hatred, or contempt, or pity, or patronization—that accompanied the inferior position of blacks in America for the next 350 years—that combination of inferior status and derogatory thought we call racism.

Everything in the experience of the first white settlers acted as a pressure for the enslavement of blacks. The Virginians of 1619 were desperate for labor, to grow enough food to stay alive. Among them were survivors from the winter of 1609-1610, the "starving time," when, crazed for want of food, they roamed the woods for nuts and berries, dug up graves to eat the corpses, and died in batches until five hundred colonists were reduced to sixty.

In the Journals of the House of Burgesses of Virginia is a document of 1619 which tells of the first twelve years of the Jamestown colony. The first settlement had a hundred persons, who had one small ladle of barley per meal. When more people arrived, there was even less food. Many of the people lived in cavelike holes dug into the ground, and in the winter of 1609-1610, they were

...driven through insufferable hunger to eat those things which nature most abhorred, the flesh and excrements of man as well of our own nation as of an Indian, digged by some out of his grave after he had laid buried three days and wholly devoured him; others, envying the better state of body of any whom hunger has not yet so much wasted as their own, lay wait and threatened to kill and eat them; one among them slew his wife as she slept in his bosom, cut her in pieces, salted her and fed upon her till he had clean devoured all parts saving her head...
A petition by thirty colonists to the House of Burgesses, complaining against the twelve-year governorship of Sir Thomas Smith, said:

In those 12 years of Sir Thomas Smith, his government, we aver that the colony for the most part remained in great want and misery under most severe and cruel laws... The allowance in those times for a man was only eight ounces of meale and half a pint of peas for a day... mouldy, rotten, full of cobwebs and maggots, loathsome to man and not fit for beasts, which forced many to flee for relief to the savage enemy, who being taken again were put to sundry deaths as by hanging, shooting and breaking upon the wheel... of whom one for stealing two or three pints of oatmeal had a bodkin thrust through his tongue and was tied with a chain to a tree until he starved...

The Virginians needed labor, to grow corn for subsistence, to grow tobacco for export. They had just figured out how to grow tobacco, and in 1617 they sent off the first cargo to England. Finding that, like all pleasurable drugs tainted with moral disapproval, it brought a high price, the planters, despite their high religious talk, were not going to ask questions about something so profitable.

They couldn't force the Indians to work for them, as Columbus had done. They were outnumbered, and while, with superior firearms, they could massacre Indians, they would face massacre in return. They could not capture them and keep them enslaved; the Indians were tough, resourceful, defiant, and at home in these woods, as the transplanted Englishmen were not.

White servants had not yet been brought over in sufficient quantity. Besides, they did not come out of slavery, and did not have to do more than contract their labor for a few years to get their passage and a start in the New World. As for the free white settlers, many of them were skilled craftsmen, or even men of leisure back in England, who were so little inclined to work the land that John Smith, in those early years, had to declare a kind of martial law, organize them into work gangs, and force them into the fields for survival.

There may have been a kind of frustrated rage at their own ineptitude, at the Indian superiority at taking care of themselves, that made the Virginians especially ready to become the masters of slaves. Edmund Morgan imagines their mood as he writes in his book American Slavery, American Freedom:

If you were a colonist, you knew that your technology was superior to the Indians'. You knew that you were civilized, and they were savages... But your superior technology had proved insufficient to extract anything. The Indians, keeping to themselves, laughed at your superior methods and lived from the land more abundantly and with less labor than you did... And when your own people started deserting in order to live with them, it was too much... So you killed the Indians, tortured them, burned their villages, burned their cornfields. It proved your superiority, in spite of your failures. And you gave similar treatment to any of your own people who succumbed to their savage ways of life. But you still did not grow much corn...

Black slaves were the answer. And it was natural to consider imported blacks as slaves, even if the institution of slavery would not be regularized and legalized for several decades. Because, by 1619, a million blacks had already been brought from Africa to South America and the Caribbean, to the Portuguese and Spanish colonies, to work as slaves.
Fifty years before Columbus, the Portuguese took ten African blacks to Lisbon—this was the start of a regular trade in slaves. African blacks had been stamped as slave labor for a hundred years. So it would have been strange if those twenty blacks, forcibly transported to Jamestown, and sold as objects to settlers anxious for a steadfast source of labor, were considered as anything but slaves.

Their helplessness made enslavement easier. The Indians were on their own land. The whites were in their own European culture. The blacks had been torn from their land and culture, forced into a situation where the heritage of language, dress, custom, family relations, was bit by bit obliterated except for remnants that blacks could hold on to by sheer, extraordinary persistence.

Was their culture inferior—and so subject to easy destruction? Inferior in military capability, yes—vulnerable to whites with guns and ships. But in no other way—except that cultures that are different are often taken as inferior, especially when such a judgment is practical and profitable. Even militarily, while the Westerners could secure forts on the African coast, they were unable to subdue the interior and had to come to terms with its chiefs.

The African civilization was as advanced in its own way as that of Europe. In certain ways, it was more admirable; but it also included cruelties, hierarchical privilege, and the readiness to sacrifice human lives for religion or profit. It was a civilization of 100 million people, using iron implements and skilled in farming. It had large urban centers and remarkable achievements in weaving, ceramics, sculpture.

European travelers in the sixteenth century were impressed with the African kingdoms of Timbuktu and Mali, already stable and organized at a time when European states were just beginning to develop into the modern nation. In 1563, Ramusio, secretary to the rulers in Venice, wrote to the Italian merchants: "Let them go and do business with the King of Timbuktu and Mali and there is no doubt that they will be well-received there with their ships and their goods and treated well, and granted the favours that they ask..."

A Dutch report, around 1602, on the West African kingdom of Benin, said: "The Towne seemeth to be very great, when you enter it. You go into a great broad street, not paved, which seemeth to be seven or eight times broader than the Warmoes Street in Amsterdam. ...The Houses in this Towne stand in good order, one close and even with the other, as the Houses in Holland stand."

The inhabitants of the Guinea Coast were described by one traveler around 1680 as "very civil and good-natured people, easy to be dealt with, condescending to what Europeans require of them in a civil way, and very ready to return double the presents we make them."

Africa had a kind of feudalism, like Europe based on agriculture, and with hierarchies of lords and vassals. But African feudalism did not come, as did Europe's, out of the slave societies of Greece and Rome, which had destroyed ancient tribal life. In Africa, tribal life was still powerful, and some of its better features—a communal spirit, more kindness in law and punishment—still existed. And because the lords did not have the weapons that European lords had, they could not command obedience as easily.

In his book The African Slave Trade, Basil Davidson contrasts law in the Congo in the early sixteenth century with law in Portugal and England.
In those European countries, where the idea of private property was becoming powerful, theft was punished brutally. In England, even as late as 1740, a child could be hanged for stealing a rag of cotton. But in the Congo, communal life persisted, the idea of private property was a strange one, and thefts were punished with fines or various degrees of servitude. A Congolese leader, told of the Portuguese legal codes, asked a Portuguese once, teasingly: "What is the penalty in Portugal for anyone who puts his feet on the ground?"

Slavery existed in the African states, and it was sometimes used by Europeans to justify their own slave trade. But, as Davidson points out, the "slaves" of Africa were more like the serfs of Europe—in other words, like most of the population of Europe. It was a harsh servitude, but they had rights which slaves brought to America did not have, and they were "altogether different from the human cattle of the slave ships and the American plantations." In the Ashanti Kingdom of West Africa, one observer noted that "a slave might marry; own property; himself own a slave; swear an oath; be a competent witness and ultimately become heir to his master... An Ashanti slave, nine cases out of ten, possibly became an adopted member of the family, and in time his descendants so merged and intermarried with the owner's kinsmen that only a few would know their origin."

One slave trader, John Newton (who later became an antislavery leader), wrote about the people of what is now Sierra Leone:

The state of slavery, among these wild barbarous people, as we esteem them, is much milder than in our colonies. For as, on the one hand, they have no land in high cultivation, like our West India plantations, and therefore no call for that excessive, unintermitted labour, which exhausts our slaves: so, on the other hand, no man is permitted to draw blood even from a slave.

African slavery is hardly to be praised. But it was far different from plantation or mining slavery in the Americas, which was lifelong, morally crippling, destructive of family ties, without hope of any future. African slavery lacked two elements that made American slavery the most cruel form of slavery in history: the frenzy for limitless profit that comes from capitalistic agriculture; the reduction of the slave to less than human status by the use of racial hatred, with that relentless clarity based on color, where white was master, black was slave.

In fact, it was because they came from a settled culture, of tribal customs and family ties, of communal life and traditional ritual, that African blacks found themselves especially helpless when removed from this. They were captured in the interior (frequently by blacks caught up in the slave trade themselves), sold on the coast, then shoved into pens with blacks of other tribes, often speaking different languages. The conditions of capture and sale were crushing affirmations to the black African of his helplessness in the face of superior force. The marches to the coast, sometimes for 1,000 miles, with people shackled around the neck, under whip and gun, were death marches, in which two of every five blacks died.

On the coast, they were kept in cages until they were picked and sold. One John Barbot, at the end of the seventeenth century, described these cages on the Gold Coast:

As the slaves come down to Fida from the inland country, they are put into a booth or prison...
near the beach, and when the Europeans are to receive them, they are brought out onto a large plain, where the ship's surgeons examine every part of everyone of them, to the smallest member, men and women being stark naked... Such as are allowed good and sound are set on one side... marked on the breast with a red-hot iron, imprinting the mark of the French, English or Dutch companies... The branded slaves after this are returned to their former booths where they await shipment, sometimes 10-15 days...

Then they were packed aboard the slave ships, in spaces not much bigger than coffins, chained together in the dark, wet slime of the ship's bottom, choking in the stench of their own excrement. Documents of the time describe the conditions:

The height, sometimes, between decks, was only eighteen inches; so that the unfortunate human beings could not turn around, or even on their sides, the elevation being less than the breadth of their shoulders; and here they are usually chained to the decks by the neck and legs. In such a place the sense of misery and suffocation is so great, that the Negroes... are driven to frenzy.

On one occasion, hearing a great noise from belowdecks where the blacks were chained together, the sailors opened the hatches and found the slaves in different stages of suffocation, many dead, some having killed others in desperate attempts to breathe. Slaves often jumped overboard to drown rather than continue their suffering. To one observer a slave-deck was "so covered with blood and mucus that it resembled a slaughter house."

Under these conditions, perhaps one of every three blacks transported overseas died, but the huge profits (often double the investment on one trip) made it worthwhile for the slave trader, and so the blacks were packed into the holds like fish.

First the Dutch, then the English, dominated the slave trade. (By 1795 Liverpool had more than a hundred ships carrying slaves and accounted for half of all the European slave trade.) Some Americans in New England entered the business, and in 1637 the first American slave ship, the Desire, sailed from Marblehead. Its holds were partitioned into racks, 2 feet by 6 feet, with leg irons and bars.

By 1800, 10 to 15 million blacks had been transported as slaves to the Americas, representing perhaps one-third of those originally seized in Africa. It is roughly estimated that Africa lost 50 million human beings to death and slavery in those centuries we call the beginnings of modern Western civilization, at the hands of slave traders and plantation owners in Western Europe and America, the countries deemed the most advanced in the world.

In the year 1610, a Catholic priest in the Americas named Father Sandoval wrote back to a church functionary in Europe to ask if the capture, transport, and enslavement of African blacks was legal by church doctrine. A letter dated March 12, 1610, from Brother Luis Brandaon to Father Sandoval gives the answer:

Your Reverence writes me that you would like to know whether the Negroes who are sent to your parts have been legally captured. To this I reply that I think your Reverence should have no scruples on this point, because this is a matter which has been questioned by the Board of Conscience in Lisbon, and all its members are learned and conscientious men. Nor did the bishops who were in Sao Thome, Cape Verde, and here in Loando—all learned and virtuous men—find fault with it. We have been here ourselves for forty years and there have been among us very learned Fathers...
never did they consider the trade as illicit. Therefore we and the Fathers of Brazil buy these slaves for our service without any scruple...

With all of this—the desperation of the Jamestown settlers for labor, the impossibility of using Indians and the difficulty of using whites, the availability of blacks offered in greater and greater numbers by profit-seeking dealers in human flesh, and with such blacks possible to control because they had just gone through an ordeal which if it did not kill them must have left them in a state of psychic and physical helplessness—is it any wonder that such blacks were ripe for enslavement?

And under these conditions, even if some blacks might have been considered servants, would blacks be treated the same as white servants?

The evidence, from the court records of colonial Virginia, shows that in 1630 a white man named Hugh Davis was ordered "to be soundly whipt... for abusing himself... by defiling his body in lying with a Negro." Ten years later, six servants and "a negro of Mr. Reynolds" started to run away. While the whites received lighter sentences, "Emanuel the Negro to receive thirty stripes and to be burnt in the cheek with the letter R, and to work in shackle one year or more as his master shall see cause."

Although slavery was not yet regularized or legalized in those first years, the lists of servants show blacks listed separately. A law passed in 1639 decreed that "all persons except Negroes" were to get arms and ammunition—probably to fight off Indians. When in 1640 three servants tried to run away, the two whites were punished with a lengthening of their service. But, as the court put it, "the third being a negro named John Punch shall serve his master or his assigns for the time of his natural life." Also in 1640, we have the case of a Negro woman servant who begot a child by Robert Sweat, a white man. The court ruled "that the said negro woman shall be whipt at the whipping post and the said Sweat shall tomorrow in the forenoon do public penance for his offense at James city church..."

This unequal treatment, this developing combination of contempt and oppression, feeling and action, which we call "racism"—was this the result of a "natural" antipathy of white against black? The question is important, not just as a matter of historical accuracy, but because any emphasis on "natural" racism lightens the responsibility of the social system. If racism can't be shown to be natural, then it is the result of certain conditions, and we are impelled to eliminate those conditions.

We have no way of testing the behavior of whites and blacks toward one another under favorable conditions—with no history of subordination, no money incentive for exploitation and enslavement, no desperation for survival requiring forced labor. All the conditions for black and white in seventeenth-century America were the opposite of that, all powerfully directed toward antagonism and mistreatment. Under such conditions even the slightest display of humanity between the races might be considered evidence of a basic human drive toward community.

Sometimes it is noted that, even before 1600, when the slave trade had just begun, before Africans were stamped by it—literally and symbolically—the color black was distasteful. In England, before 1600, it meant, according to the Oxford English Dictionary: "Deeply stained with dirt; soiled, dirty, foul. Having dark or deadly purposes, malignant; pertaining to or involving death, deadly; baneful, disastrous, sinister.
Foul, iniquitous, atrocious, horribly wicked. Indicating disgrace, censure, liability to punishment, etc." And Elizabethan poetry often used the color white in connection with beauty.

It may be that, in the absence of any other overriding factor, darkness and blackness, associated with night and unknown, would take on those meanings. But the presence of another human being is a powerful fact, and the conditions of that presence are crucial in determining whether an initial prejudice, against a mere color, divorced from humankind, is turned into brutality and hatred.

In spite of such preconceptions about blackness, in spite of special subordination of blacks in the Americas in the seventeenth century, there is evidence that where whites and blacks found themselves with common problems, common work, common enemy in their master, they behaved toward one another as equals. As one scholar of slavery, Kenneth Stampp, has put it, Negro and white servants of the seventeenth century were "remarkably unconcerned about the visible physical differences."

Black and white worked together, fraternized together. The very fact that laws had to be passed after a while to forbid such relations indicates the strength of that tendency. In 1661 a law was passed in Virginia that "in case any English servant shall run away in company of any Negroes" he would have to give special service for extra years to the master of the runaway Negro. In 1691, Virginia provided for the banishment of any "white man or woman being free who shall intermarry with a negro, mulatto, or Indian man or woman bond or free."

There is an enormous difference between a feeling of racial strangeness, perhaps fear, and the mass enslavement of millions of black people that took place in the Americas. The transition from one to the other cannot be explained easily by "natural" tendencies. It is not hard to understand as the outcome of historical conditions.

Slavery grew as the plantation system grew. The reason is easily traceable to something other than natural racial repugnance: the number of arriving whites, whether free or indentured servants (under four to seven years contract), was not enough to meet the need of the plantations. By 1700, in Virginia, there were 6,000 slaves, one-twelfth of the population. By 1763, there were 170,000 slaves, about half the population.

Blacks were easier to enslave than whites or Indians. But they were still not easy to enslave. From the beginning, the imported black men and women resisted their enslavement. Ultimately their resistance was controlled, and slavery was established for 3 million blacks in the South. Still, under the most difficult conditions, under pain of mutilation and death, throughout their two hundred years of enslavement in North America, these Afro-Americans continued to rebel. Only occasionally was there an organized insurrection. More often they showed their refusal to submit by running away. Even more often, they engaged in sabotage, slowdowns, and subtle forms of resistance which asserted, if only to themselves and their brothers and sisters, their dignity as human beings.

The refusal began in Africa. One slave trader reported that Negroes were "so wilful and loth to leave their own country, that they have often leap'd out of the canoes, boat and ship into the sea, and kept under water til they were drowned."
When the very first black slaves were brought into Hispaniola in 1503, the Spanish governor of Hispaniola complained to the Spanish court that fugitive Negro slaves were teaching disobedience to the Indians. In the 1520s and 1530s, there were slave revolts in Hispaniola, Puerto Rico, Santa Marta, and what is now Panama. Shortly after those rebellions, the Spanish established a special police for chasing fugitive slaves.

A Virginia statute of 1669 referred to "the obstinacy of many of them," and in 1680 the Assembly took note of slave meetings "under the pretense of feasts and brawls" which they considered of "dangerous consequence." In 1687, in the colony's Northern Neck, a plot was discovered in which slaves planned to kill all the whites in the area and escape during a mass funeral.

Gerald Mullin, who studied slave resistance in eighteenth-century Virginia in his work Flight and Rebellion, reports:

The available sources on slavery in 18th-century Virginia—plantation and county records, the newspaper advertisements for runaways—describe rebellious slaves and few others. The slaves described were lazy and thieving; they feigned illnesses, destroyed crops, stores, tools, and sometimes attacked or killed overseers. They operated blackmarkets in stolen goods. Runaways were defined as various types, they were truants (who usually returned voluntarily), "outlaws"... and slaves who were actually fugitives: men who visited relatives, went to town to pass as free, or tried to escape slavery completely, either by boarding ships and leaving the colony, or banding together in cooperative efforts to establish villages or hide-outs in the frontier. The commitment of another type of rebellious slave was total; these men became killers, arsonists, and insurrectionists.

Slaves recently from Africa, still holding on to the heritage of their communal society, would run away in groups and try to establish villages of runaways out in the wilderness, on the frontier. Slaves born in America, on the other hand, were more likely to run off alone, and, with the skills they had learned on the plantation, try to pass as free men.

In the colonial papers of England, a 1729 report from the lieutenant governor of Virginia to the British Board of Trade tells how "a number of Negroes, about fifteen... formed a design to withdraw from their Master and to fix themselves in the fastnesses of the neighboring Mountains. They had found means to get into their possession some Arms and Ammunition, and they took along with them some Provisions, their Cloths, bedding and working Tools... Tho' this attempt has happily been defeated, it ought nevertheless to awaken us into some effectual measures..."

Slavery was immensely profitable to some masters. James Madison told a British visitor shortly after the American Revolution that he could make $257 on every Negro in a year, and spend only $12 or $13 on his keep. Another viewpoint was of slaveowner Landon Carter, writing about fifty years earlier, complaining that his slaves so neglected their work and were so uncooperative ("either cannot or will not work") that he began to wonder if keeping them was worthwhile.

Some historians have painted a picture—based on the infrequency of organized rebellions and the ability of the South to maintain slavery for two hundred years—of a slave population made submissive by their condition; with their African heritage destroyed, they were, as Stanley Elkins said, made into "Sambos," "a society of helpless dependents."
Or as another historian, Ulrich Phillips, said, "by racial quality submissive." But looking at the totality of slave behavior, at the resistance of everyday life, from quiet noncooperation in work to running away, the picture becomes different.

In 1710, warning the Virginia Assembly, Governor Alexander Spotswood said:

...freedom wears a cap which can without a tongue, call together all those who long to shake off the fetters of slavery and as such an Insurrection would surely be attended with most dreadful consequences so I think we cannot be too early in providing against it, both by putting our selves in a better posture of defence and by making a law to prevent the consultations of those Negroes.

Indeed, considering the harshness of punishment for running away, that so many blacks did run away must be a sign of a powerful rebelliousness. All through the 1700s, the Virginia slave code read:

Whereas many times slaves run away and lie hid and lurking in swamps, woods, and other obscure places, killing hogs, and commiting other injuries to the inhabitants... if the slave does not immediately return, anyone whatsoever may kill or destroy such slaves by such ways and means as he... shall think fit... If the slave is apprehended... it shall... be lawful for the county court, to order such punishment for the said slave, either by dismembering, or in any other way... as they in their discretion shall think fit, for the reclaiming any such incorrigible slave, and terrifying others from the like practices...

Mullin found newspaper advertisements between 1736 and 1801 for 1,138 men runaways, and 141 women. One consistent reason for running away was to find members of one's family—showing that despite the attempts of the slave system to destroy family ties by not allowing marriages and by separating families, slaves would face death and mutilation to get together.

In Maryland, where slaves were about one-third of the population in 1750, slavery had been written into law since the 1660s, and statutes for controlling rebellious slaves were passed. There were cases where slave women killed their masters, sometimes by poisoning them, sometimes by burning tobacco houses and homes. Punishment ranged from whipping and branding to execution, but the trouble continued. In 1742, seven slaves were put to death for murdering their master.

Fear of slave revolt seems to have been a permanent fact of plantation life. William Byrd, a wealthy Virginia slaveowner, wrote in 1736:

We have already at least 10,000 men of these descendants of Ham, fit to bear arms, and these numbers increase every day, as well by birth as by importation. And in case there should arise a man of desperate fortune, he might with more advantage than Cataline kindle a servile war... and tinge our rivers wide as they are with blood.

It was an intricate and powerful system of control that the slaveowners developed to maintain their labor supply and their way of life, a system both subtle and crude, involving every device that social orders employ for keeping power and wealth where it is. As Kenneth Stampp puts it:

A wise master did not take seriously the belief that Negroes were natural-born slaves. He knew better. He knew that Negroes freshly imported from Africa had to be broken into bondage; that each succeeding generation had to be carefully trained. This was no easy task, for the bondsman rarely submitted willingly. Moreover, he rarely submitted completely.
In most cases there was no end to the need for control—at least not until old age reduced the slave to a condition of helplessness.

The system was psychological and physical at the same time. The slaves were taught discipline, were impressed again and again with the idea of their own inferiority, to "know their place," to see blackness as a sign of subordination, to be awed by the power of the master, to merge their interest with the master's, destroying their own individual needs. To accomplish this there was the discipline of hard labor, the breakup of the slave family, the lulling effects of religion (which sometimes led to "great mischief," as one slaveholder reported), the creation of disunity among slaves by separating them into field slaves and more privileged house slaves, and finally the power of law and the immediate power of the overseer to invoke whipping, burning, mutilation, and death. Dismemberment was provided for in the Virginia Code of 1705. Maryland passed a law in 1723 providing for cutting off the ears of blacks who struck whites, and that for certain serious crimes, slaves should be hanged and the body quartered and exposed.

Still, rebellions took place—not many, but enough to create constant fear among white planters. The first large-scale revolt in the North American colonies took place in New York in 1712. In New York, slaves were 10 percent of the population, the highest proportion in the northern states, where economic conditions usually did not require large numbers of field slaves. About twenty-five blacks and two Indians set fire to a building, then killed nine whites who came on the scene. They were captured by soldiers, put on trial, and twenty-one were executed. The governor's report to England said: "Some were burnt, others were hanged, one broke on the wheel, and one hung alive in chains in the town..." One had been burned over a slow fire for eight to ten hours—all this to serve notice to other slaves.

A letter to London from South Carolina in 1720 reports:

I am now to acquaint you that very lately we have had a very wicked and barbarous plot of the designe of the negroes rising with a designe to destroy all the white people in the country and then to take Charles Town in full body but it pleased God it was discovered and many of them taken prisoners and some burnt and some hang'd and some banish'd.

Around this time there were a number of fires in Boston and New Haven, suspected to be the work of Negro slaves. As a result, one Negro was executed in Boston, and the Boston Council ruled that any slaves who on their own gathered in groups of two or more were to be punished by whipping.

At Stono, South Carolina, in 1739, about twenty slaves rebelled, killed two warehouse guards, stole guns and gunpowder, and headed south, killing people in their way, and burning buildings. They were joined by others, until there were perhaps eighty slaves in all and, according to one account of the time, "they called out Liberty, marched on with Colours displayed, and two Drums beating." The militia found and attacked them. In the ensuing battle perhaps fifty slaves and twenty-five whites were killed before the uprising was crushed.

Herbert Aptheker, who did detailed research on slave resistance in North America for his book American Negro Slave Revolts, found about 250 instances where a minimum of ten slaves joined in a revolt or conspiracy. From time to time, whites were involved in the slave resistance.
As early as 1663, indentured white servants and black slaves in Gloucester County, Virginia, formed a conspiracy to rebel and gain their freedom. The plot was betrayed, and ended with executions. Mullin reports that the newspaper notices of runaways in Virginia often warned "ill-disposed" whites about harboring fugitives. Sometimes slaves and free men ran off together, or cooperated in crimes together. Sometimes, black male slaves ran off and joined white women. From time to time, white ship captains and watermen dealt with runaways, perhaps making the slave a part of the crew.

In New York in 1741, there were ten thousand whites in the city and two thousand black slaves. It had been a hard winter and the poor—slave and free—had suffered greatly. When mysterious fires broke out, blacks and whites were accused of conspiring together. Mass hysteria developed against the accused. After a trial full of lurid accusations by informers, and forced confessions, two white men and two white women were executed, eighteen slaves were hanged, and thirteen slaves were burned alive.

Only one fear was greater than the fear of black rebellion in the new American colonies. That was the fear that discontented whites would join black slaves to overthrow the existing order. In the early years of slavery, especially, before racism as a way of thinking was firmly ingrained, while white indentured servants were often treated as badly as black slaves, there was a possibility of cooperation. As Edmund Morgan sees it:

There are hints that the two despised groups initially saw each other as sharing the same predicament. It was common, for example, for servants and slaves to run away together, steal hogs together, get drunk together. It was not uncommon for them to make love together. In Bacon's Rebellion, one of the last groups to surrender was a mixed band of eighty negroes and twenty English servants.

As Morgan says, masters, "initially at least, perceived slaves in much the same way they had always perceived servants... shiftless, irresponsible, unfaithful, ungrateful, dishonest..." And "if freemen with disappointed hopes should make common cause with slaves of desperate hope, the results might be worse than anything Bacon had done."

And so, measures were taken. About the same time that slave codes, involving discipline and punishment, were passed by the Virginia Assembly, Virginia's ruling class, having proclaimed that all white men were superior to black, went on to offer their social (but white) inferiors a number of benefits previously denied them. In 1705 a law was passed requiring masters to provide white servants whose indenture time was up with ten bushels of corn, thirty shillings, and a gun, while women servants were to get fifteen bushels of corn and forty shillings. Also, the newly freed servants were to get fifty acres of land.

Morgan concludes: "Once the small planter felt less exploited by taxation and began to prosper a little, he became less turbulent, less dangerous, more respectable. He could begin to see his big neighbor not as an extortionist but as a powerful protector of their common interests."

We see now a complex web of historical threads to ensnare blacks for slavery in America: the desperation of starving settlers, the special helplessness of the displaced African, the powerful incentive of profit for slave trader and planter, the temptation of superior status for poor whites, the elaborate controls against escape and rebellion, the legal and social punishment of black and white collaboration.
The point is that the elements of this web are historical, not "natural." This does not mean that they are easily disentangled, dismantled. It means only that there is a possibility for something else, under historical conditions not yet realized. And one of these conditions would be the elimination of that class exploitation which has made poor whites desperate for small gifts of status, and has prevented that unity of black and white necessary for joint rebellion and reconstruction.

Around 1700, the Virginia House of Burgesses declared:

The Christian Servants in this country for the most part consists of the Worser Sort of the people of Europe. And since... such numbers of Irish and other Nations have been brought in of which a great many have been soldiers in the late warrs that according to our present Circumstances we can hardly governe them and if they were fitted with Armes and had the Opertunity of meeting together by Musters we have just reason to fears they may rise upon us.

It was a kind of class consciousness, a class fear. There were things happening in early Virginia, and in the other colonies, to warrant it.
http://www.historyisaweapon.org/defcon1/zinncolorline.html
4.28125
West Nile virus

West Nile virus (WNV) is a mosquito-borne zoonotic arbovirus belonging to the genus Flavivirus in the family Flaviviridae. It is found in temperate and tropical regions of the world. It was first identified in the West Nile subregion in the East African nation of Uganda in 1937. Prior to the mid-1990s, WNV disease occurred only sporadically and was considered a minor risk for humans, until an outbreak in Algeria in 1994, with cases of WNV-caused encephalitis, and the first large outbreak in Romania in 1996, with a high number of cases with neuroinvasive disease. WNV has now spread globally, with the first case in the Western Hemisphere being identified in New York City in 1999; over the next five years, the virus spread across the continental United States, north into Canada, and southward into the Caribbean islands and Latin America. WNV also spread to Europe, beyond the Mediterranean Basin, and a new strain of the virus was identified in Italy in 2012. WNV spreads on an ongoing basis in Africa, Asia, Australia, the Middle East, Europe, Canada and the United States. In 2012 the US experienced one of its worst epidemics, in which 286 people died, with the state of Texas being hard hit by the virus.

The main mode of WNV transmission is via various species of mosquitoes, which are the prime vector, with birds being the most commonly infected animal and serving as the prime reservoir host—especially passerines, which are of the largest order of birds, Passeriformes. WNV has been found in various species of ticks, but current research suggests they are not important vectors of the virus. WNV also infects various mammal species, including humans, and has been identified in reptilian species, including alligators and crocodiles, and also in amphibians. Not all animal species that are susceptible to WNV infection (humans included), nor all bird species, develop sufficient viral levels to transmit the disease to uninfected mosquitoes; such species are thus not considered major factors in WNV transmission.

Approximately 80% of West Nile virus infections in humans are subclinical, causing no symptoms. In the cases where symptoms do occur—termed West Nile fever in cases without neurological disease—the time from infection to the appearance of symptoms (incubation period) is typically between 2 and 15 days. Symptoms may include fever, headaches, fatigue, muscle pain or aches (myalgias), malaise, nausea, anorexia, vomiting, and rash. Less than 1% of the cases are severe and result in neurological disease when the central nervous system is affected. People of advanced age, the very young, or those with immunosuppression, either medically induced, such as those taking immunosuppressive drugs, or due to a pre-existing medical condition such as HIV infection, are most susceptible. The specific neurological diseases that may occur are West Nile encephalitis, which causes inflammation of the brain; West Nile meningitis, which causes inflammation of the meninges, the protective membranes that cover the brain and spinal cord; West Nile meningoencephalitis, which causes inflammation of the brain and also the meninges surrounding it; and West Nile poliomyelitis, a spinal cord inflammation which results in a syndrome similar to polio and may cause acute flaccid paralysis.

Currently, no vaccine against WNV infection is available.
The best method to reduce the rates of WNV infection is mosquito control on the part of municipalities, businesses and individual citizens, reducing breeding populations of mosquitoes in public, commercial and private areas via various means, including eliminating standing pools of water where mosquitoes breed, such as in old tires, buckets, and unused swimming pools. On an individual basis, the use of personal protective measures to avoid being bitten by an infected mosquito (using mosquito repellent and window screens, avoiding areas where mosquitoes are more prone to congregate, such as near marshes and areas with heavy vegetation, and being more vigilant from dusk to dawn, when mosquitoes are most active) offers the best defense. In the event of being bitten by an infected mosquito, familiarity with the symptoms of WNV on the part of laypersons, physicians and allied health professionals affords the best chance of receiving timely medical treatment, which may aid in reducing associated possible complications and also in providing appropriate palliative care.

Signs and symptoms

The incubation period for WNV—the amount of time from infection to symptom onset—is typically from 2 to 15 days. Headache can be a prominent symptom of WNV fever, meningitis, encephalitis, and meningoencephalitis, and it may or may not be present in poliomyelitis-like syndrome. Thus, headache is not a useful indicator of neuroinvasive disease.

- West Nile fever (WNF), which occurs in 20 percent of cases, is a febrile syndrome that causes flu-like symptoms. Most characterizations of WNF generally describe it as a mild, acute syndrome lasting 3 to 6 days after symptom onset. Systematic follow-up studies of patients with WNF have not been done, so this information is largely anecdotal. Besides the high fever, headache, chills, excessive sweating, weakness, fatigue, swollen lymph nodes, drowsiness, pain in the joints and other flu-like symptoms, gastrointestinal symptoms that may occur include nausea, vomiting, loss of appetite, and diarrhea. Fewer than one-third of patients develop a rash.
- West Nile neuroinvasive disease (WNND), which occurs in less than 1 percent of cases, is when the virus infects the central nervous system, resulting in meningitis, encephalitis, meningoencephalitis or a poliomyelitis-like syndrome. Many patients with WNND have normal neuroimaging studies, although abnormalities may be present in various cerebral areas including the basal ganglia, thalamus, cerebellum, and brainstem.
- West Nile virus encephalitis (WNE) is the most common neuroinvasive manifestation of WNND. WNE presents with similar symptoms to other viral encephalitis, with fever, headaches, and altered mental status. A prominent finding in WNE is muscular weakness (in 30 to 50 percent of patients with encephalitis), often with lower motor neuron symptoms, flaccid paralysis, and hyporeflexia with no sensory abnormalities.
- West Nile meningitis (WNM) usually involves fever, headache, and stiff neck. Pleocytosis, an increase of white blood cells in cerebrospinal fluid, is also present. Changes in consciousness are not usually seen and are mild when present.
- West Nile meningoencephalitis is inflammation of both the brain (encephalitis) and meninges (meningitis).
- West Nile poliomyelitis (WNP), an acute flaccid paralysis syndrome associated with WNV infection, is less common than WNM or WNE. This syndrome is generally characterized by the acute onset of asymmetric limb weakness or paralysis in the absence of sensory loss.
Pain sometimes precedes the paralysis. The paralysis can occur in the absence of fever, headache, or other common symptoms associated with WNV infection. Involvement of respiratory muscles, leading to acute respiratory failure, can sometimes occur.

- West Nile reversible paralysis: like WNP, the weakness or paralysis is asymmetric. Reported cases have been noted to have an initial preservation of deep tendon reflexes, which is not expected for a pure anterior horn involvement. Disconnect of upper motor neuron influences on the anterior horn cells, possibly by myelitis or glutamate excitotoxicity, has been suggested as a mechanism. The prognosis for recovery is excellent.
- Nonneurologic complications of WNV infection that may rarely occur include fulminant hepatitis, pancreatitis, myocarditis, rhabdomyolysis, orchitis, nephritis, optic neuritis, cardiac dysrhythmias and hemorrhagic fever with coagulopathy. Chorioretinitis may also be more common than previously thought.
- Cutaneous manifestations, specifically rashes, are not uncommon in WNV-infected patients; however, there is a paucity of detailed descriptions in case reports and few clinical images are widely available. Punctate erythematous, macular, and papular eruptions, most pronounced on the extremities, have been observed in WNV cases, and in some cases histopathologic findings have shown a sparse superficial perivascular lymphocytic infiltrate, a manifestation commonly seen in viral exanthems. A literature review provides support that this punctate rash is a common cutaneous presentation of WNV infection.

WNV is one of the Japanese encephalitis antigenic serocomplex of viruses. Image reconstructions and cryoelectron microscopy reveal a 45–50 nm virion covered with a relatively smooth protein surface. This structure is similar to that of the dengue fever virus; both belong to the genus Flavivirus within the family Flaviviridae. The genetic material of WNV is a positive-sense, single strand of RNA, which is between 11,000 and 12,000 nucleotides long; these genes encode seven nonstructural proteins and three structural proteins. The RNA strand is held within a nucleocapsid formed from 12-kDa protein blocks; the capsid is contained within a host-derived membrane altered by two viral glycoproteins.

Studies of phylogenetic lineages have determined that WNV emerged as a distinct virus around 1000 years ago. This initial virus developed into two distinct lineages; lineage 1, with its multiple profiles, is the source of the epidemic transmission in Africa and throughout the world. Lineage 2 was considered an African zoonosis. However, in 2008, lineage 2, previously only seen in horses in sub-Saharan Africa and Madagascar, began to appear in horses in Europe, where the first known outbreak affected 18 animals in Hungary in 2008. Lineage 1 West Nile virus was detected in South Africa in 2010 in a mare and her aborted fetus; previously, only lineage 2 West Nile virus had been detected in horses and humans in South Africa. A 2007 fatal case in a killer whale in Texas broadened the known host range of West Nile virus to include cetaceans.

The United States virus was very closely related to a lineage 1 strain found in Israel in 1998. Since the first North American cases in 1999, the virus has been reported throughout the United States, Canada, Mexico, the Caribbean, and Central America. There have been human cases and equine cases, and many birds are infected.
The Barbary macaque, Macaca sylvanus, was the first nonhuman primate to contract WNV. Both the United States and Israeli strains are marked by high mortality rates in infected avian populations; the presence of dead birds—especially Corvidae—can be an early indicator of the arrival of the virus.
West Nile virus (WNV) is transmitted through female mosquitoes, which are the prime vectors of the virus. Only females feed on blood, and different species take blood meals from different types of vertebrate hosts. The important mosquito vectors vary according to geographical area; in the United States, Culex pipiens (Eastern United States, and urban and residential areas of the United States north of 36–39°N), Culex tarsalis (Midwest and West), and Culex quinquefasciatus (Southeast) are the main vector species. The mosquito species that are most frequently infected with WNV feed primarily on birds. Mosquitoes show further selectivity, exhibiting preference for different species of birds. In the United States, WNV mosquito vectors feed on members of the corvid and thrush families more often than would be expected from their abundance. Among the preferred species within these families are the American crow, a corvid, and the American robin (Turdus migratorius), a thrush. Some species of birds develop sufficient viral levels (>~10^4.2 PFU/ml) after being infected to transmit the infection to biting mosquitoes, which in turn go on to infect other birds. In birds that die from WNV, death usually occurs after 4 to 6 days. In mammals and several species of birds, the virus does not multiply as readily (i.e., does not develop high viremia during infection), and mosquitoes biting these infected hosts are not believed to ingest sufficient virus to become infected, making them so-called dead-end hosts. As a result of the differential infectiousness of hosts, the feeding patterns of mosquitoes play an important role in WNV transmission, and they are partly genetically controlled, even within a species.
Direct human-to-human transmission initially was believed to be caused only by occupational exposure, such as in a laboratory setting, or conjunctive exposure to infected blood. The US outbreak identified additional transmission methods through blood transfusion, organ transplant, intrauterine exposure, and breast feeding. Since 2003, blood banks in the United States routinely screen for the virus among their donors. As a precautionary measure, the UK's National Blood Service initially ran a test for this disease in donors who donated within 28 days of a visit to the United States, Canada, or the northeastern provinces of Italy, and the Scottish National Blood Transfusion Service asks prospective donors to wait 28 days after returning from North America or the northeastern provinces of Italy before donating.
Recently, the potential for mosquito saliva to affect the course of WNV disease was demonstrated. Mosquitoes inoculate their saliva into the skin while obtaining blood. Mosquito saliva is a pharmacological cocktail of secreted molecules, principally proteins, that can affect vascular constriction, blood coagulation, platelet aggregation, inflammation, and immunity. It clearly alters the immune response in a manner that may be advantageous to a virus. Studies have shown it can specifically modulate the immune response during early virus infection, and mosquito feeding can exacerbate WNV infection, leading to higher viremia and more severe forms of disease.
Vertical transmission, the transmission of a viral or bacterial disease from the female of the species to her offspring, has been observed in various West Nile virus studies of different species of mosquitoes, both in the laboratory and in nature. Mosquito progeny infected vertically in autumn may potentially serve as a mechanism for WNV to overwinter and initiate enzootic horizontal transmission the following spring, although this likely plays little role in transmission in the summer and fall.
Risk factors independently associated with developing a clinical infection with WNV include a suppressed immune system and a patient history of organ transplantation. For neuroinvasive disease, the additional risk factors include older age (over 50), male sex, hypertension, and diabetes mellitus.
A genetic factor also appears to increase susceptibility to West Nile disease. A mutation of the gene CCR5 gives some protection against HIV but leads to more serious complications of WNV infection. Carriers of two mutated copies of CCR5 made up 4.0 to 4.5% of a sample of West Nile disease sufferers, while the incidence of the gene in the general population is only 1.0%.
Preliminary diagnosis is often based on the patient's clinical symptoms, places and dates of travel (if the patient is from a nonendemic country or area), activities, and epidemiologic history of the location where infection occurred. A recent history of mosquito bites and an acute febrile illness associated with neurologic signs and symptoms should raise clinical suspicion of WNV.
Diagnosis of West Nile virus infections is generally accomplished by serologic testing of blood serum or cerebrospinal fluid (CSF), which is obtained via a lumbar puncture. Typical findings of WNV infection include lymphocytic pleocytosis, elevated protein level, glucose and lactic acid levels within reference ranges, and no erythrocytes.
Definitive diagnosis of WNV is obtained through detection of virus-specific IgM and neutralizing antibodies. Cases of West Nile virus meningitis and encephalitis that have been serologically confirmed produce similar degrees of CSF pleocytosis and are often associated with substantial CSF neutrophilia. Specimens collected within eight days following onset of illness may not test positive for West Nile IgM, and testing should be repeated. A positive test for West Nile IgG in the absence of a positive West Nile IgM is indicative of a previous flavivirus infection and is not by itself evidence of an acute West Nile virus infection.
In cases of suspected West Nile virus infection, sera should be collected during both the acute and convalescent phases of the illness. Convalescent specimens should be collected 2–3 weeks after acute specimens.
It is common in serologic testing for cross-reactions to occur among flaviviruses such as dengue virus (DENV) and tick-borne encephalitis virus; this necessitates caution when evaluating serologic results of flaviviral infections.
Four FDA-cleared WNV IgM ELISA kits are commercially available from different manufacturers in the U.S.; each of these kits is indicated for use on serum to aid in the presumptive laboratory diagnosis of WNV infection in patients with clinical symptoms of meningitis or encephalitis. Positive results obtained with these kits should be confirmed by additional testing at a state health department laboratory or CDC.
In fatal cases, nucleic acid amplification, histopathology with immunohistochemistry, and virus culture of autopsy tissues can also be useful.
Only a few state laboratories or other specialized laboratories, including those at CDC, are capable of doing this specialized testing.
A number of diseases may present with symptoms similar to those caused by a clinical West Nile virus infection. Those causing neuroinvasive disease symptoms include enterovirus infection and bacterial meningitis. Accounting for differential diagnoses is a crucial step in the definitive diagnosis of WNV infection. Consideration of a differential diagnosis is required when a patient presents with unexplained febrile illness, extreme headache, encephalitis or meningitis. Diagnostic and serologic laboratory testing, using polymerase chain reaction (PCR) testing and viral culture of CSF to identify the specific pathogen causing the symptoms, is the only currently available means of differentiating between causes of encephalitis and meningitis.
Personal protective measures can be taken to greatly reduce the risk of being bitten by an infected mosquito:
- Using insect repellent on exposed skin to repel mosquitoes. EPA-registered repellents include products containing DEET (N,N-diethyl-meta-toluamide) and picaridin (KBR 3023). DEET concentrations of 30% to 50% are effective for several hours. Picaridin, available at 7% and 15% concentrations, needs more frequent application. DEET formulations as high as 50% are recommended for both adults and children over two months of age. Protect infants less than two months of age by using a carrier draped with mosquito netting with an elastic edge for a tight fit.
- When using sunscreen, applying sunscreen first and then repellent. Repellent should be washed off at the end of the day before going to bed.
- Wearing long-sleeve shirts, which should be tucked in, long pants, socks, and hats to cover exposed skin. Insect repellents should be applied over protective clothing for greater protection. Do not apply insect repellents underneath clothing.
- Applying permethrin-containing (e.g., Permanone) or other insect repellents to clothing, shoes, tents, mosquito nets, and other gear for greater protection. Permethrin is not labeled for use directly on skin. Most repellent is generally removed from clothing and gear by a single washing, but permethrin-treated clothing is effective for up to five washings.
- Being aware that most mosquitoes that transmit disease are most active during twilight periods (dawn and dusk or in the evening). A notable exception is the Asian tiger mosquito, which is a daytime feeder and is more apt to be found in, or on the periphery of, shaded areas with heavy vegetation. They are now widespread in the United States, and in Florida they have been found in all 67 counties.
- Staying in air-conditioned or well-screened housing, and/or sleeping under an insecticide-treated bed net. Bed nets should be tucked under mattresses and can be sprayed with a repellent if not already treated with an insecticide.
Monitoring and control
West Nile virus can be sampled from the environment by pooling trapped mosquitoes from ovitraps, carbon dioxide-baited light traps, and gravid traps, by testing blood samples drawn from wild birds, dogs, and sentinel monkeys, and by testing the brains of dead birds found by various animal control agencies and the public.
Testing of the mosquito samples requires the use of reverse-transcriptase PCR (RT-PCR) to directly amplify and show the presence of virus in the submitted samples.
When using the blood sera of wild birds and sentinel chickens, samples must be tested for the presence of WNV antibodies by use of immunohistochemistry (IHC) or enzyme-linked immunosorbent assay (ELISA). Dead birds, after necropsy, or their oral swab samples collected on specific RNA-preserving filter paper cards, can have their virus presence tested by either RT-PCR or IHC, where virus shows up as brown-stained tissue because of a substrate-enzyme reaction.
West Nile control is achieved through mosquito control: eliminating mosquito breeding sites such as abandoned pools, applying larvicide to active breeding areas, and targeting the adult population via lethal ovitraps and aerial spraying of pesticides.
Environmentalists have condemned attempts to control the transmitting mosquitoes by spraying pesticide, saying the detrimental health effects of spraying outweigh the relatively few lives that may be saved, and that more environmentally friendly ways of controlling mosquitoes are available. They also question the effectiveness of insecticide spraying, as they believe mosquitoes that are resting or flying above the level of spraying will not be killed; the most common vector in the northeastern United States, Culex pipiens, is a canopy feeder.
Eggs of permanent-water mosquitoes can hatch, and the larvae survive, in only a few ounces of water, less than half the amount that may collect in a discarded coffee cup. Floodwater species lay their eggs on wet soil or other moist surfaces. Hatch time is variable for both types; under favorable circumstances, i.e., warm weather, the eggs of some species may hatch in as few as 1–3 days after being laid. Used tires often hold stagnant water and are a breeding ground for many species of mosquitoes. Some species, such as the Asian tiger mosquito, prefer manmade containers, such as tires, in which to lay their eggs. The rapid spread of this aggressive daytime-feeding species beyond its native range has been attributed to the used tire trade.
No specific treatment is available for WNV infection. In severe cases, treatment consists of supportive care that often involves hospitalization, intravenous fluids, respiratory support, and prevention of secondary infections.
While the general prognosis is favorable, current studies of recent outbreaks indicate that West Nile fever can often be more severe than previously recognized, and that it may take as long as 60–90 days to recover. People with milder WNF are just as likely as those with more severe manifestations of neuroinvasive disease to experience adverse outcomes, including multiple long-term (more than one year) somatic complaints such as tremor and dysfunction in motor skills and executive functions. Recovery is marked by a long convalescence with fatigue. One study found that neuroinvasive WNV infection was associated with an increased risk of subsequent kidney disease.
WNV was first isolated from a feverish 37-year-old woman at Omogo in the West Nile District of Uganda in 1937, during research on yellow fever virus. A series of serosurveys in 1939 in central Africa found anti-WNV positive results ranging from 1.4% (Congo) to 46.4% (White Nile region, Sudan). The virus was subsequently identified in Egypt (1942) and India (1953); a 1950 serosurvey in Egypt found that 90% of those over 40 years of age had WNV antibodies. The ecology was characterized in 1953 with studies in Egypt and Israel.
The virus became recognized as a cause of severe human meningoencephalitis in elderly patients during an outbreak in Israel in 1957. The disease was first noted in horses in Egypt and France in the early 1960s and found to be widespread in southern Europe, southwest Asia and Australia.
The first appearance of WNV in the Western Hemisphere was in 1999, with encephalitis reported in humans, dogs, cats, and horses, and the subsequent spread in the United States may be an important milestone in the evolving history of this virus. The American outbreak began in College Point, Queens, in New York City and later spread to the neighboring states of New Jersey and Connecticut. The virus is believed to have entered in an infected bird or mosquito, although there is no clear evidence. West Nile virus is now endemic in Africa, Europe, the Middle East, west and central Asia, Oceania (subtype Kunjin), and most recently, North America, and is spreading into Central and South America.
Recent outbreaks of West Nile virus encephalitis in humans have occurred in Algeria (1994), Romania (1996 to 1997), the Czech Republic (1997), Congo (1998), Russia (1999), the United States (1999 to 2009), Canada (1999–2007), Israel (2000) and Greece (2010).
Outdoor workers (including biological fieldworkers, construction workers, farmers, landscapers, and painters), healthcare personnel, and laboratory personnel who perform necropsies on animals are at risk of contracting WNV.
A vaccine for horses (ATCvet code: QI05) based on killed viruses exists; some zoos have given this vaccine to their birds, although its effectiveness is unknown. Dogs and cats show few if any signs of infection. There have been no known cases of direct canine-human or feline-human transmission; although these pets can become infected, it is unlikely they are, in turn, capable of infecting native mosquitoes and thus continuing the disease cycle.
AMD3100, which had been proposed as an antiretroviral drug for HIV, has shown promise against West Nile encephalitis. Morpholino antisense oligos conjugated to cell-penetrating peptides have been shown to partially protect mice from WNV disease. There have also been attempts to treat infections using ribavirin, intravenous immunoglobulin, or alpha interferon. GenoMed, a U.S. biotech company, has found that blocking angiotensin II can treat the "cytokine storm" of West Nile virus encephalitis as well as that of other viruses.
https://en.wikipedia.org/wiki/West_Nile_fever
4.40625
The Earth-Moon System
The moon is the earth's nearest neighbor in space. In addition to its proximity, the moon is also exceptional in that it is quite massive compared to the earth itself, the ratio of their masses being far larger than the similar ratios of other natural satellites to the planets they orbit (though that of Charon and the dwarf planet Pluto exceeds that of the moon and earth). For this reason, the earth-moon system is sometimes considered a double planet. It is the center of the earth-moon system, rather than the center of the earth itself, that describes an elliptical orbit around the sun in accordance with Kepler's laws. It is also more accurate to say that the earth and moon together revolve about their common center of mass, rather than saying that the moon revolves about the earth. This common center of mass lies beneath the earth's surface, about 3,000 mi (4,800 km) from the earth's center.
The Lunar Month
The moon was studied, and its apparent motions through the sky recorded, beginning in ancient times. The Babylonians and the Maya, for example, had remarkably precise calendars for eclipses and other astronomical events. Astronomers now recognize different kinds of months, such as the synodic month of 29 days, 12 hr, 44 min, the period of the lunar phases, and the sidereal month of 27 days, 7 hr, 43 min, the period of lunar revolution around the earth. As seen from above the earth's north pole, the moon moves in a counterclockwise direction with an average orbital speed of about 0.6 mi/sec (1 km/sec).
Because the lunar orbit is elliptical, the distance between the earth and the moon varies periodically as the moon revolves in its orbit. At perigee, when the moon is nearest the earth, the distance is about 227,000 mi (365,000 km); at apogee, when the moon is farthest from the earth, the distance is about 254,000 mi (409,000 km). The average distance is about 240,000 mi (385,000 km), or about 60 times the radius of the earth itself. The plane of the moon's orbit is tilted, or inclined, at an angle of about 5° with respect to the ecliptic. The line dividing the bright and dark portions of the moon is called the terminator.
Due to the earth's rotation, the moon appears to rise in the east and set in the west, like all other heavenly bodies; however, the moon's own orbital motion carries it eastward against the stars. This apparent motion is much more rapid than the similar motion of the sun. Hence the moon appears to overtake the sun and rises an average of 50 minutes later each night. There are many variations in this retardation according to latitude and time of year. In much of the Northern Hemisphere, at the autumnal equinox, the harvest moon occurs; moonrise and sunset nearly coincide for several days around the full moon. The next succeeding full moon, called the hunter's moon, also shows this coincidence.
Although an optical illusion causes the moon to appear larger when it is near the horizon than when it is near the zenith, the true angular size of the moon's diameter is about 1/2°, which also happens to be the sun's apparent diameter. This coincidence makes possible total eclipses of the sun, in which the solar disk is exactly covered by the disk of the moon. An eclipse of the moon occurs when the earth's shadow falls onto the moon, temporarily blocking the sunlight that causes the moon to shine. Eclipses can occur only when the moon, sun, and earth are arranged along a straight line—lunar eclipses at full moon and solar eclipses at new moon.
The gravitational influence of the moon is chiefly responsible for the tides of the earth's oceans, the twice-daily rise and fall of sea level. The ocean tides are caused by the flow of water toward the two points on the earth's surface that are instantaneously directly beneath the moon and directly opposite the moon. Because of frictional drag, the earth's rotation carries the two tidal bulges slightly forward of the line connecting earth and moon. The resulting torque slows the earth's rotation while increasing the moon's orbital velocity. As a result, the day is getting longer and the moon is moving farther away from the earth. The moon also raises much smaller tides in the solid crust of the earth, deforming its shape. The tidal influence of the earth on the moon was responsible for making the moon's periods of rotation and revolution equal, so that the same side of the moon always faces earth.
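Several of the figures quoted in this article can be cross-checked with a few lines of arithmetic. The sketch below is illustrative only; the mass ratio, month lengths, and year length are standard reference values assumed here, not taken from the article itself.

```python
# Quick numerical checks of figures quoted in the article above.
# The constants below are standard reference values (assumed here,
# not taken from the article itself).

MASS_RATIO = 0.0123          # moon mass / earth mass
MEAN_DISTANCE_KM = 385_000   # mean earth-moon distance, km
EARTH_RADIUS_KM = 6_371      # mean earth radius, km

# Barycenter: r = d * m_moon / (m_earth + m_moon), measured from the
# earth's center. Should land near the ~4,800 km quoted above.
barycenter_km = MEAN_DISTANCE_KM * MASS_RATIO / (1 + MASS_RATIO)
print(f"Barycenter: {barycenter_km:.0f} km from earth's center")
print(f"Beneath the earth's surface: {barycenter_km < EARTH_RADIUS_KM}")

# Moonrise retardation: the moon falls one full circuit (24 h) behind
# the sun per synodic month, giving the ~50 minutes per night above.
SYNODIC_DAYS = 29.53
print(f"Average moonrise delay: {24 * 60 / SYNODIC_DAYS:.0f} min/night")

# Consistency of the two month lengths: 1/synodic = 1/sidereal - 1/year.
SIDEREAL_DAYS = 27.32
YEAR_DAYS = 365.25
synodic_check = 1 / (1 / SIDEREAL_DAYS - 1 / YEAR_DAYS)
print(f"Synodic month implied by sidereal month: {synodic_check:.2f} days")
```

Running this gives a barycenter about 4,700 km from the earth's center and a moonrise delay of about 49 minutes, consistent with the rounded figures in the text.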
http://www.infoplease.com/encyclopedia/science/moon-the-earth-moon-system.html
4.25
Activity 1: What Is Privilege?
Activity time: 5 minutes
Materials for Activity
- Newsprint, markers and tape
Preparation for Activity
- Post two sheets of newsprint. Label one "Privileges" and the other "Skills."
Description of Activity
Gather the children. Tell them, in your own words: Our Unitarian Universalist faith challenges us to recognize our privileges and to share them with others. We are also called to discover our gifts and skills, and then share them, too, in order to live a full life while contributing to our society.
Invite the group to explore the differences between privileges and skills. Say something like: The talents, education, or access to information, resources, money and/or power that we have by chance of birth or geography are called "privileges." These are different from the skills and talents that we develop through practice. For example, having access to a piano, a piano teacher, and the time to take lessons are each privileges; being able to play a classical sonata comes from regular practice, and that is a skill you learn.
Now we are going to list what we understand to be our privileges and our skills.
Invite volunteers to contribute to both lists. Accept all suggestions. If an item is suggested both as a privilege AND a skill, just write it down. If necessary, suggest some of these ideas:
- Being picky about food (people who are hungry aren't picky)
- Having a bed to sleep on at night
- Having a warm home in the winter
- Having a stable home where people do not act violent
- Going to school
- Not living in a war zone
- Extracurricular activities and lessons that cost money
- Access to the Internet
- Toys (electronic games, especially)
- Learning the same language from birth that is used in your school
- Earning good grades
- Learning a new sport and staying on the team
- Playing an instrument well
- Being a neat writer
- Building a large vocabulary
When the list looks full, engage the group with some of these questions:
- Does anything on this list surprise you?
- Is there something you did not think is a privilege that someone else believes is?
- Have you ever thought about being privileged?
- Do you think being privileged is the same as being "spoiled"? What is the difference?
Keep the newsprint posted for use in Activity 2, Window/Mirror Panel - My Privilege.
http://www.uua.org/re/tapestry/children/windows/session11/143758.shtml
4.21875
Radioactivity is the emission of high-energy particles through the natural phenomenon of the decay of unstable isotopes of chemical elements into more stable forms, which are called daughter products. This type of emission is generally called nuclear radiation. The most common types of nuclear radiation are alpha and beta radiation, and the processes for each are respectively alpha decay and beta decay. There can also be gamma radiation associated with a nuclear decay. Alpha particles are helium nuclei (two protons and two neutrons); beta particles are high-energy electrons; gamma rays are high-energy photons. Alpha particles can normally be stopped by a sheet of paper or healthy human skin. Beta particles and gamma rays can penetrate one's body to cause great harm. Gamma radiation is also a form of electromagnetic radiation, like X-rays or visible light. (Contemporary jargon refers to alpha and beta particles and gamma rays, though quantum mechanics makes the two descriptions actually the same.) All forms of radioactivity follow the fundamental rules of mass and energy balance.
- Main article: Alpha decay
As stated above, an alpha particle is the nucleus of a helium atom, i.e., two protons and two neutrons. This arrangement means the alpha particle has a charge of +2 and an atomic mass of 4, the symbol for which is ⁴₂He. For example, the most common isotope of Uranium is Uranium-238. The mass number, 238, is the sum of the number of protons and the number of neutrons. Since all Uranium atoms have 92 protons, there are 146 neutrons. The initial step in Uranium-238 decaying (eventually) into Lead-206 is an alpha decay:
²³⁸₉₂U → ²³⁴₉₀Th + ⁴₂He
Note that the proton count (92) is conserved, as are the mass numbers (238). So the neutron count (146) is also balanced. No particles are created or destroyed. This decay releases about 4.3 MeV of kinetic energy, in the form of the motion of the alpha particle. In chemical terms we would say that this decay is an exothermic reaction. The energy comes from the potential nuclear energy of the Uranium atom—the Uranium atom has a higher potential energy (by 4.3 MeV) than the sum of the potential energies of the Thorium and Helium atoms. A careful accounting of the atomic masses (remember, atomic mass is only approximately equal to mass number) will show that mass was lost, in accordance with E=mc².
- Main article: Beta decay
As stated above, a beta particle is an electron. This arrangement means the beta particle has a charge of -1 and an atomic mass of 0, the symbol for which is ⁰₋₁e. Strontium-90 undergoes beta decay to form Yttrium-90 in the following decay reaction:
⁹⁰₃₈Sr → ⁹⁰₃₉Y + ⁰₋₁e
As with the alpha decay, notice that the particle count is again conserved. The energy released by this decay is 0.55 MeV. An interesting note about Strontium-90 is that it is a synthetic isotope (meaning it is not found occurring naturally, but must be manufactured) that is a by-product of nuclear weapons explosions. In the 1950s and 1960s, it was common to test nuclear weapons by exploding them in the very high upper atmosphere. Unfortunately, this resulted in a large amount of Strontium-90 particles that eventually settled back to earth, contaminating grasslands. The grasses were eaten by cattle, and the cattle were eaten by humans. Since Strontium is chemically very similar to Calcium (it is in the same column of the Periodic Table), any Strontium entering the body will tend to replace the Calcium in our bones.
In the case of Strontium-90, this meant that radioactive Strontium was now chemically bonded into our bones, and the 1970s saw a rise in bone cancer as a result. Fortunately, this type of testing was halted, and the half-life of Strontium-90 is a relatively short 28 years, meaning that at this point most of the synthetic, radioactive Strontium-90 produced by weapons testing has decayed out of the environment.
Energy of Radioactive Decay
The energies associated with radioactive decay, at least on the single-atom level, are very, very small. The energies are so small, in fact, that we use a special unit called the electron volt (eV) rather than the traditional units of joules, Btu, or foot-pounds. Just for a point of comparison, it takes about a minute to boil a cup of water in a 1000 Watt microwave. One Watt is equivalent to one Joule per second, so it takes 60,000 Joules of energy to boil a cup of water. But a single Joule of energy is the same as 6.24×10¹⁸ eV! So even when a nuclear decay has an associated energy in the thousands (keV) or millions (MeV) of electron volts, we are still many orders of magnitude away from having enough energy to boil a cup of water. The danger is not from a single atomic decay, but from many trillions of atomic decays occurring in rapid succession, in which case we do reach energies capable of producing serious burns on the skin.
In the early decades of the 20th century, when radioactivity was a newly discovered phenomenon, there was much confusion about it, leading to some strange hypotheses. No less an intellectual than H. G. Wells wrote strange things about it. In his 1909 novel Tono-Bungay, the narrator muses:
- To my mind radio-activity is a real disease of matter. Moreover, it is a contagious disease. It spreads. You bring those debased and crumbling atoms near others and those too presently catch the trick of swinging themselves out of coherent existence. It is in matter exactly what the decay of our old culture is in society, a loss of traditions and distinctions and assured reactions. ...I am haunted by a grotesque fancy of the ultimate eating away and dry-rotting and dispersal of all our world. So that while man still struggles and dreams his very substance will change and crumble from beneath him. I mention this here as a queer persistent fancy. Suppose, indeed, that is to be the end of our planet; no splendid climax and finale, no towering accumulation of achievements, but just—atomic decay!
This is not to say that radioactivity isn't dangerous, or that radioactive contamination isn't a tricky problem.
Notes and references
- The mass number is the (integer) number of nucleons. It is very close to the atomic mass of the isotope measured in amu, because the masses of protons and neutrons are very close, and the deviations (due to E=mc²) are small. So the terms are often used nearly interchangeably.
- Wells, H. G. (1909) Tono-Bungay, online Project Gutenberg text; search for text string "real disease"
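The energy comparison above can be made concrete with a short calculation. In this minimal Python sketch, the 4.3 MeV alpha-decay energy, the 60,000 J cup of water, and the 28-year half-life of Strontium-90 are the figures from the text; the eV-to-joule conversion factor is a standard physical constant.

```python
# Scale of radioactive-decay energies, using figures from the text:
# a 4.3 MeV alpha decay, 60,000 J to boil a cup of water, and the
# 28-year half-life of Sr-90. EV_TO_JOULE is a standard constant.

EV_TO_JOULE = 1.602e-19  # joules per electron volt

alpha_decay_joules = 4.3e6 * EV_TO_JOULE
print(f"One U-238 alpha decay: {alpha_decay_joules:.2e} J")

# Number of such decays needed to supply the cup-of-water energy;
# this comes out near 9e16, i.e. tens of quadrillions of decays.
decays_needed = 60_000 / alpha_decay_joules
print(f"Decays to boil a cup of water: {decays_needed:.1e}")

# Fraction of weapons-test Sr-90 remaining after t years:
# remaining = 0.5 ** (t / half_life)
HALF_LIFE_YEARS = 28
for t_years in (28, 56, 84):
    remaining = 0.5 ** (t_years / HALF_LIFE_YEARS)
    print(f"Sr-90 remaining after {t_years} years: {remaining:.1%}")
```

The half-life loop also supports the claim above: two to three half-lives after the atmospheric test era, only a quarter to an eighth of the original Strontium-90 remains.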
http://www.conservapedia.com/Radioactive_decay
4.03125
HA ↔ H+ + A-
An acid, a proton donor, donates protons in water, forming hydronium ions (protonated water, H3O+). A base, a proton acceptor, removes protons from water, forming hydroxide ions (deprotonated water, OH-).
Curved arrows are used to show the mechanism by which a proton is transferred from hydrochloric acid to the base water. The arrows demonstrate the movement of electrons, but the key feature of the Bronsted-Lowry reaction is the transfer of protons. A lone pair from the base creates a new bond with an acidic proton, and the electron pair originally linking the proton to the remainder of the acid shifts to become a lone pair on the departing conjugate base.
Water is a neutral compound (the number of hydronium ions equals the number of hydroxide ions via self-dissociation). The equilibrium constant Kw (self-ionization constant) describes this process at 25°C:
H2O + H2O ⇌ H3O+ + OH-
Kw = [H3O+][OH-] = 10^-14 mol^2 L^-2
pH is the negative logarithm of the value of [H3O+]. The concentration of H3O+ in pure water is 10^-7 mol L^-1:
pH = -log [H3O+]
pH in pure water = 7; pH > 7 is basic; pH < 7 is acidic.
The acidity of a general acid (HA) is conveyed by a general equation:
HA + H2O ⇌ H3O+ + A-
K = [H3O+][A-] / ([HA][H2O])
Acidity constant: Ka = K[H2O] = [H3O+][A-] / [HA] (mol L^-1)
Like [H3O+], Ka can be put on a logarithmic scale: pKa = -log Ka
pKa corresponds to the pH of an acid at 50% dissociation. If pKa < 1, the acid is strong; if pKa > 4, the acid is weak. pKa values for common acids are tabulated in standard references.
A-, derived from acid HA, is referred to as the conjugate base; HA, formed when base A- accepts a proton, is referred to as the conjugate acid.
acid + base ⇌ conjugate base + conjugate acid
Conjugate acid and base strengths are inversely related: a strong acid has a weak conjugate base, and a strong base has a weak conjugate acid.
Ex. HCl (strong acid) ↔ H+ + Cl- (weak conjugate base); CH3OH (weak acid) ↔ H+ + CH3O- (strong conjugate base)
The relative strength of an acid (HA) and the weakness of its conjugate base can be estimated using three structural properties. In summary: basicity of A- decreases to the right and down the periodic table, while acidity of HA increases to the right and down the periodic table.
Several molecules have the ability to act as acids or bases under differing conditions; thus they are amphoteric (e.g., water, nitric acid, acetic acid).
H3O+ ← (water accepts a proton, acting as a base) H2O (water donates a proton, acting as an acid) → OH-
A Lewis acid is an electron-pair acceptor; a Lewis base is an electron-pair donor. A Lewis base shares its lone-pair electrons with a Lewis acid to form a new covalent bond; this can be expressed by an arrow drawn in the direction of electron movement (base to acid).
Electrophiles and nucleophiles interact through the movement of an electron pair, in processes that exhibit very similar characteristics to acid-base reactions and are described using the same electron-pushing arrows.
Electrophile ("electron loving"): an electron-deficient atom, ion or molecule that has an affinity for an electron pair and will bond to a base or nucleophile. (All Lewis acids are electrophiles.)
Nucleophile ("nucleus loving"): an atom, ion or molecule that has an electron pair that may be donated in bonding to an electrophile or Lewis acid. (All nucleophiles are Lewis bases.)
(Diagram: the flow of electrons shown with electron-pushing arrows.)
Haloalkanes (compounds with carbon-halogen bonds) undergo general nucleophilic substitution reactions.
Despite differing halogens and arrangements of substituents, all combinations behave similarly, allowing us to conclude that it is the presence of the carbon-halogen bond itself that controls the behavior of the haloalkane. The C-X bond is the functional group, the controlling factor of reactivity. For example:
CH3I + NH3 --> CH3NH3+ + I-
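As an illustration of the Ka, pKa and pH definitions above, the following Python sketch solves the weak-acid equilibrium HA + H2O ⇌ H3O+ + A- exactly. The Ka value for acetic acid is a standard textbook number, assumed here for the example rather than taken from this page.

```python
import math

def weak_acid_ph(ka: float, c0: float) -> float:
    """pH of a monoprotic weak acid HA at initial concentration c0 (mol/L).

    Solves Ka = x^2 / (c0 - x) for x = [H3O+]; x is the positive root
    of the quadratic x^2 + Ka*x - Ka*c0 = 0.
    """
    x = (-ka + math.sqrt(ka * ka + 4 * ka * c0)) / 2
    return -math.log10(x)

# Acetic acid, a classic weak acid: Ka ~ 1.8e-5, so pKa ~ 4.74
# (a standard textbook value, assumed for this example).
ka_acetic = 1.8e-5
print(f"pKa = {-math.log10(ka_acetic):.2f}")
print(f"pH of 0.10 M acetic acid = {weak_acid_ph(ka_acetic, 0.10):.2f}")

# Consistency check of the 50%-dissociation statement above:
# when [HA] == [A-], Ka = [H3O+], so pH equals pKa.
```

For 0.10 M acetic acid this gives a pH near 2.9, and at exactly 50% dissociation the expression reduces to pH = pKa, matching the definition given above.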
http://chemwiki.ucdavis.edu/Core/Organic_Chemistry/Fundamentals/Acids_and_Bases%3B_Electrophiles_and_Nucleophiles
4.03125
History of rail transport in India
- This article is part of the history of rail transport by country series.
The history of rail transport in India began in the mid-nineteenth century. Prior to 1850, there were no railway lines in the country. This changed with the first railway in 1853. Railways were gradually developed, for a short while by the British East India Company and subsequently by the colonial British government, primarily to transport troops for their numerous wars, and secondly to transport cotton for export to mills in the UK. Transport of Indian passengers received little interest until 1947, when India gained independence and started to develop railways in a more judicious manner. By 1929, there were 66,000 km (41,000 mi) of railway lines serving most of the districts in the country. At that point, the railways represented a capital value of some £687 million, and carried over 620 million passengers and approximately 90 million tons of goods a year. The railways in India were a group of privately owned companies, mostly with British shareholders, whose profits invariably returned to Britain. The military engineers of the East India Company, later of the British Indian Army, contributed to the birth and growth of the railways, which gradually became the responsibility of civilian technocrats and engineers. However, construction and operation of rail transportation in the North West Frontier Province and in foreign nations during war or for military purposes was the responsibility of the military engineers.
The first train in the country had run between Roorkee and Piran Kaliyar on December 22, 1851. To temporarily solve the irrigation problems then facing farmers, a large quantity of clay was required, which was available in the Piran Kaliyar area, 10 km from Roorkee; the necessity of bringing in clay compelled the engineers to consider the possibility of running a train between the two points.
In 1845, along with Sir Jamsetjee Jejeebhoy, Hon. Jaganath Shunkerseth (known as Nana Shankarsheth) formed the Indian Railway Association. Eventually, the association was incorporated into the Great Indian Peninsula Railway, and Jeejeebhoy and Shankarsheth became the only two Indians among the ten directors of the GIP railways. As a director, Shankarsheth participated in the very first commercial train journey in India, between Bombay and Thane on 16 April 1853, in a 14-carriage train drawn by 3 locomotives named Sultan, Sindh and Sahib. The journey was around 21 miles in length and took approximately 45 minutes.
A British engineer, Robert Maitland Brereton, was responsible for the expansion of the railways from 1857 onwards. The Calcutta-Allahabad-Delhi line was completed by 1864. The Allahabad-Jabalpur branch line of the East Indian Railway opened in June 1867. Brereton was responsible for linking this with the Great Indian Peninsula Railway, resulting in a combined network of 6,400 km (4,000 mi). Hence it became possible to travel directly from Bombay to Calcutta via Allahabad. This route was officially opened on 7 March 1870, and it was part of the inspiration for French writer Jules Verne's book Around the World in Eighty Days. At the opening ceremony, the Viceroy Lord Mayo concluded that "it was thought desirable that, if possible, at the earliest possible moment, the whole country should be covered with a network of lines in a uniform system". By 1875, about £95 million (equal to £117 billion in 2012) had been invested by British companies in Indian guaranteed railways.
It later transpired that there was heavy corruption in these investments, on the part of both members of the British colonial government in India and companies that supplied machinery and steel in Britain. This resulted in railway lines and equipment costing nearly double what they should have cost. By 1880 the network route was about 14,500 km (9,000 mi), mostly radiating inward from the three major port cities of Bombay, Madras and Calcutta. By 1895, India had started building its own locomotives, and in 1896 sent engineers and locomotives to help build the Uganda Railways. In 1900, the GIPR became a British government owned company. The network spread to the modern-day states of Assam, Rajasthan, Telangana and Andhra Pradesh, and soon various independent kingdoms began to have their own rail systems.
In 1901, an early Railway Board was constituted, but the powers were formally invested under Lord Curzon. It served under the Department of Commerce and Industry and had a government railway official serving as chairman, and a railway manager from England and an agent of one of the company railways as the other two members. For the first time in its history, the Railways began to make a profit.
In 1907, almost all the rail companies were taken over by the government. The following year, the first electric locomotive made its appearance. With the arrival of World War I, the railways were used to meet the needs of the British outside India. With the end of the war, the railways were in a state of disrepair and collapse. In 1920, with the network having expanded to 61,220 km, a need for central management was mooted by Sir William Acworth. Based on the East India Railway Committee chaired by Acworth, the government took over the management of the Railways and detached the finances of the Railways from other governmental revenues.
The growth of the rail network significantly decreased the impact of famine in India. According to Robin Burgess and Dave Donaldson, "the ability of rainfall shortages to cause famine disappeared almost completely after the arrival of railroads."
The period between 1920 and 1929 was a period of economic boom. Following the Great Depression, however, the railways suffered economically for the next eight years. The Second World War severely crippled the railways. Trains were diverted to the Middle East and, later, the Far East to combat the Japanese. Railway workshops were converted to ammunition workshops, and some tracks (such as Churchgate to Colaba in Bombay) were dismantled for use in the war in other countries. By 1946 all rail systems had been taken over by the government.
In 1904, the idea of electrifying the railway network was proposed by W. H. White, chief engineer of the then Bombay Presidency government. He proposed the electrification of the two Bombay-based companies, the Great Indian Peninsula Railway and the Bombay Baroda and Central India Railway (now known as CR and WR respectively). Both companies were in favour of the proposal. However, it took another year to obtain the necessary permissions from the British government and to upgrade the railway infrastructure in Bombay city. The government of India appointed Mr Merz as a consultant to give an opinion on the electrification of railways, but Mr Merz resigned before making any concrete suggestions, except the replacement of the first Vasai bridge on the BB&CI by a stronger one. Moreover, as the project was in the process of being executed, the First World War broke out and put the brakes on the project.
The First World War placed heavy strain on the railway infrastructure in India. Railway production in the country was diverted to meet the needs of British forces outside India, and by the end of the war the Indian Railways were in a state of dilapidation and disrepair.

By 1920, Mr Merz had formed a consultancy firm of his own with a partner, Mr Maclellan. The government retained his firm for the railway electrification project. Plans were drawn up for rolling stock and electric infrastructure for the Bombay-Poona/Igatpuri/Vasai and Madras-Tambaram routes. The Secretary of State for India sanctioned these schemes in October 1920. All the inputs for the electrification, except the power supply, were imported from various companies in England.

And similar to the running of the first ever railway train from Bombay to Thane on April 16, 1853, the first ever electric train in India also ran from Bombay. The debut journey, however, was a shorter one: the first electric train ran between Bombay (Victoria Terminus) and Kurla, a distance of 16 km, on February 3, 1925, along the city's harbour route. The section was electrified at 1,500 volts DC. The opening ceremony was performed by Sir Leslie Wilson, the Governor of Bombay, at Victoria Terminus station in the presence of a very large and distinguished gathering.

India's first electric locomotives (two of them), however, had already made their appearance on Indian soil much earlier. They were delivered to the Mysore Gold Fields by Bagnalls (Stafford), with overhead electrical equipment by Siemens, as early as 1910. Various sections of the railway network were progressively electrified and commissioned between 1925 and 1930.

In 1956, the government decided to adopt 25 kV AC single-phase traction as a standard for the Indian Railways to meet the challenge of growing traffic. An organisation called the Main Line Electrification Project, which later became the Railway Electrification Project and still later the Central Organisation for Railway Electrification, was established. The first 25 kV AC traction section in India was Burdwan-Mughalsarai via the Grand Chord.

Corruption in British Indian Railways

Sweeney (2015) describes the large-scale corruption that existed in the financing of British Indian railways, from its commencement in the 1850s, when tracks were being laid out, and later in their operation. The ruling colonial British government was too focused on transporting goods for export to Britain, and hence did not use the railways to transport food to prevent famines such as the Great Bengal famines in 1905 and 1942. Indian economic development was never considered when deciding the rail network or the places to be connected. This also resulted in the construction of many white elephants paid for by the natives, as commercial interests lobbied government officials with kickbacks. Government officials of the railways, especially ICS officials, and British nationals who participated in decision making, such as James Mackay of Bengal, were later rewarded after retirement with directorships in the City or in the London head offices and board rooms of these very so-called Indian railway companies. Poor resource allocation resulted in losses of hundreds of millions of pounds for Indians, including those in opportunity costs. Most shareholders of the railway companies set up were British, and the head offices of most of these companies were in London, thus allowing Indian money to flow out of the country legally.
As a result, the railway debt made up nearly 50% of the Indian national debt from 1903 to 1945. Roberts and Minto spent large amounts trying to develop the Indian railways in the North West Frontier Province, resulting in large, disproportionate losses. Guaranteed and subsidised companies were floated to run the railways, and large guarantee payments were made despite there being a famine in Bengal. The EIR, GIPR and Bombay Baroda (all operating in India and registered in London) had monopolies which generated profits; however, these profits were never reinvested for the development of India.

Start of Independent Indian Railways

Following independence in 1947, India inherited a decrepit rail network. About 40 per cent of the railway lines lay in the newly created Pakistan. Many lines had to be rerouted through Indian territory, and new lines had to be constructed to connect important cities such as Jammu. A total of 42 separate railway systems, including 32 lines owned by the former Indian princely states, existed at the time of independence, spanning a total of 55,000 km. These were amalgamated into the Indian Railways. Since then, independent India has more than quadrupled the length of railway lines in the country.

In 1952, it was decided to reorganise the existing rail networks into zones, and a total of six zones came into being that year. As India developed its economy, almost all railway production units started to be built indigenously, and the Railways began to electrify its lines to AC. On 6 September 2003, six further zones were created from existing zones for administrative purposes, and one more zone was added in 2006; the Indian Railways now has sixteen zones.

In 1985, steam locomotives were phased out. In 1987, computerisation of reservations was first carried out in Bombay, and in 1989 train numbers were standardised to four digits. In 1995, the entire railway reservation system was computerised. In 1998, the Konkan Railway was opened, spanning difficult terrain through the Western Ghats.

In 1984 Kolkata became the first Indian city to get a metro rail system, followed by the Delhi Metro in 2002, Bangalore's Namma Metro in 2011, the Mumbai Metro and Mumbai Monorail in 2014, and the Chennai Metro in 2015. Many other Indian cities are currently planning urban rapid transit systems.

- Dalrymple, William (4 March 2015). "The East India Company: The original corporate raiders". The Guardian. Retrieved 16 August 2015.
- Sandes, Lt Col E.W.C. (1935). The Military Engineer in India, Vol II. Chatham: The Institution of Royal Engineers.
- "Postindependence: from dominance to decline". http://www.britannica.com/. Britannica Portal. Retrieved 24 June 2014.
- R.P. Saxena, Indian Railway History Timeline
- British investment in Indian railway reaches £100m by 1875
- Burgess, Robin; Donaldson, Dave (2010). "Can Openness Mitigate the Effects of Weather Shocks? Evidence from India's Famine Era". American Economic Review 100 (2): 449-453. doi:10.1257/aer.100.2.449. Retrieved 28 July 2015.
- Sweeney, Stuart (2015). Financing India's Imperial Railways, 1875-1914. London: Routledge. pp. 186-188. ISBN 1317323777.
- Tharoor, Shashi. "How a Debate Was Won in London Against British Colonisation of India". NDTV News. Retrieved 16 August 2015.
- Andrew, W. P. (1884). Indian Railways. London: W H Allen.
- Awasthi, A. (1994). History and Development of Railways in India. New Delhi: Deep and Deep Publications.
- Bhandari, R.R. (2006). Indian Railways: Glorious 150 Years (2nd ed.).
New Delhi: Publications Division, Ministry of Information & Broadcasting, Govt. of India. ISBN 8123012543.
- Ghosh, S. (2002). Railways in India - A Legend. Kolkata: Jogemaya Prokashani.
- Government of India Railway Board (1919). History of Indian Railways Constructed and In Progress corrected up to 31st March 1918. India: Government Central Press.
- Hurd, John; Kerr, Ian J. (2012). India's Railway History: A Research Handbook. Handbook of Oriental Studies. Section 2, South Asia, 27. Leiden; Boston: Brill. ISBN 9789004230033.
- Huddleston, George (1906). History of the East Indian Railway. Calcutta: Thacker, Spink and Co.
- Kerr, Ian J. (1995). Building the Railways of the Raj. Delhi: Oxford University Press.
- Kerr, Ian J. (2001). Railways in Modern India. Oxford in India Readings. New Delhi; New York: Oxford University Press. ISBN 0195648285.
- Kerr, Ian J. (2007). Engines of Change: The Railroads That Made India. Engines of Change series. Westport, Conn, USA: Praeger. ISBN 0275985644.
- Khosalā, Guradiāla Siṅgha (1988). A History of Indian Railways. New Delhi: Ministry of Railways, Railway Board, Government of India. OCLC 311273060.
- Rao, M.A. (1999). Indian Railways (3rd ed.). New Delhi: National Book Trust, India. ISBN 8123725892.
- Sahni, Jogendra Nath (1953). Indian Railways: One Hundred Years, 1853 to 1953. New Delhi: Ministry of Railways (Railway Board). OCLC 3153177.
- Satow, M. & Desmond R. (1980). Railways of the Raj. London: Scolar Press.
- South Indian Railway Co. (1900). Illustrated Guide to the South Indian Railway Company, Including the Mayavaram-Mutupet and Peralam-Karaikkal Railways. Madras: Higginbotham.
- — (1910). Illustrated Guide to the South Indian Railway Company. London.
- — (2004). Illustrated Guide to the South Indian Railway Company. Asian Educational Services. ISBN 81-206-1889-0.
- Vaidyanathan, K.R. (2003). 150 Glorious Years of Indian Railways. Mumbai: English Edition Publishers and Distributors (India). ISBN 8187853492.
- Westwood, J.N. (1974). Railways of India. Newton Abbot, Devon, UK; North Pomfret, Vt, USA: David & Charles. ISBN 071536295X.
- "History of the Indian railways in chronological order". IRFC server. Indian Railways Fan Club. Retrieved 2007-10-21.
- Roychoudhury, S. (2004). "A chronological history of India's railways". Retrieved 2007-10-21.
https://en.wikipedia.org/wiki/History_of_rail_transport_in_India
4.15625
As the sun heads toward its 2013 maximum, the corresponding increase in space weather may temporarily strip the radiation belts around Earth of their charged electrons. But a new study of data recorded by 11 independent spacecraft reveals that the deadly particles are blown into space rather than cast into our planet's atmosphere, as some scientists have suggested.

Streams of highly charged electrons zip through the Van Allen radiation belts circling Earth. When particles from the sun collide with the planet's magnetic field, which shields Earth from the worst effects, the resulting geomagnetic storms can decrease the number of dangerous electrons. Where those particles go is something physicists have long puzzled over — and since they could wreak havoc on sensitive telecommunication satellites and pose a risk to astronauts in space, it's an important question, researchers say.

At the heart of the geomagnetic storm mystery are strange dips, known as dropouts, in the number of charged particles in the radiation belts. These lapses can happen multiple times per year, but when the sun is going through an active period — as it is now — the number can increase to several times per month, scientists involved in the new study explained.

Astronomers have previously suggested that the missing particles could have been ejected toward Earth, where they might have been absorbed by the atmosphere. This activity could still explain some of the loss, particularly that which occurs when no geomagnetic storm has been detected, but not all of it.

A team of scientists from the University of California, Los Angeles, observed a geomagnetic storm in January 2011 with a plethora of instruments. They noticed that as intense solar activity pushes against the outer edge of Earth's magnetic field on the daylight side, the field lines can cross, allowing the damaging electrons to escape into space. "Those particles are entirely lost," lead scientist Drew Turner told SPACE.com. The research is detailed in the Jan. 29 edition of the journal Nature Physics.

Although material ejected from the sun can deplete the Earth's outer radiation belt, it can also resupply the belt with more charged particles in only a few days, Turner said. Previous studies have found that the volume of electrons can spike after a solar event. When the belts are first almost depleted, Turner's observations imply a larger influx than previously accounted for.

The team used 11 different satellites, including NASA's five Themis spacecraft and two weather satellites operated by the National Oceanic and Atmospheric Administration and the European Organization for the Exploitation of Meteorological Satellites, to study a small geomagnetic storm. The abundance of spacecraft allowed them to capture a complete picture of the interactions between Earth's magnetic field and the particles streaming from the sun. "It's impossible to get the sense of the entire process with one pinpoint of information," Turner said. He called the lineup of the various craft "lucky."

The upcoming launch of NASA's Radiation Belt Storm Probes Mission (RBSP), scheduled for August 2012, may help to remove some elements of chance from further studies. "RBSP will provide two more points of view with perfect instruments for radiation belt studies," he said.
http://www.space.com/14400-killer-electrons-radiation-belt-space.html?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+spaceheadlines+%28SPACE.com+Headline+Feed%29
4.15625
Have you ever tried to compare rational numbers? Take a look at this dilemma. Terry is studying the stock market. She notices that in one day, the stock that she was tracking lost value: it decreased .5%. On the next day, it lost value again, this time decreasing .45. Which day had the worse decrease? Comparing rational numbers will help you with this task.

To compare and order rational numbers, you should first convert each number to the same form so that they are easier to compare. Usually it will be easier to convert each number to a decimal. Then you can use a number line to help you order the numbers.

Take a look at this situation. Place the following numbers on a number line in their approximate locations: 8%, 1/8, 0.8.

Convert each number to a decimal: 8% = 0.08 and 1/8 = 0.125, while 0.8 is already a decimal. All of the numbers are between 0 and 1. You can use place value to find the correct order of the numbers. Since 0.08 has a 0 in the tenths place, 8% is the least number. Since 0.125 has a 1 in the tenths place, 1/8 is the next greatest number. Since 0.8 has an 8 in the tenths place, it is the greatest number. We wrote these three values on a number line. This is one way to show the different values.

We can also use inequality symbols. Inequality symbols are < (less than), > (greater than), ≤ (less than or equal to), and ≥ (greater than or equal to).

Here is another one. Which inequality symbol correctly compares 0.29% to 0.029? Change the percent to a decimal, then use place value to compare the numbers. Move the decimal point two places to the left: 0.29% = 0.0029. Now compare the place value of each number. Both numbers have a 0 in the tenths place. 0.029 has a 2 in the hundredths place, while 0.0029 has a 0 in the hundredths place. So 0.0029 is less than 0.029, and therefore 0.29% < 0.029.

Remember, the key to comparing and ordering rational numbers is to be sure that they are all in the same form. You want to have all fractions, all decimals or all percentages so that your comparisons are accurate. You may need to convert before you compare!

Now let's go back to the dilemma from the beginning of the Concept. To figure out this dilemma, you have to compare .5% and .45. First, let's convert them both to percents. .5% is already a percent; .45 becomes 45%. Now let's compare: .5% < 45%. The second day was definitely worse.

Vocabulary

Rational number: a number that can be written in fraction form.
Integers: the set of whole numbers and their opposites.
Percent: a number representing a part out of 100.
Terminating decimal: a decimal that has an ending, even though many digits may be present.
Repeating decimal: a decimal in which one or more digits repeat in a pattern.
Irrational number: a decimal that has no ending; pi, or 3.14..., is an example.
Inequality symbols: symbols used to compare numbers, such as < or >.

Here is one for you to try on your own. Order the following rational numbers from least to greatest. First, let's convert them all to the same form. We could use fractions, decimals or percents, but for this situation, let's use percents. .5% stays the same. Now we can easily order them. Be sure to write them as they first appeared. This is our answer.

Khan Academy: Compare and Order Rational Numbers

Directions: Compare each pair of rational numbers using < or >.
.34 −−−−− .87
−8 −−−−− −11
1/6 −−−−− 7/8
.45 −−−−− 50%
66% −−−−− 3/4
.78 −−−−− 77%
4/9 −−−−− 25%
.989898 −−−−− .35
.67 −−−−− 32%
.123000 −−−−− .87

Directions: Use the order of operations to evaluate the following expressions.
3x, when x is .50
4y, when y is 3/4
5x + 1, when x is −1/2
6y − 7, when y is 1/2
3x − 4x, when x is −5
6x + 8y, when x is 2 and y is −4
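The convert-then-compare routine described in this lesson is mechanical enough to script. The sketch below is our own illustration, not part of the original lesson; it uses Python's standard fractions module, and the helper name to_fraction is an assumption of the example.

```python
from fractions import Fraction

def to_fraction(text: str) -> Fraction:
    """Convert a percent like '8%', a fraction like '1/8',
    or a decimal like '0.8' into an exact Fraction."""
    text = text.strip()
    if text.endswith("%"):
        return Fraction(text[:-1]) / 100   # 8% -> 8/100 = 0.08
    return Fraction(text)                  # handles '1/8' and '0.8'

# Order 8%, 1/8, 0.8 from least to greatest, as in the lesson.
numbers = ["8%", "1/8", "0.8"]
print(sorted(numbers, key=to_fraction))    # ['8%', '1/8', '0.8']

# Terry's stock dilemma: compare a 0.5% drop with a 0.45 drop.
print(to_fraction("0.5%") < to_fraction("0.45"))  # True: 0.005 < 0.45
```

Exact fractions avoid floating-point rounding pitfalls, which matters when two values in different forms, such as .45 and 45%, must compare as exactly equal.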
http://www.ck12.org/book/CK-12-Middle-School-Math-Concepts-Grade-8/r10/section/2.16/
4.09375
Phosphorus is a finite (limited) resource which is relatively scarce and is not evenly distributed across the Earth. Only a few countries have significant reserves, and these are (in order of phosphate rock economic reserves): China, Morocco and Western Sahara, the United States, and Jordan. Means of phosphorus production other than mining are unavailable because of its non-gaseous environmental cycle. The predominant source of phosphorus is phosphate rock and, in the past, guano.

According to some researchers, Earth's phosphorus reserves are expected to be completely depleted in 50-100 years, with peak phosphorus reached in approximately 2030. Others suggest that supplies will last for several hundred years. The question is not settled, and researchers in different fields regularly publish different estimates of the rock phosphate reserves.

The peak phosphorus concept is connected with the concept of planetary boundaries. Phosphorus, as part of biogeochemical processes, belongs to one of the nine "Earth system processes" which are known to have boundaries. As long as the boundaries are not crossed, they mark the "safe zone" for the planet.

Estimates of world phosphate reserves

The accurate determination of peak phosphorus depends on knowing the total world phosphate reserves and the future demand for rock phosphate. In 2012, the United States Geological Survey (USGS) estimated that phosphorus reserves worldwide were 71 billion tons, while world mining production in 2011 was 0.19 billion tons; this has been taken to mean that there are enough reserves to last for at least 370 years, and possibly a lot longer. These reserve figures are widely used, but others suggest that there has been little external verification of the estimate.

There are many different views as to the extent of world phosphate resources. The International Fertilizer Development Center (IFDC), in a 2010 report, estimated that global phosphate rock resources would last for several hundred years. This is disputed by a recent review which concludes that the IFDC report "presents an inflated picture of global reserves, in particular those of Morocco, where largely hypothetical and inferred resources have simply been converted to 'reserves'". Another review suggests that it is "not very likely" that there would be significant depletion of extractable rock phosphate by 2100.

"Reserves" refer to the amount assumed recoverable at current market prices, while "resources" mean the total estimated amounts in the Earth's crust. Phosphorus comprises 0.1% by mass of the average rock (while, for perspective, its typical concentration in vegetation is 0.03% to 0.2%), and consequently there are quadrillions of tons of phosphorus in Earth's 3 × 10^19 ton crust, albeit at predominantly lower concentrations than the deposits counted as reserves, which have been inventoried and are cheaper to extract.

Economists have pointed out that there do not need to be shortages of rock phosphate to cause price fluctuations, as these have already occurred due to various demand- and supply-side factors. Rock phosphate shortages (or just significant price increases) would have a big impact on the world's food security. Many agricultural systems depend on supplies of inorganic fertiliser, which use rock phosphate. Unless systems change, shortages of rock phosphate could lead to shortages of inorganic fertiliser, which could in turn affect crop growth and cause starvation.
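The 370-year figure quoted above is a simple static reserves-to-production ratio. As a back-of-envelope check (our own illustration, not from the source), using the USGS numbers cited in the text and assuming production stays flat at the 2011 level:

```python
# Static reserve lifetime = reserves / annual production.
# USGS figures quoted above: 71 billion tons of reserves,
# 0.19 billion tons mined in 2011 (assumes flat production).
reserves_bt = 71.0            # billion tons of phosphate rock
production_bt_per_yr = 0.19   # billion tons per year

print(f"{reserves_bt / production_bt_per_yr:.0f} years")  # -> 374 years
```

Any growth or decline in demand changes this horizon substantially, which is one reason published estimates of phosphate longevity diverge so widely.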
Exhaustion of guano reserves

In 1609 Garcilaso de la Vega wrote the book Comentarios Reales, in which he described many of the agricultural practices of the Incas prior to the arrival of the Spaniards and introduced the use of guano as a fertilizer. As Garcilaso described, the Incas near the coast harvested guano. In the early 1800s Alexander von Humboldt introduced guano as a source of agricultural fertilizer to Europe after having discovered it on islands off the coast of South America. It has been reported that, at the time of its discovery, the guano on some islands was over 30 meters deep. The guano had previously been used by the Moche people as a source of fertilizer, who mined it and transported it back to Peru by boat. International commerce in guano did not start until after 1840. By the start of the 20th century guano had been nearly completely depleted, and it was eventually overtaken by the discovery of superphosphate.

Phosphorus conservation and recycling

A huge amount of phosphorus is transferred from the soil in one location to another as food is transported across the world, taking the phosphorus it contains with it. Once consumed by humans, it can end up in the local environment (in the case of open defecation, which is still widespread on a global scale) or in rivers or the ocean via sewage systems and sewage treatment plants, in the case of cities connected to sewer systems. An example of one such crop in South America that takes up large amounts of phosphorus is soy. At the end of its journey, the phosphorus often ends up in rivers in Europe and the USA.

In an effort to postpone the onset of peak phosphorus, several methods of reducing and reusing phosphorus are in practice, for example in agriculture and in sanitation systems. The Soil Association, the UK organic agriculture certification and pressure group, issued a report in 2010, "A Rock and a Hard Place", encouraging more recycling of phosphorus. One potential solution to the shortage of phosphorus is greater recycling of human and animal wastes back into the environment.

Reducing agricultural runoff and soil erosion can slow the frequency with which farmers have to reapply phosphorus to their fields. Agricultural methods such as no-till farming, terracing, contour tilling, and the use of windbreaks have been shown to reduce the rate of phosphorus depletion from farmland. These methods are still dependent on a periodic application of phosphate rock to the soil, and as such, methods to recycle the lost phosphorus have also been proposed. Perennial vegetation, such as grassland or forest, is much more efficient in its use of phosphate than arable land. Strips of grassland or forest between arable land and rivers can greatly reduce losses of phosphate and other nutrients. Integrated farming systems which use animal sources to supply phosphorus for crops do exist at smaller scales, and applying the system at a larger scale is a potential alternative for supplying the nutrient, although it would require significant changes to the widely adopted modern crop-fertilizing methods.

The oldest method of recycling phosphorus is the reuse of animal manure and human excreta in agriculture. Via this method, the phosphorus in the foods consumed is excreted, and the animal or human excreta are subsequently collected and re-applied to the fields. Although this method has maintained civilizations for centuries, the current system of manure management is not logistically geared towards application to crop fields on a large scale.
At present, manure application alone could not meet the phosphorus needs of large-scale agriculture. Despite that, it is still an efficient method of recycling used phosphorus and returning it to the soil.

Sewage treatment plants that have an enhanced biological phosphorus removal step produce a sewage sludge that is rich in phosphorus. Various processes have been developed to extract phosphorus from sewage sludge directly, from the ash left after incineration of the sludge, or from other products of sludge treatment. This includes the extraction of phosphorus-rich materials such as struvite from waste processing plants; the struvite can be made by adding magnesium to the waste. Some companies, such as Ostara in Canada and NuReSys in Belgium, are already using this technique to recover phosphate, and Ostara has eight operating plants worldwide. Research on phosphorus recovery from sewage sludge has been carried out in Sweden and Germany since around 2003, but the technologies currently under development are not yet cost-effective, given the current price of phosphorus on the world market.

- Cordell, Dana; Drangert, Jan-Olof; White, Stuart (2009). "The story of phosphorus: Global food security and food for thought". Global Environmental Change 19 (2): 292-305. doi:10.1016/j.gloenvcha.2008.10.009. ISSN 0959-3780.
- Rosemarin, A. (2010). Peak Phosphorus, The Next Inconvenient Truth? 2nd International Lecture Series on Sustainable Sanitation, World Bank, Manila, October 15, 2010.
- Neset, Tina-Simone S.; Cordell, Dana (2011). "Global phosphorus scarcity: identifying synergies for a sustainable future". Journal of the Science of Food and Agriculture 92 (1): 2-6. doi:10.1002/jsfa.4650.
- Lewis, Leo (23 June 2008). "Scientists warn of lack of vital phosphorus as biofuels raise demands" (PDF). Times Online.
- IFDC.org - IFDC Report Indicates Adequate Phosphorus Resources, Sep-2010
- Edixhoven, J. D.; Gupta, J.; Savenije, H. H. G. (2014). "Recent revisions of phosphate rock reserves and resources: a critique". Earth System Dynamics 5 (2): 491-507. doi:10.5194/esd-5-491-2014. ISSN 2190-4987.
- Rockström, J.; Steffen, W.; and 26 others (2009). "Planetary boundaries: exploring the safe operating space for humanity". Ecology and Society 14 (2): 32.
- U.S. Geological Survey Phosphate Rock
- Sutton, M.A.; Bleeker, A.; Howard, C.M.; et al. (2013). Our Nutrient World: The challenge to produce more food and energy with less pollution. Centre for Ecology and Hydrology, Edinburgh, on behalf of the Global Partnership on Nutrient Management and the International Nitrogen Initiative. ISBN 978-1-906698-40-9.
- Gilbert, Natasha (8 October 2009). "The disappearing nutrient". Nature 461: 716-718. doi:10.1038/461716a.
- Van Vuuren, D.P.; Bouwman, A.F.; Beusen, A.H.W. (2010). "Phosphorus demand for the 1970-2100 period: A scenario analysis of resource depletion". Global Environmental Change 20 (3): 428-439. doi:10.1016/j.gloenvcha.2010.04.004. ISSN 0959-3780.
- U.S. Geological Survey Phosphorus Soil Samples
- Abundance of Elements
- American Geophysical Union, Fall Meeting 2007, abstract #V33A-1161. Mass and Composition of the Continental Crust
- Heckenmüller, M.; Narita, D.; Klepper, G. (2014). "Global availability of phosphorus and its implications for global food supply: An economic overview" (PDF). Kiel Working Paper, No. 1897. Retrieved May 2015.
- Amundson, R.; Berhe, A. A.; Hopmans, J. W.; Olson, C.; Sztein, A. E.; Sparks, D. L. (2015). "Soil and human security in the 21st century".
Science 348 (6235): 1261071. doi:10.1126/science.1261071. ISSN 0036-8075.
- Pollan, Michael (11 April 2006). The Omnivore's Dilemma: A Natural History of Four Meals. Penguin Press. ISBN 1-59420-082-3.
- Leigh, G. J. (2004). The World's Greatest Fix: A History of Nitrogen and Agriculture. Oxford University Press. ISBN 0-19-516582-9.
- Skaggs, Jimmy M. (May 1995). The Great Guano Rush: Entrepreneurs and American Overseas Expansion. St. Martin's Press. ISBN 0-312-12339-6.
- EOS magazine, May 2013
- soilassociation.org - A Rock and a Hard Place: Peak phosphorus and the threat to our food security, 2010
- Burns, Melinda (10 February 2010). "The Story of P(ee)". Miller-McCune. Retrieved 2 February 2012.
- Udawatta, Ranjith P.; Henderson, Gray S.; Jones, John R.; Hammer, David (2011). "Phosphorus and nitrogen losses in relation to forest, pasture and row-crop land use and precipitation distribution in the midwest USA". Journal of Water Science 24 (3): 269-281.
- Sartorius, C.; von Horn, J.; Tettenborn, F. (2011). Phosphorus recovery from wastewater - state-of-the-art and future potential. Conference presentation at the Nutrient Recovery and Management Conference organised by the International Water Association (IWA) and the Water Environment Federation (WEF) in Florida, USA.
- Hultman, B.; Levlin, E.; Plaza, E.; Stark, K. (2003). Phosphorus Recovery from Sludge in Sweden - Possibilities to meet proposed goals in an efficient, sustainable and economical way.
https://en.wikipedia.org/wiki/Peak_phosphorus
4.0625
The absence of Southern members of Congress allowed the Northern Republicans (and Democrats) to act in the economic interests of the North during the Civil War. The main impact of this was to allow the Congress to pass laws that helped to develop the west.

Before the war, the North and South could not agree on developing the west. Of course, the South wanted slavery to be allowed while the North did not. This blocked any real agreement on what to do. With the Southerners out of the way, Congress developed the west. In 1862, it passed three laws that were very important in this. It passed the Pacific Railroad Acts, the Homestead Act, and the Morrill Land Grant Act. These laws helped to build the railroads that brought settlers west. They helped to lure settlers with the promise of cheap land. They helped to create colleges that would help develop new and better agricultural techniques. By doing these things, the Congress was able to help to open the west to white settlement and economic development.
http://www.enotes.com/homework-help/describe-development-north-after-civil-war-how-did-377366
4.21875
Ozone is a gas made of three oxygen atoms. It is bluish in color and harmful to breathe. Most of the Earth's ozone (about 90%) is in the stratosphere, the layer of the atmosphere from about 10 km to about 50 km in altitude. Ozone is important because it absorbs specific wavelengths of ultraviolet radiation that are particularly harmful to living organisms. The ozone layer prevents most of this harmful radiation from reaching the ground.

As concern grew over depletion of ozone in the stratosphere, scientists examined the role of volcanoes. They noted that the gases emitted by most eruptions never leave the troposphere, the layer of the atmosphere from the surface to about 10 km.

Hydrogen chloride released by volcanoes can cause drastic reductions in ozone if concentrations reach high levels (about 15-20 ppb by volume) (Prather and others, 1984). As the El Chichon eruption cloud was spreading, the amount of HCl in the cloud increased by 40% (Mankin and Coffey, 1984). This increase represents about 10% of the global inventory of HCl in the stratosphere. Other large eruptions (Tambora, Krakatau, and Agung) may have released almost ten times more HCl into the stratosphere than the amount of chlorine commonly present there (Pinto and others, 1989).

At least two factors reduce the impact of HCl. First, chlorine appears to be preferentially released during low levels of volcanic activity and thus may be limited to the troposphere, where it can be scrubbed out by rain. Second, hydrogen chloride may condense in the rising volcanic plume, again to be scrubbed out by rain or ice. The lack of HCl in ice cores with high amounts of H2SO4 (from large eruptions) may indicate that ambient stratospheric conditions are extremely efficient at removing HCl. Thus, most HCl never has the opportunity to react with ozone. No increase in stratospheric chlorine was observed during the 1991 eruption of Mt. Pinatubo.

Volcanoes account for about 3% of chlorine in the stratosphere. Methyl chloride produces about 15% of the chlorine entering the stratosphere. The remaining 82% of stratospheric chlorine comes from man-made sources, mostly in the form of chlorofluorocarbons.

Although volcanic gases do not play a direct role in destroying ozone, they may play a harmful indirect role. Scientists have found that particles, or aerosols, produced by major volcanic eruptions accelerate ozone destruction. The particles themselves do not directly destroy ozone, but they do provide a surface upon which chemical reactions can take place. This enhances chlorine-driven ozone depletion. Fortunately, the effects from volcanoes are short-lived, and after two or three years the volcanic particles settle out of the atmosphere.

A study of ozone amounts before and after the 1991 eruption of Mt. Pinatubo shows that there were significant decreases in lower stratospheric ozone (Grant and others, 1994). The amount of ozone in the 16-28 km region was reduced by some 33% compared to pre-eruption amounts. A similarly reduced amount of ozone was measured in the summer of 1992.
http://volcano.oregonstate.edu/ozone-destruction
4.15625
Motivation Teacher Resources Find Motivation educational ideas and activities Showing 1 - 20 of 149 resources How to Become Self Motivated Students demonstrate their self discipline and motivation by brainstorming their own personal self discipline goals. In this self discipline lesson, students analyze examples of self discipline and self control by completing a worksheet... 5th - 6th Social Studies & History Reading Poetry in the Middle Grades While this first appears to be a description of 20 poetry activities, it is actually the introduction, rationale, and explanation of the activities and one sample lesson plan for "Nothing Gold Can Stay" by Robert Frost. After a copy of... 6th - 8th English Language Arts CCSS: Designed Understanding Behaviors Required to Maintain Employment Now that your upper grader has a job, you need to teach him how to keep it. Discuss appropriate workplace behavior such as teamwork, initiative, and self-motivation. Also bridge the topic of what is and what isn't ethical behavior and... 9th - 12th 21st Century Skills Introduction PPS Writing Units of Study Imagine a year-long writing plan aligned to Common Core standards. Here it is! This resource packet, the first in the series of units, introduces the plan, provides an overview of the research-based approach, and a discussion of the key... 4th English Language Arts All About Me, My Family and Friends Pupils use general skills and strategies of the writing process to show their role in their family, school, friendships, the community and the world. They demonstrate their self-motivation and increasing responsibility for their own... 1st - 2nd English Language Arts Is an Extended School Day the Right Choice for Middle School Students? Should the school day be extended? Talk about a controversial topic! Before engaging in a fortified conversation about this topic, class members examine a chart that summarizes the views of business and political leaders, teachers,... 7th - 10th English Language Arts Tangent to a Circle From a Point Learners see application of construction techniques in a short but sophisticated problem. Combining the properties of inscribed triangles with tangent lines and radii makes a nice bridge between units, a way of using information about... 9th - 10th Math CCSS: Designed Heroes and Heroines: King David, Julius Caesar, Cleopatra and Napoleon Students identify and examine four heroes from history and imaginative literature. They discuss the characteristics of a hero and share perceptions of what makes a hero. By comparing and analyzing a few historical and literary figures,... 10th - 12th English Language Arts
http://www.lessonplanet.com/lesson-plans/motivation
4.1875
Sickle Cell Anemia Teacher Resources Find Sickle Cell Anemia educational ideas and activities Showing 1 - 20 of 120 resources Exploring Structure and Function in Biological Systems High schoolers examine different levels of organization in biological systems for structure and function relationships. In this biological systems lesson, students use Internet resources to look at structure and function in the eye, the... 9th - 12th Science Dragon Genetics ~ Independent Assortment and Genetic Linkage Imagine a pair of dragons that produce offspring and determine the percentage of the hatchlings have wings and large antlers. This fantastic activity draws genetics learners in, introduces them to alleles, meiosis, phenotypes, genotypes,... 9th - Higher Ed Science Hereditary Defects: Down Syndrome and Sickle Cell Anemia Young scholars solve problems like the following examples: 1. If you have 10,000 women, age 30, who have babies and one in 900 of these births will result in a Down syndrome baby, how many will have this disease? 2. 5,000 babies are... 5th - 8th Science Raven Chapter 13 Guided Notes: Patterns of Inheritance In this short space, it would be impossible to describe the breadth of this seven-page genetics worksheet. Geared toward AP or college biology learners, they explore not only the basic vocabulary and concepts, but also the Law of... 11th - Higher Ed Science From Gene to Protein ~ Transcription and Translation Translate the process of protein synthesis to your molecular biologists with this instructional activity. It consists of reading, completing a table as a summary, comprehension questions, and a modeling activity for both transcription... 7th - 12th Science The Making of the Fittest: Natural Selection in Humans Sickle cell disease only occurs when both parents contribute the trait, and mostly in those of African descent. Where did it come from? How did it evolve? Tony Allison, a molecular biologist, noticed a connection between sickle cell and... 14 mins 8th - Higher Ed Science CCSS: Designed Protecting Athletes with Genetic Conditions Should school and professional teams test athletes for sickle cell trait? Will it protect them by providing knowledge or lead to discrimination by not allowing them to participate in sports? After learning about this genetic disorder,... 9th - 12th Science CCSS: Designed Allele Frequencies and Sickle Cell Anemia Lab Learners investigate how selective forces like food, predation and diseases affect evolution. In this genetics lesson, students use red and white beans to simulate the effect of malaria on allele frequencies. They analyze data collected... 7th - 9th Science The Making of the Fittest: Got Lactase? The Co-evolution of Genes and Culture Got milk? Only two cultures have had it long enough to develop the tolerance of lactose as an adult. Learn how the responsible genes evolved along with the cultures that have been consuming milk. This rich film is supplied with a few... 15 mins 8th - Higher Ed Science CCSS: Designed From Gene to Protein-Transcription and Translation Students identify the different steps involved in DNA transcription. In this genetics lesson, students model the translation process. They watch a video on sickle cell anemia and explain how different alleles create this condition. 9th - 10th Science Sickle Cell Anemia - Hope from Gene Therapy Can gene therapy treat sickle cell anemia? Genetics geniuses draw a Punnett square for this painful disease and then view a video about current gene therapy research. 
Then they discuss ethical questions related to this type of treatment.... 10 mins 8th - 12th Science
http://www.lessonplanet.com/lesson-plans/sickle-cell-anemia
4.03125
A View from Emerging Technology from the arXiv

First Observation of Gravitational Waves Is 'Imminent'

Astronomers have underestimated the strength of gravitational waves, which means they ought to be able to see them now, say astrophysicists.

Gravitational waves are ripples in the fabric of spacetime caused by cataclysmic events such as neutron stars colliding and black holes merging. The biggest of these events, and the easiest to see, are the collisions between supermassive black holes at the centre of galaxies. So an important question is how often these events occur.

Today, Sean McWilliams and a couple of pals at Princeton University say that astrophysicists have severely underestimated the frequency of these upheavals. Their calculations suggest that galaxy mergers are an order of magnitude more frequent than had been thought. Consequently, collisions between supermassive black holes must be more common too.

That has important implications. There is an intense multimillion-dollar race to be first to spot gravitational waves, but if the researchers are correct, the evidence may already be in the data collected by the first observatories.

The evidence that McWilliams and co rely on comes from various measurements of galaxy size and mass. This data shows that in the last six billion years, galaxies have roughly doubled in mass and quintupled in size. Astrophysicists know that there has been very little star formation in that time, so the only way for galaxies to grow is by merging, an idea borne out by various computer simulations of the way galaxies must evolve. These simulations suggest that galaxy mergers must be far more common than astronomers had thought.

That raises an interesting prospect—that the supermassive black holes at the centre of these galaxies must be colliding more often. McWilliams and co calculate that black hole mergers must be between 10 and 30 times more common than expected and that the gravitational-wave signals from these events are between three and five times stronger. That has important implications for astronomers' ability to see these signals.

Astrophysicists are intensely interested in these waves since they offer an entirely new way to study the cosmos. One way to spot them is to measure the way the waves stretch and squeeze space as they pass through Earth, a process that requires precise laser measurements inside machines costing hundreds of millions of dollars. The most sensitive of these machines is called LIGO, the Laser Interferometer Gravitational Wave Observatory in Washington state, which is currently being upgraded; it is not due to reach its design sensitivity until 2018-19.

Another method is to monitor the amazingly regular radio signals that pulsars produce and listen for the way these signals are distorted by the stretching and squeezing of space as gravitational waves pass through the solar system. So-called pulsar timing arrays largely rely on existing kit for monitoring pulsars and so are significantly cheaper than bespoke detectors.

Of course, everyone has assumed that the more sensitive bespoke detectors such as LIGO will be the first to see gravitational waves, although not until the end of the decade. But all that changes if gravitational waves turn out to be stronger than thought. And that's exactly what McWilliams and co predict. In fact, they say the waves are so strong that current pulsar monitoring kit ought to be capable of spotting them.
“We calculate … that the gravitational-wave signal may already be detectable with existing data from pulsar timing arrays,” say the Princeton team. Pulsar timing arrays are also increasing in sensitivity. If McWilliams and co are correct, this makes the detection of gravitational waves a near certainty within just a few years. Their most pessimistic estimate is that pulsar timing arrays will have nailed this by 2016. “We expect a detection by 2016 with 95% confidence,” they say. That’s an extraordinary prediction and a rather refreshing one, given the general reluctance in science to nail your colours to a particular mast. The first direct observation of gravitational waves will be one of the most important breakthroughs ever made in astronomy; the discoverer a shoo-in for a Nobel Prize. So the stakes could not be higher in this race, and this time there is a distinct chance of an outside bet taking the honours. Ref: http://arxiv.org/abs/1211.4590: The Imminent Detection Of Gravitational Waves From Massive Black-Hole Binaries With Pulsar Timing Arrays
https://www.technologyreview.com/s/507811/astrophysicists-on-the-verge-of-spotting-gravitational-waves/
4.03125
Marco Polo 1254-1324

Italian merchant and traveller.

A Venetian merchant, Polo was among the first travellers to the East to provide an account of that region in a Western language. His narrative, The Travels of Marco Polo, met with skepticism and disbelief upon its circulation, as the region had only previously been written about in legends such as those of Alexander the Great, and by William of Rubrouck, a French Franciscan friar who wrote a missionary's account of his trip to Mongolia upon his return to France in 1255. Many of Polo's previously unsubstantiated observations and claims were, however, confirmed by later travellers, and his work is now regarded by most scholars as the first accurate description of Asia by a European.

Polo was born in Venice in 1254 while his father Nicolo and his uncle Maffeo were away on a trading voyage during which they first met Kublai Khan, the Emperor of Mongolia; they did not return to Italy until Polo was about fifteen years old. The elder Polos had been instructed by the Khan to solicit the Pope for Christian missionaries to be escorted back to the Emperor's court. The Polos were forced to wait until 1271 for a new pope, Gregory X, to be elected, due to the failure of the cardinals to name a successor to Pope Clement IV following his death in 1268. Polo, now about seventeen years old, accompanied his father and uncle to Mongolia following the trio's presentation of the Khan's request to Pope Gregory X. After reaching the Khan's court and being employed in his service for a number of years, the Polos desired to return to Italy. The Khan was unwilling to release the merchants from his service, but complied with their request when they agreed to travel to Persia to escort a princess betrothed to the Khan's grand-nephew. The Polos completed their mission and then began their journey home, arriving in Venice in 1295 after a twenty-four-year absence. Soon after his return, Polo was appointed to command a ship in the war between the city-states of Venice and Genoa. His fleet was defeated and he arrived in Genoa as a political prisoner on October 16, 1298. Polo was released from prison in July of 1299. He lived in Venice until his death at the age of seventy.

While he was in prison, Polo had dictated his account of his travels to a fellow prisoner, Rustichello. Scholars believe that Polo's original manuscript was translated, copied, and widely circulated following his release from prison in 1299. The language of the original manuscript is unknown and a topic of much debate. In 1320, Pipino made a Latin translation of Polo's Travels from a version written in an Italian dialect, implying that this dialect version was Polo's original. Giovanni Battista Ramusio, an Italian geographer whose edition of Polo's work was published in 1559 in a collection of travel accounts known as Navigationi et viaggi, believed that the original manuscript was written in Latin. Others have maintained that Polo's work was written in French or Franco-Italian. Another source of contention among critics regards the role played by Rustichello in the writing of Travels. Some critics argue that Rustichello copied a draft already completed by Polo, or transcribed the work as Polo dictated it. Others believe that Rustichello served as a collaborator and editor, rewording Polo's phrasing and adding commentary of his own. The manuscript regarded by many critics as the most complete is a French version known as fr. 1116, published by the French Geographic Society in 1824.
Some critics have contended that fr. 1116 is a true transcript of Polo's dictation to Rustichello, but other scholars such as N. M. Penzer have argued that it does not represent a direct copy of Polo's work, asserting that another manuscript (referred to by Polian scholars as Z) may antedate fr. 1116. Other groups of Polian manuscripts studied for their authenticity and their relation to the original manuscript include the Grégoire version, which critics have suggested is perhaps an elaborated version of fr. 1116; the Tuscan Recension, an early fourteenth-century Tuscan translation of a Franco-Italian version of the original manuscript; and the Venetian Recension, a group of over eighty manuscripts which have been translated into the Venetian dialect. Travels was first translated into English by John Frampton in 1579. In the nineteenth century, scholars such as William Marsden, Henry Yule, and Luigi Benedetto began to publish revisions of the work that utilized information from several manuscripts to produce a more comprehensive edition of Travels. Since the original manuscript of Travels has never been recovered, the search for the version most directly descended from it continues. Polo's The Travels of Marco Polo, his first and only known work, provides readers with a detailed description of late thirteenth-century Asia. The work includes an account of Nicolo's and Maffeo's first journey to the residence of Kublai Khan; geographical descriptions of the countries between the Black Sea, the China Sea, and the Indian Ocean; and historical narratives about the Mongolian Empire's rise and expansion. Polo's Travels also relates the author's personal adventures and his association with Kublai Khan. Polo's tone throughout the narration is that of a commercial traveller reporting what he has seen and heard. He employs the same straightforward style in discussing his own experiences as he does when he relates hearsay, which he identifies as such. Polo focused his observations on aspects such as trade, political and military structures, religious customs relating to marriage and burial of the dead, and the architecture and layout of cities. His matter-of-fact tone in the narrative emphasizes the presentation of facts over the discussion of theories or ideas. Polo's first critics, the friends and relatives to whom he verbally related his journey, refused to believe what they considered to be outrageous exaggerations or pure fiction. Yet Polo's story was appealing for its entertainment value and was rapidly copied and distributed following its initial transcription. His account did not gain credibility until after his death, when further exploration proved many of his claims. Some modern critics have faulted Polo for omitting certain subjects from the narrative: for example, Polo never mentioned tea, the practice of binding women's feet, or the Great Wall, all of which were unheard of in Europe. Polo's defenders have countered that since the merchant had lived in Mongolia for twenty-four years, subjects that would seem strange or exotic to Europeans had become commonplace in Polo's life. Others have contended that such omissions could also have been made consciously or accidentally by translators of the work. Travels is often criticized on stylistic grounds as well, for instance for shifting back and forth between first and third person narration, but scholars attribute many such faults to the numerous times the work has been translated and copied. 
Although many critics assess Travels as simply a merchant's pragmatic account of his stay in the East, some, like Mary Campbell, maintain that the work offers the authority of first-hand experience and argue that its value extends beyond providing enjoyment through vicarious experience in that it transforms the myth of the East into reality.

The Travels of Marco Polo (translated by John Frampton) 1579
The Travels of Marco Polo, the Venetian (translated and edited by William Marsden) 1818
The Book of Ser Marco Polo (translated and edited by Henry Yule) 1871
The Travels of Marco Polo (translated by Aldo Ricci from the Italian edition by L. F. Benedetto) 1931
Marco Polo: The Description of the World (translated and edited by A. C. Moule and P. Pelliot) 1938
The Adventures of Marco Polo (translated by Richard J. Walsh) 1948
The Travels of Marco Polo (translated by Robert Latham) 1958
The Travels of Marco Polo (translated by Teresa Waugh from the Italian edition by Maria Bellonci) 1984

SOURCE: "The Epistle Dedicatorie," in The Travels of Marco Polo, edited by N. M. Penzer, translated by John Frampton, The Argonaut Press, 1929, pp. 1-2.

[In the following dedication to his 1579 translation of The Travels of Marco Polo, Frampton states his reasons for committing the manuscript to print in English.]

To the right worshipfull Mr. Edward Dyar Esquire, Iohn Frampton wisheth prosperous health and felicitie. Having lying by mee in my chamber (righte Worshipful) a translation of the great voiage & lõg trauels of Paulus Venetus the Venetian, manye Merchauntes, Pilots, and Marriners, and others of dyuers degrees,...

SOURCE: "Marsden's Marco Polo," in The Quarterly Review, Vol. XXI, No. XLI, January-April, 1819, pp. 177-96.

[In the following review, the anonymous critic praises Marsden's edition of Polo's book, provides an overview of the author's life, and comments on the accuracy of the narrative.]

'It might have been expected,' Mr. Marsden says, 'that in ages past, a less tardy progress would have been made in doing justice to the intrinsic merits of a work (whatever were its defects as a composition) that first conveyed to Europeans a distinct idea of the empire of China, and, by shewing its situation together with that of Japan (before entirely unknown) in respect to the great...

SOURCE: An introduction to The Travels of Marco Polo, the Venetian, edited by Thomas Wright, translated by William Marsden, George Bell & Sons, 1890, pp. ix-xxviii.

[In Wright's 1854 introduction to his revision of William Marsden's translation of The Travels of Marco Polo, Wright offers an overview of Polo's travels and discusses the history of Polo's manuscript.]

So much has been written on the subject of the celebrated Venetian traveller of the middle ages, Marco Polo, and the authenticity and credibility of his relation have been so well established, that it is now quite unnecessary to enter into this part of the question; but the reader of the...

SOURCE: "Yule's Edition of Marco Polo," in The Edinburgh Review, Vol. CXXXV, No. CCLXXV, January, 1872, pp. 1-36.

[In the following excerpt, Rawlinson praises Yule's translation of Polo's book, noting that he blends several earlier texts in his edition in order to best present "what the author said, or would have desired to say."]

The publication of Colonel Yule's Marco Polo is an epoch in geographical literature.
Never before, perhaps, did a book of travels appear under such exceptionally favourable auspices; an editor of a fine taste and ripe experience, and possessed with a passion for curious medieval research, having found a publisher willing to gratify that...

SOURCE: "The Book of Marco Polo," in The Nation, New York, Vol. XXI, No. 530, August 26, 1875, pp. 135-37, 152-53.

[In the essay that follows, Marsh discusses Yule's edition of Polo's book and comments on the traveler's "reputation for veracity" as well as his collaboration with his fellow prisoner Rustichello, here called Rusticiano.]

When Marsden published his learned edition of the Travels of Marco Polo in 1818, it was supposed that he had so nearly exhausted all the possible sources of illustration of his author that future editors would find little or no matter for new commentaries. And when in 1865 Pauthier gave to the world a substantially authentic text for the...

SOURCE: "Marco Polo's Explorations and Their Influence upon Columbus," in The New England Magazine, Vol. VI, No. 6, August 1892, pp. 803-15.

[In the following excerpt, Margesson briefly comments on the influence Polo's narrative had on Christopher Columbus.]

While Columbus never directly mentions Polo, his hopes and fancies and the deeds of his late years are wholly incomprehensible if he had no acquaintance with the writings of the great Venetian. In a Latin version of Marco Polo, printed at Antwerp about 1485, preserved in the Columbina at Seville, there are marginal notes in the handwriting of Columbus, and he may have become familiar with the work while living in...

SOURCE: An introduction to Dawn of Modern Geography: A History of Exploration and Geographical Science, Vol. III, Oxford at the Clarendon Press, 1906, pp. 1-14.

[In the following excerpt, Beazley provides an overview of the surge in geographic exploration that occurred from the mid-thirteenth to the early years of the fifteenth century—providing context for Polo's explorations.]

Our conquest of the world we live in has a long history; in that history there are many important epochs, eras in which a vital advance was made, wherein the whole course of events was modified; but among such epochs there are few of greater importance, of deeper suggestiveness, and of more...

SOURCE: An introduction to The Travels of Marco Polo, edited by N. M. Penzer, translated by John Frampton, The Argonaut Press, 1929, pp. xi-lx.

[In the following excerpt, Penzer provides a detailed analysis of the history of the Polian manuscripts.]

The existence of an Elizabethan translation of the Travels of Marco Polo will probably come as a surprise to the majority of readers. This is not to be wondered at when we consider that only three copies of the work in question are known to exist, and that it has never been reprinted. The very rarity of the book would be of itself sufficient excuse for reprinting it, but in the present case there are other...

SOURCE: "Marco Polo and His Book," in Proceedings of the British Academy, Vol. XX, 1934, pp. 181-201.

[In the following excerpt from a lecture delivered before the British Academy, Ross gives a brief account of Polo's journey and his narrative, and introduces several new theories regarding Polo's manuscript.]
The outstanding geographical event of the thirteenth century was the discovery of the overland route to the Far East. The silk of China had long been known to the West, but the route by which it travelled was unknown, for European merchants had not ventured beyond certain Asiatic ports, whither the silk, like other Oriental wares, was conveyed by caravan. ...

SOURCE: "The 'Lost' Toledo Manuscript of Marco Polo," in Speculum, Vol. XII, No. 4, October, 1937, pp. 458-63.

[In the following essay, Herriott discusses the superiority of a fifteenth-century Polian manuscript believed to have been lost.]

In 1559 the first attempt at a critical edition of Marco Polo appeared in Venice in a volume entitled Secondo volume delle Navigation et Viaggi nel quale si contengono l'Historia delle cose de Tartari, et diuersi fatti de loro Imperatori, descritta da M. Marco Polo Gentilhuomo Venetiano, et da Haiton Armeno. The first volume of this collection of travels had been published in 1550, and the third volume in 1556. The editor of the ...

SOURCE: "The Immortal Marco," in The New Statesman & Nation, Vol. XVI, No. 400, October 22, 1938, pp. 606-07.

[In the following essay, Power discusses Polo's popular and literary reputation, arguing that his work is "a masterpiece of reporting."]

I once knew a master at a famous public school (which shall be nameless) who was under the impression that Marco Polo was a kind of game. I did not question his qualifications for imparting culture to the young, for he had in his day been a noted blue and, as the saying goes, first things come first. But I have been reminded of him by the almost simultaneous appearance of the first two volumes of a magnificent edition of ...

SOURCE: "The Literary Precursors," in Marco Polo's Precursors, The Johns Hopkins Press, 1943, pp. 1-15.

[In the following essay, Olschki explores the influence of the poetic history of Alexander the Great on Polo's book.]

Until about the middle of the thirteenth century, when the first missionaries set out "ad Tartaros," there prevailed in the Western world a profound and persistent ignorance of Central and Eastern Asia, an ignorance partially mitigated by a few vague and generic notions in which remote reminiscences of distant places and peoples were mingled with old poetic and mythical fables. The Tartar invasion of Eastern and Central Europe in 1241 did not alter or ...

SOURCE: An introduction to Masterworks of Travel and Exploration: Digests of 13 Great Classics, edited by Richard D. Mallery, Doubleday & Company, Inc., 1948, pp. 3-12.

[In the following excerpt, Mallery discusses the appeal of Polo's The Book of Marco Polo in the context of the travel narrative genre.]

Travel narratives, through the ages, reflect the character and predilections of the era in which they are composed. Very often they help to determine the special character of the age. They appeal, of course, primarily to that sense of wonder which is found, to a greater or less extent, in all periods. What we know of the fascination exerted upon young and old ...

SOURCE: An introduction to The Travels of Marco Polo, translated by Ronald Latham, Penguin Books, 1958, pp. vii-xxix.
[In the following excerpt, Latham examines Rusticello's contribution to Polo's book and asserts that, while Polo's observations in other fields tend to be conservative, his remarks on the "human geography" of the places he visited are outstanding.]

The book most familiar to English readers as The Travels of Marco Polo was called in the prologue that introduced it to the reading public at the end of the thirteenth century a Description of the World (Divisament dou Monde). It was in fact a description of a surprisingly large part of the world—from the ...

SOURCE: "Politics and Religion in Marco Polo's Asia," in Marco Polo's Asia: An Introduction to His "Description of the World" Called "Il milione," University of California Press, 1960, pp. 178-210.

[In the following essay, Olschki analyzes the accuracy of Polo's observations regarding Asian religion and politics in the thirteenth century.]

Marco Polo's intention of conferring upon his journey the character of a religious mission is immediately evident in the first part of his book. Ecclesiastical and pious motives abound, from the moment when the three Venetians procured some oil from the lamp of the Holy Sepulcher in Jerusalem and departed with the Pope's blessing ...

SOURCE: "Epilogue," in Marco Polo, Venetian Adventurer, University of Oklahoma Press, 1967, pp. 233-64.

[In the following excerpt, Hart examines the impact of Polo's book on the sciences of geography and cartography.]

Messer Marco Polo's reputation for veracity as an author suffered greatly during his lifetime, for his contemporaries (with very few exceptions) could not and did not accept his book seriously. Their ignorance and bigotry, their belief in and dependence on the ecclesiastical pseudogeography of the day, their preconceived ideas of the unvisited parts of the earth, as well as the inherited legends and utter nonsense to which the medieval mind clung with a ...

SOURCE: "Merchant and Missionary Travels," in The Witness and the Other World: Exotic European Travel Writing, 400-1600, Cornell, 1988, pp. 87-121.

[In the following excerpt, Campbell discusses methods of description and narration employed by Polo, suggesting that "the being" that Polo has given to the East in his book "is the body of the West's desire."]

In the works of Marco Polo and the Franciscan friar William of Rubruck, the experiencing narrator born and bred in the pilgrimage accounts meets the fabulous and relatively unprescribed East of Wonders [of the East] and the Alexander romances. One might expect this encounter between the eyewitness and ...
http://www.enotes.com/topics/marco-polo/critical-essays
4
Pronunciation of English ⟨th⟩

In English, the digraph ⟨th⟩ represents in most cases one of two different phonemes: the voiced dental fricative /ð/ (as in this) and the voiceless dental fricative /θ/ (thing). More rarely, it can stand for /t/ (Thailand, Thames) or, in some dialects, even the cluster /tθ/ (eighth). It can also be a sequence rather than a digraph, as in the /t.h/ of lighthouse.

Phonetic realization

In standard English, the phonetic realization of the dental fricative phonemes shows less variation than for many other English consonants. Both are pronounced either interdentally, with the blade of the tongue resting against the lower part of the back of the upper teeth and the tip protruding slightly, or alternatively with the tip of the tongue against the back of the upper teeth. The interdental position might also be described as "apico-" or "lamino-dental". These two positions may be free variants, but for some speakers they are complementary allophones, the position behind the teeth being used when the dental fricative stands in proximity to an alveolar fricative, as in clothes (/ðz/) or myths (/θs/). Lip configuration may vary depending on phonetic context. The vocal folds are abducted. The velopharyngeal port is closed. Air forced between the tongue surface and the cutting edge of the upper teeth (interdental) or the inside surface of the teeth (dental) creates audible frictional turbulence.

The difference between /θ/ and /ð/ is normally described as a voiceless-voiced contrast, as this is the aspect native speakers are most aware of. However, the two phonemes are also distinguished by other phonetic markers. There is a difference of energy (see: fortis and lenis), the fortis /θ/ being pronounced with more muscular tension than the lenis /ð/. Also, /θ/ is more strongly aspirated than /ð/, as can be demonstrated by holding a hand a few centimeters in front of the mouth and noticing the differing force of the puff of air created by the articulatory process.

As with many English consonants, a process of assimilation can result in the substitution of other speech sounds in certain phonetic environments. Most surprising to native speakers, who do this subconsciously, is the use of [n] and [l] as realisations of /ð/ in the following phrases:

- join the army: /ˈdʒɔɪn ðiː ˈɑːmi/ → [ˈdʒɔɪn niː ˈɑːmi]
- fail the test: /feɪl ðə ˈtɛst/ → [feɪl lə ˈtɛst]

/θ/ and /ð/ can also be lost through elision. In rapid speech, sixths may be pronounced like six. Them may be contracted to 'em, and in this case the contraction is often indicated in writing.

- In some areas such as London and northern New Zealand, and in some dialects including African American Vernacular English, many people realise the phonemes /θ/ and /ð/ as [f] and [v], respectively. Although traditionally stigmatised as typical of a Cockney accent, this pronunciation is fairly widespread, especially when immediately surrounded by other fricatives for ease of pronunciation, and has recently been an increasingly noticeable feature of the Estuary English accent of South East England. It has in at least one case been transferred into standard English as a neologism: a bovver boy is a thug, a "boy" who likes "bother" (fights). Joe Brown and his Bruvvers was a pop group of the 1960s.
The song "Fings ain't wot they used t'be" was the title song of a 1959 Cockney comedy. Similarly, a New Zealander from the northernmost parts of the country might state that he or she is from "Norfland". - Note that at least in Cockney, word-initial /ð/ (as opposed to its voiceless counterpart /θ/) can never be labiodental. Instead, it is realized as any of [ð, ð̞, d, l, ʔ], or is dropped altogether. - Many speakers of African American Vernacular English, Caribbean English, Liberian English, Nigerian English, Philadelphia English, and Philippine English (along with other Asian English varieties) pronounce the fricatives /θ, ð/ as alveolar stops [t, d]. Similarly but still distinctly, many speakers of New York City English, Chicago English, Boston English, Indian English, Newfoundland English, and Hiberno-English use the dental stops [t̪, d̪] (typically distinct from alveolar [t, d]) instead of, or in free variation with, [θ, ð]. - In Cockney, the th-stopping may occur in case of word-initial /ð/ (but not its voiceless counterpart /θ/). - In rarer or older varieties of African American Vernacular English, /θ/ may be pronounced [s] after a vowel and before another consonant, as in bathroom [ˈbæsɹum]. - Th-alveolarization is a process that occurs in some African varieties of English where the dental fricatives /θ, ð/ merge with the alveolar fricatives /s, z/. It is an example of assibilation. - It is often parodied as ubiquitous to French- and German-speaking learners of English, but is widespread among many foreign learners of English, because the dental fricative "th" sounds are not very common among world languages. - In many varieties of Scottish English, /θ/ becomes [h] word initially and intervocalically. It is a stage in the process of lenition. - Th-debuccalization occurs mainly in Glasgow and across the Central Belt. A common example is [hɪŋk] for think. This feature is becoming more common in these places over time, but is still variable. In word final position, [θ] is used, as in standard English. - The existence of local [h] for /θ/ in Glasgow complicates the process of th-fronting there, a process which gives [f] for historical /θ/. Unlike in the other dialects with th-fronting, where [f] solely varies with [θ], in Glasgow, the introduction of th-fronting there creates a three-way variant system of [h], [f] and [θ]. - Use of [θ] marks the local educated norms (the regional standard), while use of [h] and [f] instead mark the local non-standard norms. [h] is well known in Glasgow as a vernacular variant of /θ/ when it occurs word-initially and intervocalically, while [f] has only recently risen above the level of social consciousness. - Given that th-fronting is a relatively recent innovation in Glasgow, it was expected that linguists might find evidence for lexical diffusion for [f] and the results found from Glasgow speakers confirm this. The existing and particular lexical distribution of th-debuccalization imposes special constraints on the progress of th-fronting in Glasgow. - In accents with th-debuccalization, the cluster /θr/ becomes [hr] giving these dialects a consonant cluster that doesn't occur in other dialects. The replacement of /θr/ with [hr] leads to pronunciations like: - three - [hri] - throw - [hro] - through, threw - [hrʉ] - thrash - [hraʃ] - thresh - [hrɛʃ] - thrown, throne - [hron] - thread - [hrɛd] - threat - [hrɛt] Children generally learn the less marked phonemes of their native language before the more marked ones. 
In the case of English-speaking children, /θ/ and /ð/ are often among the last phonemes to be learnt, frequently not being mastered before the age of five. Prior to this age, many children substitute the sounds [f] and [v] respectively. For small children, fought and thought are therefore homophones. As British and American children begin school at age four and five respectively, this means that many are learning to read and write before they have sorted out these sounds, and the infantile pronunciation is frequently reflected in their spelling errors: ve fing for the thing.

Children with a lisp, however, have trouble distinguishing /θ/ and /ð/ from /s/ and /z/ respectively in speech, using a single /θ/ or /ð/ pronunciation for both, and may never master the correct sounds without speech therapy. The lisp is a common speech impediment in English.

Foreign learners may have parallel problems. In English popular culture the substitution of /z/ for /ð/ is a common way of parodying a French accent, but in fact learners from very many cultural backgrounds have difficulties with English dental fricatives, usually caused by interference with either sibilants or stops. Words with a dental fricative adjacent to an alveolar sibilant, such as clothes, truths, fifths, sixths, anesthetic, etc., are commonly very difficult for foreign learners to pronounce. A popular advertisement for the Berlitz language school plays on the difficulties Germans may have with dental fricatives.

Phonology and distribution

In modern English, /θ/ and /ð/ bear a phonemic relationship to each other, as is demonstrated by the presence of a small number of minimal pairs: thigh:thy, ether:either, teeth:teethe. Thus they are distinct phonemes (units of sound, differences in which can affect meaning), as opposed to allophones (different pronunciations of a phoneme having no effect on meaning). They are distinguished from the neighbouring labiodental fricatives, sibilants and alveolar stops by such minimal pairs as thought:fought/sought/taught and then:Venn/Zen/den.

The vast majority of words in English with ⟨th⟩ have /θ/, and almost all newly created words do. However, the constant recurrence of the function words, particularly the, means that /ð/ is nevertheless more frequent in actual use. The distribution pattern may be summed up in the following rule of thumb, which is valid in most cases: in initial position we use /θ/ except in certain function words; in medial position we use /ð/ except for certain foreign loan words; and in final position we use /θ/ except in certain verbs. A more detailed explanation follows; a short code sketch after these lists restates the rule of thumb.

Initial position

- Almost all words beginning with a dental fricative have /θ/.
- A small number of common function words (the Middle English anomalies mentioned below) begin with /ð/. The words in this group are:
  - 1 definite article: the
  - 4 demonstratives: this, that, these, those
  - 2 personal pronouns each with multiple forms: thou, thee, thy, thine, thyself; they, them, their, theirs, themselves, themself
  - 7 adverbs and conjunctions: there, then, than, thus, though, thence, thither (though in America thence and thither may be pronounced with initial /θ/)
  - Various compound adverbs based on the above words: therefore, thereupon, thereby, thereafter, thenceforth, etc.
- A few words have initial ⟨th⟩ for /t/ (e.g. Thomas): see below.

Medial position

- Most native words with medial ⟨th⟩ have /ð/.
  - Between vowels: heathen, fathom; and the frequent combination -ther-: bother, brother, dither, either, father, Heather, lather, mother, other, rather, slither, southern, together, weather, whether, wither, smithereens; Caruthers, Gaithersburg, Netherlands, Witherspoon, and similar compound names where the first component ends in '-ther' or '-thers'. But Rutherford has either /ð/ or /θ/.
  - Preceded by /r/: Worthington, farthing, farther, further, northern.
  - Followed by /r/: brethren.
- A few native words have medial /θ/:
  - The adjective suffix -y normally leaves terminal /θ/ unchanged: earthy, healthy, pithy, stealthy, wealthy; but worthy and swarthy have /ð/.
  - Compound words in which the first element ends or the second element begins with ⟨th⟩ frequently have /θ/, as these elements would in isolation: bathroom, Southampton; anything, everything, nothing, something.
  - The only other native words with medial /θ/ would seem to be brothel and Ethel.
- Most loan words with medial ⟨th⟩ have /θ/.
  - From Greek: Agatha, anthem, atheist, Athens, athlete, cathedral, Catherine, Cathy, enthusiasm, ether, ethics, ethnic, lethal, lithium, mathematics, method, methyl, mythical, panther, pathetic, sympathy
  - From Latin: author, authority (though in Latin these had /t/; see below). Also names borrowed from or via Latin: Bertha, Gothic, Hathaway, Othello, Parthian
  - From Celtic languages: Arthur (Welsh has /θ/ medially: /ærθɨr/); Abernathy, Abernethy
  - From Hebrew: Ethan, Jonathan, Bethlehem, Bethany, leviathan, Bethel
  - From German: Luther, as an anglicized spelling pronunciation (see below).
- Loanwords with medial /ð/:
  - Greek words with the combination -thm-: algorithm, logarithm, rhythm. The word asthma may be pronounced /ˈæzðmə/ or /ˈæsθmə/, though here the ⟨th⟩ is nowadays usually silent.
- A few words have medial ⟨th⟩ for /t/ or /th/ (e.g. lighthouse): see below.

Final position

Nouns and adjectives:
- Nouns and adjectives ending in a dental fricative usually have /θ/: bath, breath, cloth, froth, health, hearth, loath, sheath, sooth, tooth/teeth, width, wreath.
- Exceptions are usually marked in the spelling with -⟨the⟩: tithe, lathe, lithe with /ð/.
- blithe can have either /ð/ or /θ/. booth has /ð/ in England but /θ/ in America.

Verbs:
- Verbs ending in a dental fricative usually have /ð/, and are frequently spelled -⟨the⟩: bathe, breathe, clothe, loathe, scathe, scythe, seethe, sheathe, soothe, teethe, tithe, wreathe, writhe. Spelled without ⟨e⟩: mouth (verb) nevertheless has /ð/.
- froth has /θ/ whether as a noun or as a verb.
- The verb endings -s, -ing, -ed do not change the pronunciation of a ⟨th⟩ in the final position in the stem: bathe has /ð/, therefore so do bathed, bathing, bathes; frothing has /θ/. Likewise clothing used as a noun, scathing as an adjective, etc.
- The archaic word ending "-eth" has /θ/.
- with has either /θ/ or /ð/ (see below), as do its compounds: within, without, outwith, withdraw, withhold, withstand, wherewithal, etc.

Plural ⟨s⟩ after ⟨th⟩ may be realised as either /ðz/ or /θs/:
- Some plural nouns ending in ⟨ths⟩, with a preceding vowel, have /ðz/, although the singulars always have /θ/; however a variant in /θs/ will be found for many of these: baths, mouths, oaths, paths, sheaths, truths, wreaths, youths exist in both varieties; clothes always has /ðz/ (if not pronounced /kloʊz/, the traditional pronunciation).
- Others have only /θs/: azimuths, breaths, cloths, deaths, faiths, Goths, growths, mammoths, moths, myths, smiths, sloths, zeniths, etc. This includes all words in 'th' preceded by a consonant (earths, hearths, lengths, months, widths, etc.) and all numeric words, whether preceded by vowel or consonant (fourths, fifths, sixths, sevenths, eighths /eɪtθs/, twelfths, fifteenths, twentieths, hundredths /hʌndrədθs/, thousandths).
- Booth has /ð/ in the singular and hence /ðz/ in the plural for most speakers in England. In American English it has /θ/ in the singular and /θs/ or /ðz/ in the plural. This pronunciation also prevails in Scotland.

In pairs of related words, an alternation between /θ/ and /ð/ is possible, which may be thought of as a kind of consonant mutation. Typically [θ] appears in the singular of a noun, [ð] in the plural and in the related verb: cloth /θ/, clothes /ð/, to clothe /ð/. This is directly comparable to the /s/-/z/ or /f/-/v/ alternation in house, houses or wolf, wolves. It goes back to the allophonic variation in Old English (see below), where it was possible for ⟨þ⟩ to be in final position and thus voiceless in the basic form of a word, but in medial position and voiced in a related form. The loss of inflections then brought the voiced medial consonant to the end of the word. Often a remnant of the old inflection can be seen in the spelling in the form of a silent ⟨e⟩, which may be thought of synchronically as a marker of the voicing.
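The rule of thumb can be made concrete as a toy classifier. The sketch below is not part of the article and is only a simplified illustration: the function name and the abridged exception lists are ours, and compounds, plurals in /ðz/ and most loanwords are deliberately ignored.

```python
# Toy guess at the sound of <th> in a single word, following the
# distribution rules above. Exception lists are abridged from this
# section; compounds, plurals and many loanwords are not handled.

VOICED, VOICELESS = "/ð/", "/θ/"

FUNCTION_WORDS = {            # Middle English anomalies: initial /ð/
    "the", "this", "that", "these", "those", "thou", "thee", "thy",
    "thine", "they", "them", "their", "theirs", "there", "then",
    "than", "thus", "though", "thence", "thither",
}
MEDIAL_VOICELESS_LOANS = {    # mostly Greek and Latin borrowings
    "anthem", "atheist", "athlete", "cathedral", "ether", "lethal",
    "mathematics", "method", "mythical", "sympathy", "author",
}

def th_sound(word: str) -> str:
    """Guess /θ/ or /ð/ for the digraph <th> in one word."""
    w = word.lower()
    if "th" not in w:
        raise ValueError(f"no <th> in {word!r}")
    if w in FUNCTION_WORDS:
        return VOICED          # the, they, then, ...
    if w.startswith("th"):
        return VOICELESS       # other initial <th>: thin, thought, ...
    if w.endswith("the"):
        return VOICED          # silent -e marks voicing: bathe, lathe, tithe
    if w.endswith("th"):
        return VOICELESS       # final position: bath, tooth, fourth, ...
    if w in MEDIAL_VOICELESS_LOANS:
        return VOICELESS       # loans keep medial /θ/: method, author, ...
    return VOICED              # native medial default: father, weather, ...

for w in ("the", "thin", "bathe", "tooth", "method", "weather"):
    print(w, th_sound(w))
```

Run on the six sample words, the sketch reproduces the pronunciations given in the lists above; words like worthy or asthma would of course need further exception handling.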
Regional differences in distribution

The above discussion follows Daniel Jones' English Pronouncing Dictionary, an authority on standard British English, and Webster's New World College Dictionary, an authority on American English. Usage appears much the same between the two. Regional variation within standard English includes the following:

- The final consonant in with is pronounced /θ/ (its original pronunciation) in northern Britain, but /ð/ in the south, though some speakers of Southern British English use /θ/ before a voiceless consonant and /ð/ before a voiced one. A 1993 postal poll of American English speakers showed that 84% use /θ/, while 16% have /ð/ (Shitara 1993). (The variant with /ð/ is presumably a sandhi development.)
- In Scottish English, /θ/ is found in many words which have /ð/ further south. The phenomenon of nouns terminating in /θ/ taking plurals in /ðz/ does not occur in the north. Thus the following have /θs/: baths, mouths (noun), truths. Scottish English does have the termination /ðz/ in verb forms, however, such as bathes, mouths (verb), loathes, and also in the noun clothes, which is a special case, as it has to be clearly distinguished from cloths. Scottish English also has /θ/ in with, booth, thence etc., and the Scottish pronunciation of thither, almost uniquely, has both /θ/ and /ð/ in the same word.

Where there is an American-British difference, the North of Britain generally agrees with America on this phoneme pair.

History of the English phonemes

Proto-Indo-European (PIE) had no dental fricatives, but these evolved in the earliest stages of the Germanic languages. In Proto-Germanic, /ð/ and /θ/ were separate phonemes, usually represented in Germanic studies by the symbols *đ and *þ.

- *đ (/ð/) was derived by Grimm's law from PIE *dʰ or by Verner's law (i.e. when immediately following an unstressed syllable) from PIE *t.
- *þ (/θ/) was derived by Grimm's law from PIE *t.

In West Germanic, the Proto-Germanic *đ shifted further to *d, leaving only one dental fricative phoneme. However, a new [ð] appeared as an allophone of /θ/ in medial positions by assimilation of the voicing of the surrounding vowels.
[θ] remained in initial and presumably in final positions (though this is uncertain, as later terminal devoicing would in any case have eliminated the evidence of final [ð]). This West Germanic phoneme, complete with its distribution of allophones, survived into Old English. In German and Dutch, it shifted to a /d/, the allophonic distinction simply being lost. In German, West Germanic *d shifted to /t/ in what may be thought of as a chain shift, but in Dutch, *þ, *đ and *d merged into a single /d/. The whole complex of Germanic dentals, and the place of the fricatives within it, can be summed up in this table (cells spanning several rows in the original are repeated here):

| PIE | Proto-Germanic | West Germanic | Old English | German | Dutch | Notes |
|---|---|---|---|---|---|---|
| *t | *þ | *[þ] | [θ] | /d/ | /d/ | Original *t in initial position, or in final position after a stressed vowel |
| *t | *þ | *[đ] | [ð] | /d/ | /d/ | Original *t in medial position after a stressed vowel |
| *t | *đ | *d | /d/ | /t/ | /d/ | Original *t after an unstressed vowel |
| *dʰ | *đ | *d | /d/ | /t/ | /d/ | Original *dʰ in all positions |
| *d | *t | *t | /t/ | /s/ or /ts/ | /t/ | Original *d in all positions |

Thus English inherited a phoneme /θ/ in positions where other West Germanic languages have /d/ and most other Indo-European languages have /t/: English three, German drei, Latin tres.

In Old English, the phoneme /θ/, like all fricative phonemes in the language, had two allophones, one voiced and one voiceless, which were distributed regularly according to phonetic environment.

- [ð] (like [v] and [z]) was used between two voiced sounds (either vowels or voiced consonants).
- [θ] (like [f] and [s]) was spoken in initial and final position, and also medially if adjacent to another unvoiced consonant.

Development up to Modern English

The most important development on the way to modern English was the investing of the existing distinction between [ð] and [θ] with phonemic value. Minimal pairs, and hence the phonological independence of the two phones, developed as a result of three main processes.

- In early Middle English times, a group of very common function words beginning with /θ/ (the, they, there, etc.) came to be pronounced with /ð/ instead of /θ/. Possibly this was a sandhi development; as these words are frequently found in unstressed positions, they can sometimes appear to run on from the preceding word, which may have resulted in the dental fricative being treated as though it were word-internal.
- English has borrowed many words from Greek, including a vast number of scientific terms. Where the original Greek had the letter ⟨θ⟩ (theta), English retained the Late Greek pronunciation /θ/, regardless of phonetic environment (thermometer, methyl, etc.). In a few words of Indian origin, such as thug, ⟨th⟩ represents Sanskrit थ (/tʰ/) or ठ (/ʈʰ/), usually pronounced /θ/ (but occasionally /t/) in English.
- English has lost its original verb inflections. When the stem of a verb ends with a dental fricative, this was usually followed by a vowel in Old English, and was therefore voiced. It is still voiced in modern English, even though the verb inflection has disappeared, leaving the /ð/ at the end of the word. Examples are to bathe, to mouth, to breathe.

Other changes which affected these phonemes included a shift /d/ → /ð/ when followed by the unstressed suffix -er. Thus Old English fæder became modern English father; likewise mother, gather, hither, together, weather (from mōdor, gaderian, hider, tōgædere, weder).
In a reverse process, Old English byrþen and morþor or myþra became burden and murder (compare the obsolete words burthen and murther). Dialectally, the alternation between /d/ and /ð/ sometimes extends to other words, as bladder, ladder, solder with /ð/. On the other hand, some dialects retain original d, and extend it to other words, as brother, further, rather. The Welsh name Llewelyn appears in older English texts as Thlewelyn (Rolls of Parliament (Rotuli parliamentorum) I. 463/1, King Edward I or II), and Fluellen (Shakespeare, Henry V). Th also occurs dialectally for wh, as in thirl, thortleberry, thorl, for whirl, whortleberry, whorl. Conversely, Scots has whaing, whang, white, whittle, for thwaing, thwang, thwite, thwittle.

The old verb inflection -eth (Old English -eþ) was replaced by -s (he singeth → he sings), not a sound shift but a completely new inflection, the origin of which is still being debated. Possibilities include a "de-lisping" (since s is easier to pronounce there than th), or displacement by a nonstandard English dialect.

History of the digraph

⟨th⟩ for /θ/ and /ð/

Though English speakers take it for granted, the digraph ⟨th⟩ is in fact not an obvious combination for a dental fricative. The origins of this have to do with developments in Greek. Proto-Indo-European had an aspirated /dʱ/ which came into Greek as /tʰ/, spelled with the letter theta. In the Greek of Homer and Plato this was still pronounced /tʰ/, and therefore when Greek words were borrowed into Latin, theta was transcribed with ⟨th⟩. Since /tʰ/ sounds like /t/ with a following puff of air, ⟨th⟩ was the logical spelling in the Latin alphabet.

By the time of New Testament Greek (koiné), however, the aspirated stop had shifted to a fricative: /tʰ/ → /θ/. Thus theta came to have the sound which it still has in Modern Greek, and which it represents in the IPA. From a Latin perspective, the established digraph ⟨th⟩ now represented the voiceless fricative /θ/, and was used thus for English by French-speaking scribes after the Norman Conquest, since they were unfamiliar with the Germanic graphemes ð (eth) and þ (thorn). Likewise, the spelling ⟨th⟩ was used for /θ/ in Old High German prior to the completion of the High German consonant shift, again by analogy with the way Latin represented the Greek sound. The history of the digraphs ⟨ph⟩ for /f/ and ⟨ch⟩ for Scots, Welsh or German /x/ is parallel.

⟨th⟩ for /t/

Since neither /tʰ/ nor /θ/ was a native sound in Latin, the tendency must have emerged early, and at the latest by medieval Latin, to substitute /t/. Thus in many modern languages, including French and German, the ⟨th⟩ digraph is used in Greek loan-words to represent an original /θ/, but is now pronounced /t/: examples are French théâtre, German Theater. In some cases, this etymological ⟨th⟩, which has no remaining significance for pronunciation, has been transferred to words in which there is no etymological justification for it. For example, German Tal ('valley', cognate with English dale) appears in many place-names with an archaic spelling Thal (contrast Neandertal and Neanderthal). The German family names Theuerkauf and Thürnagel are other examples. The German spelling reform of 1901 largely reversed these, but they remain in some proper nouns.

Examples of this are also to be found in English, perhaps influenced immediately by French. In some Middle English manuscripts, ⟨th⟩ appears for ⟨t⟩ or ⟨d⟩: tho 'to' or 'do', thyll till, whythe white, thede deed.
In Modern English we see it in Esther, Thomas, Thames, thyme, Witham (the town in Essex, not the river in Lincolnshire, which is pronounced with /ð/) and the old spelling of Satan as Sathan. In a small number of cases, this spelling later influenced the pronunciation: amaranth, amianthus and author have spelling pronunciations with /θ/, and some English speakers use /θ/ in Neanderthal.

⟨th⟩ for /th/

A few English compound words, such as lightheaded or hothouse, have the letter combination ⟨th⟩ split between the parts, though this is not a digraph. Here, the ⟨t⟩ and ⟨h⟩ are pronounced separately (light-headed) as a cluster of two consonants. Other examples are anthill, goatherd, lighthouse, outhouse, pothead; also in words formed with the suffix -hood: knighthood, and the similarly formed Afrikaans loanword apartheid. In a few place names ending in t+ham the t-h boundary has been lost and become a spelling pronunciation, for example Grantham.

See also
- English pronunciation
- Received Pronunciation
- Spelling pronunciation
- Non-native pronunciations of English
- English orthography

Notes
- Examples from Collins and Mees (2003), p. 103.
- In fact, some linguists see 'em as originally a separate word, a remnant of Old English hem; but as the apostrophe shows, it is perceived in modern English as a contraction. See Online Etymology Dictionary, "'em", retrieved 18 September 2006.
- Wright (1981), p. 137.
- Wells (1982), p. 329.
- Phonological Features of African American Vernacular English.
- The American Heritage Dictionary, 1969.
- Kenyon, John S.; Knott, Thomas A. (1953). A Pronouncing Dictionary of American English. Springfield, Mass.: Merriam-Webster. p. 87. ISBN 0-87779-047-7.

References
- Collins, Beverley; Mees, Inger M. (2003). Practical Phonetics and Phonology. Routledge. ISBN 0-415-26133-3. (2nd edn 2008.)
- Shitara, Yuko (1993). "A survey of American pronunciation preferences." Speech Hearing and Language 7: 201-32.
- Wells, John C. (1982). Accents of English 2: The British Isles. Cambridge: Cambridge University Press. ISBN 0-521-24224-X.
- Wright, Peter (1981). Cockney Dialect and Slang. London: B.T. Batsford Ltd.
https://en.wikipedia.org/wiki/Pronunciation_of_English_%E2%9F%A8th%E2%9F%A9
4.125
Strategies, ideas, and instructional guidelines for helping readers develop a deep understanding of the texts that they read.
- Grades: PreK–K, 1–2, 3–5, 6–8, 9–12

Presents a lesson for reviewing reading comprehension strategies; first-year or new teachers will have students apply those strategies toward composing an oral presentation.

The anchor text for my Cinderella Unit, the 1812 version of Cinderella by Jacob and Wilhelm Grimm, is challenging, but the content is engaging. I have found that students put more effort into reading challenging text if the topics are engaging. Fairy tales, originally meant for adults, intrigue middle school students. This post includes a download of a SMART Board predictogram activity.

A Socratic Seminar allows students to shine while deeply increasing comprehension. Learning about this methodology changed my perspective on teaching and also allowed me to secure a highly successful observation. Several videos, support tools, and a detailed lesson plan are included.

Howard Gardner suggests that intelligence encompasses several different components, one of which is music. I use music in my classroom to manage the day and to tap into the talents of those students who are high on the musical intelligence spectrum. One way to engage these students in reading is to use lyrics to teach the difference between the literal and beyond-literal meaning of texts.

Tips and Strategies

Get ideas from teachers and experts on how to deepen reading comprehension in your students. Help your students truly understand the content that they are reading with these helpful tips and strategies.

Even for upper-grade students, Dr. Seuss can help teach the fantastic power of symbolism while reading. Classic books develop a deeper meaning for us as we grow older and gain life experience -- older students can read his books with new eyes. Who would have figured that Yertle the Turtle represents Adolf Hitler? Discover lesson possibilities, book suggestions, photos, and anchor charts in this blog post.

When students read or listen to non-fiction, they must locate details that pertain to the main idea of the selection. Whenever we study a new unit in class, we rely on our prior knowledge and use focus questions as well as text features to help "set the purpose" for what we are preparing to read. This week's entry about locating the main idea and supporting details in a selection will help your students work on the most important skill in reading.

It is crucial that we expose our students to nonfiction texts as often as possible. This month I share resources for teaching nonfiction reading concepts, including posters, links to great Web sites and articles, printables, an exciting new way to make current events interactive, and much more!

Some critics claim that interactive whiteboards (IWBs) are glorified, expensive projectors. I suppose they are, if they are used as a presentation tool and not as a learning tool that requires student interaction. There are effective ways of implementing an IWB into reading and writing without a lot of time or technological skills.
http://www.scholastic.com/teachers/collection/reading-comprehension
4.15625
A phosphor, most generally, is a substance that exhibits the phenomenon of luminescence. Somewhat confusingly, this includes both phosphorescent materials, which show a slow decay in brightness (> 1 ms), and fluorescent materials, where the emission decay takes place over tens of nanoseconds. Phosphorescent materials are known for their use in radar screens and glow-in-the-dark toys, whereas fluorescent materials are common in cathode ray tube (CRT) and plasma video display screens, sensors, and white LEDs.

Phosphors are often transition metal compounds or rare earth compounds of various types. The most common uses of phosphors are in CRT displays and fluorescent lights. CRT phosphors were standardized beginning around World War II and designated by the letter "P" followed by a number.

Principles

A material can emit light either through incandescence, where all atoms radiate, or by luminescence, where only a small fraction of atoms, called emission centers or luminescence centers, emit light. In inorganic phosphors, these inhomogeneities in the crystal structure are usually created by the addition of a trace amount of dopants, impurities called activators. (In rare cases dislocations or other crystal defects can play the role of the impurity.) The wavelength emitted by the emission center is dependent on the atom itself, and on the surrounding crystal structure.

The scintillation process in inorganic materials is due to the electronic band structure found in the crystals. An incoming particle can excite an electron from the valence band to either the conduction band or the exciton band (located just below the conduction band and separated from the valence band by an energy gap). This leaves an associated hole behind in the valence band. Impurities create electronic levels in the forbidden gap. The excitons are loosely bound electron-hole pairs that wander through the crystal lattice until they are captured as a whole by impurity centers. The latter then rapidly de-excite by emitting scintillation light (fast component). In the case of inorganic scintillators, the activator impurities are typically chosen so that the emitted light is in the visible range or near-UV, where photomultipliers are effective. The holes associated with electrons in the conduction band are independent from the latter. Those holes and electrons are captured successively by impurity centers, exciting certain metastable states not accessible to the excitons. The delayed de-excitation of those metastable impurity states, slowed down by reliance on the low-probability forbidden mechanism, again results in light emission (slow component).
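The connection between an emission center's energy levels and the color it produces follows directly from the photon energy relation E = hc/λ. The short sketch below is an illustration added here (not part of the original article); the example wavelengths are peak emissions quoted later in the phosphor table.

```python
# Convert an emission wavelength to photon energy, E = h*c / lambda.
# h*c expressed in eV*nm so the arithmetic stays simple.

H_C_EV_NM = 1239.842  # Planck constant times speed of light, in eV*nm

def photon_energy_ev(wavelength_nm: float) -> float:
    """Photon energy in eV for an emission wavelength given in nm."""
    return H_C_EV_NM / wavelength_nm

# Peak wavelengths taken from the standard-phosphor table in this article.
for name, peak_nm in [("P1 Zn2SiO4:Mn (green)", 528.0),
                      ("P55 ZnS:Ag,Al (blue)", 450.0),
                      ("Y2O3:Eu(III) (red)", 611.0)]:
    print(f"{name}: {peak_nm} nm -> {photon_energy_ev(peak_nm):.2f} eV")
```

The results (roughly 2.0 to 2.8 eV across the visible range) show why the activator levels responsible for visible emission must sit a couple of electron-volts apart inside the host's forbidden gap.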
Many phosphors tend to lose efficiency gradually by several mechanisms. The activators can undergo a change of valence (usually oxidation), the crystal lattice degrades, atoms (often the activators) diffuse through the material, the surface undergoes chemical reactions with the environment with consequent loss of efficiency or buildup of a layer absorbing either the exciting or the radiated energy, and so on. The degradation of electroluminescent devices depends on the frequency of the driving current, the luminance level, and temperature; moisture impairs phosphor lifetime very noticeably as well. Harder, high-melting, water-insoluble materials display a lower tendency to lose luminescence under operation.

- BaMgAl10O17:Eu2+ (BAM), a plasma display phosphor, undergoes oxidation of the dopant during baking. Three mechanisms are involved: absorption of oxygen atoms into oxygen vacancies on the crystal surface, diffusion of Eu(II) along the conductive layer, and electron transfer from Eu(II) to adsorbed oxygen atoms, leading to formation of Eu(III) with corresponding loss of emissivity. A thin coating of aluminium phosphate or lanthanum(III) phosphate is effective in creating a barrier layer blocking access of oxygen to the BAM phosphor, at the cost of a reduction of phosphor efficiency. Addition of hydrogen, acting as a reducing agent, to the argon in plasma displays significantly extends the lifetime of the BAM:Eu2+ phosphor, by reducing the Eu(III) atoms back to Eu(II).
- Y2O3:Eu phosphors under electron bombardment in the presence of oxygen form a non-phosphorescent layer on the surface, where electron-hole pairs recombine nonradiatively via surface states.
- ZnS:Mn, used in AC thin-film electroluminescent (ACTFEL) devices, degrades mainly due to formation of deep-level traps by reaction of water molecules with the dopant; the traps act as centers for nonradiative recombination. The traps also damage the crystal lattice. Phosphor aging leads to decreased brightness and elevated threshold voltage.
- ZnS-based phosphors in CRTs and FEDs degrade by surface excitation, coulombic damage, build-up of electric charge, and thermal quenching. Electron-stimulated reactions of the surface are directly correlated to loss of brightness. The electrons dissociate impurities in the environment; the reactive oxygen species then attack the surface and form carbon monoxide and carbon dioxide with traces of carbon, and nonradiative zinc oxide and zinc sulfate on the surface, while the reactive hydrogen removes sulfur from the surface as hydrogen sulfide, forming a nonradiative layer of metallic zinc. Sulfur can also be removed as sulfur oxides.
- ZnS and CdS phosphors degrade by reduction of the metal ions by captured electrons. The M2+ ions are reduced to M+; two M+ then exchange an electron and become one M2+ and one neutral M atom. The reduced metal can be observed as a visible darkening of the phosphor layer. The darkening (and the brightness loss) is proportional to the phosphor's exposure to electrons, and can be observed on some CRT screens that displayed the same image (e.g. a terminal login screen) for prolonged periods.
- Europium(II)-doped alkaline earth aluminates degrade by formation of color centers.
- Y2SiO5:Ce3+ degrades by loss of luminescent Ce3+ ions.
- Zn2SiO4:Mn (P1) degrades by desorption of oxygen under electron bombardment.
- Oxide phosphors can degrade rapidly in the presence of fluoride ions remaining from incomplete removal of flux from phosphor synthesis.
- Loosely packed phosphors, e.g. when an excess of silica gel (formed from the potassium silicate binder) is present, have a tendency to overheat locally due to poor thermal conductivity. For example, InBO3:Tb3+ is subject to accelerated degradation at higher temperatures.

Materials

Phosphors are usually made from a suitable host material with an added activator. The best-known types are copper-activated zinc sulfide and silver-activated zinc sulfide (zinc sulfide silver). The host materials are typically oxides, nitrides and oxynitrides, sulfides, selenides, halides or silicates of zinc, cadmium, manganese, aluminium, silicon, or various rare earth metals. The activators prolong the emission time (afterglow).
In turn, other materials (such as nickel) can be used to quench the afterglow and shorten the decay part of the phosphor emission characteristics.

Many phosphor powders are produced in low-temperature processes, such as sol-gel, and usually require post-annealing at temperatures of ~1000 °C, which is undesirable for many applications. However, proper optimization of the growth process can avoid the need for annealing.

Phosphors used for fluorescent lamps require a multi-step production process, with details that vary depending on the particular phosphor. Bulk material must be milled to obtain a desired particle size range, since large particles produce a poor-quality lamp coating and small particles produce less light and degrade more quickly. During the firing of the phosphor, process conditions must be controlled to prevent oxidation of the phosphor activators or contamination from the process vessels. After milling, the phosphor may be washed to remove minor excesses of activator elements. Volatile elements must not be allowed to escape during processing. Lamp manufacturers have changed the composition of phosphors to eliminate some formerly used toxic elements, such as beryllium, cadmium, or thallium.

The commonly quoted parameters for phosphors are the wavelength of the emission maximum (in nanometers, or alternatively color temperature in kelvins for white blends), the peak width (in nanometers at 50% of intensity), and decay time (in seconds).

Applications

Phosphor layers provide most of the light produced by fluorescent lamps, and are also used to improve the balance of light produced by metal halide lamps. Various neon signs use phosphor layers to produce different colors of light. Electroluminescent displays found, for example, in aircraft instrument panels use a phosphor layer to produce glare-free illumination or as numeric and graphic display devices. White LED lamps consist of a blue or ultraviolet emitter with a phosphor coating that emits at longer wavelengths, giving a full spectrum of visible light.

Phosphor thermometry is a temperature measurement approach that uses the temperature dependence of certain phosphors. For this, a phosphor coating is applied to a surface of interest and, usually, the decay time is the emission parameter that indicates temperature. Because the illumination and detection optics can be situated remotely, the method may be used for moving surfaces such as high-speed motor surfaces. Also, phosphor may be applied to the end of an optical fiber as an optical analog of a thermocouple.

Glow-in-the-dark products

- Calcium sulfide with strontium sulfide, with bismuth as activator, (Ca,Sr)S:Bi, yields blue light with glow times up to 12 hours; red and orange are modifications of the zinc sulfide formula. Red color can be obtained from strontium sulfide.
- Zinc sulfide with about 5 ppm of a copper activator is the most common phosphor for glow-in-the-dark toys and items. It is also called GS phosphor.
- A mix of zinc sulfide and cadmium sulfide emits a color depending on their ratio; increasing the CdS content shifts the output color towards longer wavelengths. Its persistence ranges between 1 and 10 hours.
- Strontium aluminate activated by europium, SrAl2O4:Eu(II):Dy(III), is a newer material with higher brightness and significantly longer glow persistence; it produces green and aqua hues, where green gives the highest brightness and aqua the longest glow time. SrAl2O4:Eu:Dy is about 10 times brighter, 10 times longer glowing, and 10 times more expensive than ZnS:Cu.

The excitation wavelengths for strontium aluminate range from 200 to 450 nm. The wavelength for its green formulation is 520 nm, its blue-green version emits at 505 nm, and the blue one emits at 490 nm. Colors with longer wavelengths can be obtained from strontium aluminate as well, though at the price of some loss of brightness.

In these applications, the phosphor is directly added to the plastic used to mold the toys, or mixed with a binder for use as paints. ZnS:Cu phosphor is used in glow-in-the-dark cosmetic creams frequently used for Halloween make-ups. Generally, the persistence of the phosphor increases as the wavelength increases. See also lightstick for chemiluminescence-based glowing items.
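Quoted persistence and decay figures can be turned into a rough brightness-versus-time estimate if one assumes a single-exponential decay. That assumption is only an approximation (long-persistence aluminates in particular decay more slowly than exponentially at long times), and the evaluation points below are our own illustrative choices, not measured data.

```python
import math

# Rough afterglow model assuming single-exponential decay,
# I(t) = I0 * exp(-t / tau). Treat as an order-of-magnitude sketch only.

def tau_from_decay_to_10pct(t10_seconds: float) -> float:
    """Convert a 'decays to 10% in t10' spec to an exponential time constant."""
    return t10_seconds / math.log(10.0)

def relative_brightness(t_seconds: float, tau: float) -> float:
    """Fraction of initial brightness remaining after t seconds."""
    return math.exp(-t_seconds / tau)

# Example: P43 (Gd2O2S:Tb) is quoted later in this article as
# "1.5 ms decay to 10%"; the 2 ms evaluation point is arbitrary.
tau_p43 = tau_from_decay_to_10pct(1.5e-3)
print(f"tau = {tau_p43 * 1e3:.2f} ms")
print(f"brightness after 2 ms: {relative_brightness(2e-3, tau_p43):.1%}")
```

The same arithmetic underlies phosphor thermometry described above: there, the measured tau itself is the quantity of interest, since it varies with the surface temperature.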
Zinc sulfide phosphors are used with radioactive materials, where the phosphor is excited by alpha- and beta-decaying isotopes, to create luminescent paint for dials of watches and instruments (radium dials). Between 1913 and 1950, radium-228 and radium-226 were used to activate a phosphor made of silver-doped zinc sulfide (ZnS:Ag), which gave a greenish glow. The phosphor is not suitable for use in layers thicker than 25 mg/cm², as the self-absorption of the light then becomes a problem. Furthermore, zinc sulfide undergoes degradation of its crystal lattice structure, leading to gradual loss of brightness significantly faster than the depletion of radium. ZnS:Ag-coated spinthariscope screens were used by Ernest Rutherford in his experiments that discovered the atomic nucleus.

Electroluminescence can be exploited in light sources. Such sources typically emit from a large area, which makes them suitable for backlights of LCD displays. The excitation of the phosphor is usually achieved by application of a high-intensity electric field, usually at a suitable frequency. Current electroluminescent light sources tend to degrade with use, resulting in relatively short operating lifetimes. ZnS:Cu was the first formulation to successfully display electroluminescence, tested in 1936 by Georges Destriau in the Marie Curie laboratories in Paris. An indium tin oxide (ITO, also known under the trade name IndiGlo) composite is used in some Timex watches, though as the electrode material, not as a phosphor itself. "Lighttape" is another trade name of an electroluminescent material, used in electroluminescent light strips.

White light-emitting diodes are usually blue InGaN LEDs with a coating of a suitable material. Cerium(III)-doped YAG (YAG:Ce3+, or Y3Al5O12:Ce3+) is often used; it absorbs the light from the blue LED and emits in a broad range from greenish to reddish, with most of its output in yellow. This yellow emission combined with the remaining blue emission gives the "white" light, which can be adjusted in color temperature from warm (yellowish) to cold (blueish) white. The pale yellow emission of the Ce3+:YAG can be tuned by substituting the cerium with other rare earth elements such as terbium and gadolinium, and can even be further adjusted by substituting some or all of the aluminium in the YAG with gallium. However, this process is not one of phosphorescence. The yellow light is produced by a process known as scintillation, the complete absence of an afterglow being one of the characteristics of the process.
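The warm/cool trade-off can be pictured with a crude additive-mixing sketch: treat the LED's blue and the phosphor's yellow as fixed color primaries and vary the fraction of blue light the phosphor converts. The RGB triples below are rough stand-ins of our own choosing, not measured spectra; a real design would integrate the emission spectra against the CIE color-matching functions.

```python
# Crude additive mix of a blue LED and a yellow YAG:Ce phosphor in linear RGB.
# The two primaries are illustrative guesses, not colorimetric data.

BLUE_LED = (0.0, 0.1, 1.0)    # assumed linear-RGB of the ~450 nm pump
YAG_YELLOW = (1.0, 0.8, 0.1)  # assumed linear-RGB of the broad Ce:YAG band

def mix(converted: float) -> tuple:
    """converted = fraction of blue light absorbed and re-emitted as yellow."""
    return tuple((1 - converted) * b + converted * y
                 for b, y in zip(BLUE_LED, YAG_YELLOW))

for f in (0.5, 0.7, 0.85):    # a thicker phosphor layer converts more blue
    r, g, b = mix(f)
    print(f"conversion {f:.0%}: R={r:.2f} G={g:.2f} B={b:.2f}")
```

Higher conversion fractions shift the mix away from blue toward yellow-red, which is exactly the warm-white adjustment described above (in practice achieved by varying the phosphor layer's thickness or loading).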
Some rare-earth-doped SiAlONs are photoluminescent and can serve as phosphors. Europium(II)-doped β-SiAlON absorbs in the ultraviolet and visible light spectrum and emits an intense broadband visible emission. Its luminance and color do not change significantly with temperature, due to its temperature-stable crystal structure. It has great potential as a green down-conversion phosphor for white LEDs; a yellow variant also exists. For white LEDs, a blue LED is used with a yellow phosphor, or with a green and yellow SiAlON phosphor and a red CaAlSiN3-based (CASN) phosphor.

White LEDs can also be made by coating near-ultraviolet (NUV) emitting LEDs with a mixture of high-efficiency europium-based red- and blue-emitting phosphors plus green-emitting copper- and aluminium-doped zinc sulfide (ZnS:Cu,Al). This is a method analogous to the way fluorescent lamps work.

A significant share of the white LEDs used in general lighting systems can now also be used for data transfer, for example in systems that assist positioning in closed spaces, helping people find rooms or objects.

Cathode ray tubes

Cathode ray tubes produce signal-generated light patterns in a (typically) round or rectangular format. Bulky CRTs were used in the black-and-white household television ("TV") sets that became popular in the 1950s, as well as first-generation, tube-based color TVs, and most earlier computer monitors. CRTs have also been widely used in scientific and engineering instrumentation, such as oscilloscopes, usually with a single phosphor color, typically green.

White (in black-and-white): the mix of zinc cadmium sulfide and zinc sulfide silver, ZnS:Ag + (Zn,Cd)S:Ag, is the white P4 phosphor used in black-and-white television CRTs.

Red: yttrium oxide-sulfide activated with europium is used as the red phosphor in color CRTs. The development of color TV took a long time due to the search for a red phosphor. The first red-emitting rare-earth phosphor, YVO4:Eu3+, was introduced by Levine and Palilla as a primary color in television in 1964. In single-crystal form, it was used as an excellent polarizer and laser material.

Yellow: when mixed with cadmium sulfide, the resulting zinc cadmium sulfide (Zn,Cd)S:Ag provides strong yellow light.

Green: the combination of zinc sulfide with copper, the P31 phosphor or ZnS:Cu, provides green light peaking at 531 nm, with long glow.

Blue: the combination of zinc sulfide with a few ppm of silver, ZnS:Ag, when excited by electrons, provides a strong blue glow with its maximum at 450 nm, with a short afterglow of 200 nanoseconds' duration. It is known as the P22B phosphor. This material, zinc sulfide silver, is still one of the most efficient phosphors in cathode ray tubes. It is used as a blue phosphor in color CRTs.

The phosphors are usually poor electrical conductors. This may lead to deposition of residual charge on the screen, effectively decreasing the energy of the impacting electrons due to electrostatic repulsion (an effect known as "sticking"). To eliminate this, a thin layer of aluminium is deposited over the phosphors and connected to the conductive layer inside the tube. This layer also reflects the phosphor light in the desired direction, and protects the phosphor from ion bombardment resulting from an imperfect vacuum.

To reduce image degradation by reflection of ambient light, contrast can be increased by several methods. In addition to black masking of unused areas of the screen, the phosphor particles in color screens are coated with pigments of matching color. For example, the red phosphors are coated with ferric oxide (replacing earlier Cd(S,Se), due to cadmium toxicity), and blue phosphors can be coated with marine blue (CoO·nAl2O3) or ultramarine (Na8Al6Si6O24S2).
Green phosphors based on ZnS:Cu do not have to be coated, due to their own yellowish color.

Standard phosphor types

| Phosphor | Composition | Color | Wavelength | Width | Decay | Usage | Notes |
|---|---|---|---|---|---|---|---|
| P1, GJ | Zn2SiO4:Mn (willemite) | Green | 528 nm | 40 nm | 1–100 ms | CRT, lamp | Oscilloscopes and monochrome monitors |
| P3 | Zn8BeSi5O19:Mn | Yellow | 602 nm | – | Medium/13 ms | CRT | Amber monochrome monitors |
| P4 | ZnS:Ag + (Zn,Cd)S:Ag | White | 565, 540 nm | – | Short | CRT | Black-and-white TV CRTs and display tubes |
| P4 (Cd-free) | ZnS:Ag + ZnS:Cu + Y2O2S:Eu | White | – | – | Short | CRT | Black-and-white TV CRTs and display tubes, Cd-free |
| P4, GE | ZnO:Zn | Green | 505 nm | – | 1–10 µs | VFD | Sole phosphor in vacuum fluorescent displays |
| P5 | – | Blue | 430 nm | – | Very short | CRT | Film |
| P7 | (Zn,Cd)S:Cu | Blue with yellow persistence | 558, 440 nm | – | Long | CRT | Radar PPI, old EKG monitors |
| P10 | KCl | Green-absorbing scotophor | – | – | Long | Dark-trace CRTs | Radar screens; turns from translucent white to dark magenta, stays changed until erased by heating or infrared light |
| P11, BE | ZnS:Ag,Cl or ZnS:Zn | Blue | 460 nm | – | 0.01–1 ms | CRT, VFD | Display tubes and VFDs |
| P14 | – | Blue with orange persistence | – | – | Medium/long | CRT | Radar PPI, old EKG monitors |
| P15 | ZnO:Zn | Blue-green | 504, 391 nm | – | Extremely short | CRT | Television pickup by flying-spot scanning |
| P19, LF | (KF,MgF2):Mn | Orange-yellow | 590 nm | – | Long | CRT | Radar screens |
| P20, KA | (Zn,Cd)S:Ag or (Zn,Cd)S:Cu | Yellow-green | 555 nm | – | 1–100 ms | CRT | Display tubes |
| P22R | Y2O2S:Eu + Fe2O3 | Red | 611 nm | – | Short | CRT | Red phosphor for TV screens |
| P22G | ZnS:Cu,Al | Green | 530 nm | – | Short | CRT | Green phosphor for TV screens |
| P22B | ZnS:Ag + Co-on-Al2O3 | Blue | – | – | Short | CRT | Blue phosphor for TV screens |
| P26, LC | (KF,MgF2):Mn | Orange | 595 nm | – | Long | CRT | Radar screens |
| P28, KE | (Zn,Cd)S:Cu,Cl | Yellow | – | – | Medium | CRT | Display tubes |
| P31, GH | ZnS:Cu or ZnS:Cu,Ag | Yellowish-green | – | – | 0.01–1 ms | CRT | Oscilloscopes |
| P33, LD | MgF2:Mn | Orange | 590 nm | – | > 1 s | CRT | Radar screens |
| P38, LK | (Zn,Mg)F2:Mn | Orange-yellow | 590 nm | – | Long | CRT | Radar screens |
| P39, GR | Zn2SiO4:Mn,As | Green | 525 nm | – | Long | CRT | Display tubes |
| P40, GA | ZnS:Ag + (Zn,Cd)S:Cu | White | – | – | Long | CRT | Display tubes |
| P43, GY | Gd2O2S:Tb | Yellow-green | 545 nm | – | Medium | CRT | Display tubes; electronic portal imaging devices (EPIDs) used in radiation-therapy linear accelerators for cancer treatment |
| P45, WB | Y2O2S:Tb | White | 545 nm | – | Short | CRT | Viewfinders |
| P46, KG | Y3Al5O12:Ce | Green | 530 nm | – | Very short | CRT | Beam-index tube |
| P47, BH | Y2SiO5:Ce | Blue | 400 nm | – | Very short | CRT | Beam-index tube |
| P53, KJ | Y3Al5O12:Tb | Yellow-green | 544 nm | – | Short | CRT | Projection tubes |
| P55, BM | ZnS:Ag,Al | Blue | 450 nm | – | Short | CRT | Projection tubes |
| – | ZnS:Cu,Al or ZnS:Cu,Au,Al | Green | 530 nm | – | – | CRT | – |
| – | Y2SiO5:Tb | Green | 545 nm | – | – | CRT | Projection tubes |
| – | Y2O2S:Tb | Green | 545 nm | – | – | CRT | Display tubes |
| – | Y3(Al,Ga)5O12:Ce | Green | 520 nm | – | Short | CRT | Beam-index tube |
| – | Y3(Al,Ga)5O12:Tb | Yellow-green | 544 nm | – | Short | CRT | Projection tubes |
| – | (Ba,Eu)Mg2Al16O27 | Blue | – | – | – | Lamp | Trichromatic fluorescent lamps |
| – | (Ce,Tb)MgAl11O19 | Green | 546 nm | 9 nm | – | Lamp | Trichromatic fluorescent lamps |
| BAM | BaMgAl10O17:Eu,Mn | Blue | 450 nm | – | – | Lamp, displays | Trichromatic fluorescent lamps |
| – | BaMg2Al16O27:Eu(II) | Blue | 450 nm | 52 nm | – | Lamp | Trichromatic fluorescent lamps |
| BAM | BaMgAl10O17:Eu,Mn | Blue-green | 456, 514 nm | – | – | Lamp | – |
| – | BaMg2Al16O27:Eu(II),Mn(II) | Blue-green | 456, 514 nm | 50 nm 50% | – | Lamp | – |
| – | Ce0.67Tb0.33MgAl11O19:Ce,Tb | Green | 543 nm | – | – | Lamp | Trichromatic fluorescent lamps |
fluorescent lamps| |CaSiO3:Pb,Mn||Orange-Pink||615 nm||83 nm||–||Lamp| |CaWO4 (Scheelite)||Blue||417 nm||–||–||Lamp||–| |CaWO4:Pb||Blue||433 nm/466 nm||111 nm||–||Lamp||Wide bandwidth| |MgWO4||Blue pale||473 nm||118 nm||–||Lamp||Wide bandwidth, deluxe blend component | |(Sr,Eu,Ba,Ca)5(PO4)3Cl||Blue||–||–||–||Lamp||Trichromatic fluorescent lamps| |Sr5Cl(PO4)3:Eu(II)||Blue||447 nm||32 nm||–||Lamp||–| |(Sr,Ca,Ba)10(PO4)6Cl2:Eu||Blue||453 nm||–||–||Lamp||Trichromatic fluorescent lamps| |Sr2P2O7:Sn(II)||Blue||460 nm||98 nm||–||Lamp||Wide bandwidth, deluxe blend component| |Sr6P5BO20:Eu||Blue-Green||480 nm||82 nm||–||Lamp||–| |Ca5F(PO4)3:Sb||Blue||482 nm||117 nm||–||Lamp||Wide bandwidth| |(Ba,Ti)2P2O7:Ti||Blue-Green||494 nm||143 nm||–||Lamp||Wide bandwidth, deluxe blend component | |Sr5F(PO4)3:Sb,Mn||Blue-Green||509 nm||127 nm||–||Lamp||Wide bandwidth| |Sr5F(PO4)3:Sb,Mn||Blue-Green||509 nm||127 nm||–||Lamp||Wide bandwidth| |LaPO4:Ce,Tb||Green||544 nm||–||–||Lamp||Trichromatic fluorescent lamps| |(La,Ce,Tb)PO4||Green||–||–||–||Lamp||Trichromatic fluorescent lamps| |(La,Ce,Tb)PO4:Ce,Tb||Green||546 nm||6 nm||–||Lamp||Trichromatic fluorescent lamps| |(Ca,Zn,Mg)3(PO4)2:Sn||Orange-Pink||610 nm||146 nm||–||Lamp||Wide bandwidth, blend component| |(Sr,Mg)3(PO4)2:Sn||Orange-Pinkish White||626 nm||120 nm||–||Fluorescent Lamps||Wide bandwidth, deluxe blend component| |(Sr,Mg)3(PO4)2:Sn(II)||Orange-Red||630 nm||–||–||Fluorescent Lamps||–| |Ca5F(PO4)3:Sb,Mn||3800K||–||–||–||Fluorescent Lamps||Lite-white blend| |Ca5(F,Cl)(PO4)3:Sb,Mn||White-Cold/Warm||–||–||–||Fluorescent Lamps||2600K to 9900K, for very high output lamps| |(Y,Eu)2O3||Red||–||–||–||Lamp||Trichromatic fluorescent lamps| |Y2O3:Eu(III)||Red||611 nm||4 nm||–||Lamp||Trichromatic fluorescent lamps| |Mg4(F)GeO6:Mn||Red||658 nm||17 nm||–||High Pressure Mercury Lamps||| |YVO4:Eu||Orange-Red||619 nm||–||–||High Pressure Mercury and Metal Halide Lamps||–| |3.5 MgO · 0.5 MgF2 · GeO2 :Mn||Red||655 nm||–||–||Lamp||3.5 MgO · 0.5 MgF2 · GeO2 :Mn| |Mg5As2O11:Mn||Red||660 nm||–||–||High Pressure Mercury Lamps, 1960s||–| |SrAl2O7:Pb||Ultraviolet||313 nm||–||–||Special Fluorescent Lamps for Medical use||Ultraviolet| |CAM||LaMgAl11O19:Ce||Ultraviolet||340 nm||52 nm||–||Black-light Fluorescent Lamps||Ultraviolet| |LAP||LaPO4:Ce||Ultraviolet||320 nm||38 nm||–||Medical and scientific U.V. 
Lamps||Ultraviolet| |SAC||SrAl12O19:Ce||Ultraviolet||295 nm||34 nm||–||Lamp||Ultraviolet| |SrAl11Si0.75O19:Ce0.15Mn0.15||Green||515 nm||22 nm||–||Lamp||Monochromatic lamps for copiers| |BSP||BaSi2O5:Pb||Ultraviolet||350 nm||40 nm||–||Lamp||Ultraviolet| |SBE||SrB4O7:Eu||Ultraviolet||368 nm||15 nm||–||Lamp||Ultraviolet| |SMS||Sr2MgSi2O7:Pb||Ultraviolet||365 nm||68 nm||–||Lamp||Ultraviolet| |MgGa2O4:Mn(II)||Blue-Green||–||–||–||Lamp||Black light displays| - Gd2O2S:Tb (P43), green (peak at 545 nm), 1.5 ms decay to 10%, low afterglow, high X-ray absorption, for X-ray, neutrons and gamma - Gd2O2S:Eu, red (627 nm), 850 µs decay, afterglow, high X-ray absorption, for X-ray, neutrons and gamma - Gd2O2S:Pr, green (513 nm), 7 µs decay, no afterglow, high X-ray absorption, for X-ray, neutrons and gamma - Gd2O2S:Pr,Ce,F, green (513 nm), 4 µs decay, no afterglow, high X-ray absorption, for X-ray, neutrons and gamma - Y2O2S:Tb (P45), white (545 nm), 1.5 ms decay, low afterglow, for low-energy X-ray - Y2O2S:Eu (P22R), red (627 nm), 850 µs decay, afterglow, for low-energy X-ray - Y2O2S:Pr, white (513 nm), 7 µs decay, no afterglow, for low-energy X-ray - Zn(0.5)Cd(0.4)S:Ag (HS), green (560 nm), 80 µs decay, afterglow, efficient but low-res X-ray - Zn(0.4)Cd(0.6)S:Ag (HSr), red (630 nm), 80 µs decay, afterglow, efficient but low-res X-ray - CdWO4, blue (475 nm), 28 µs decay, no afterglow, intensifying phosphor for X-ray and gamma - CaWO4, blue (410 nm), 20 µs decay, no afterglow, intensifying phosphor for X-ray - MgWO4, white (500 nm), 80 µs decay, no afterglow, intensifying phosphor - Y2SiO5:Ce (P47), blue (400 nm), 120 ns decay, no afterglow, for electrons, suitable for photomultipliers - YAlO3:Ce (YAP), blue (370 nm), 25 ns decay, no afterglow, for electrons, suitable for photomultipliers - Y3Al5O12:Ce (YAG), green (550 nm), 70 ns decay, no afterglow, for electrons, suitable for photomultipliers - Y3(Al,Ga)5O12:Ce (YGG), green (530 nm), 250 ns decay, low afterglow, for electrons, suitable for photomultipliers - CdS:In, green (525 nm), <1 ns decay, no afterglow, ultrafast, for electrons - ZnO:Ga, blue (390 nm), <5 ns decay, no afterglow, ultrafast, for electrons - ZnO:Zn (P15), blue (495 nm), 8 µs decay, no afterglow, for low-energy electrons - (Zn,Cd)S:Cu,Al (P22G), green (565 nm), 35 µs decay, low afterglow, for electrons - ZnS:Cu,Al,Au (P22G), green (540 nm), 35 µs decay, low afterglow, for electrons - ZnCdS:Ag,Cu (P20), green (530 nm), 80 µs decay, low afterglow, for electrons - ZnS:Ag (P11), blue (455 nm), 80 µs decay, low afterglow, for alpha particles and electrons - anthracene, blue (447 nm), 32 ns decay, no afterglow, for alpha particles and electrons - plastic (EJ-212), blue (400 nm), 2.4 ns decay, no afterglow, for alpha particles and electrons - Zn2SiO4:Mn (P1), green (530 nm), 11 ms decay, low afterglow, for electrons - ZnS:Cu (GS), green (520 nm), decay in minutes, long afterglow, for X-rays - NaI:Tl, for X-ray, alpha, and electrons - CsI:Tl, green (545 nm), 5 µs decay, afterglow, for X-ray, alpha, and electrons - 6LiF/ZnS:Ag (ND), blue (455 nm), 80 µs decay, for thermal neutrons - 6LiF/ZnS:Cu,Al,Au (NDg), green (565 nm), 35 µs decay, for neutrons - Emsley, John (2000). The Shocking History of Phosphorus. London: Macmillan. ISBN 0-330-39005-8.. - Peter W. Hawkes (1 October 1990). Advances in electronics and electron physics. Academic Press. pp. 350–. ISBN 978-0-12-014679-6. Retrieved 9 January 2012. - Bizarri, G; Moine, B (2005). "On phosphor degradation mechanism: thermal treatment effects". 
Journal of Luminescence 113 (3–4): 199. Bibcode:2005JLum..113..199B. doi:10.1016/j.jlumin.2004.09.119.
- Lakshmanan, p. 171
- Tanno, Hiroaki; Fukasawa, Takayuki; Zhang, Shuxiu; Shinoda, Tsutae; Kajiyama, Hiroshi (2009). "Lifetime Improvement of BaMgAl10O17:Eu2+ Phosphor by Hydrogen Plasma Treatment". Japanese Journal of Applied Physics 48 (9): 092303. Bibcode:2009JaJAP..48i2303T. doi:10.1143/JJAP.48.092303.
- Ntwaeaborwa, O. M.; Hillie, K. T.; Swart, H. C. (2004). "Degradation of Y2O3:Eu phosphor powders". Physica Status Solidi (c) 1 (9): 2366. Bibcode:2004PSSCR...1.2366N. doi:10.1002/pssc.200404813.
- Wang, Ching-Wu; Sheu, Tong-Ji; Su, Yan-Kuin; Yokoyama, Meiso (1997). "Deep Traps and Mechanism of Brightness Degradation in Mn-doped ZnS Thin-Film Electroluminescent Devices Grown by Metal-Organic Chemical Vapor Deposition". Japanese Journal of Applied Physics 36: 2728. Bibcode:1997JaJAP..36.2728W. doi:10.1143/JJAP.36.2728.
- Lakshmanan, pp. 51, 76
- PPT presentation in Polish
- Xie, Rong-Jun; Hirosaki, Naoto (2007). "Silicon-based oxynitride and nitride phosphors for white LEDs—A review". Sci. Technol. Adv. Mater. 8 (7–8): 588. Bibcode:2007STAdM...8..588X. doi:10.1016/j.stam.2007.08.005.
- Li, Hui-Li; Hirosaki, Naoto; Xie, Rong-Jun; Suehiro, Takayuki; Mitomo, Mamoru (2007). "Fine yellow α-SiAlON:Eu phosphors for white LEDs prepared by the gas-reduction–nitridation method". Sci. Technol. Adv. Mater. 8 (7–8): 601. Bibcode:2007STAdM...8..601L. doi:10.1016/j.stam.2007.09.003.
- Raymond Kane, Heinz Sell (2001). Revolution in Lamps: A Chronicle of 50 Years of Progress (2nd ed.). The Fairmont Press, Inc. ISBN 0-88173-378-4. Chapter 5 extensively discusses the history, application and manufacturing of phosphors for lamps.
- Youn-Gon Park; et al. "Luminescence and temperature dependency of β-SiAlON phosphor". Samsung Electro-Mechanics Co.
- Hideyoshi Kume, Nikkei Electronics (Sep 15, 2009). "Sharp to Employ White LED Using Sialon".
- Hirosaki Naoto; et al. (2005). "New sialon phosphors and white LEDs". Oyo Butsuri 74 (11): 1449.
- M.S. Fudin; et al. (2014). "Frequency characteristics of modern LED phosphor materials". Scientific and Technical Journal of Information Technologies, Mechanics and Optics 14 (6): 71.
- Levine, Albert K.; Palilla, Frank C. (1964). "A new, highly efficient red-emitting cathodoluminescent phosphor (YVO4:Eu) for color television". Applied Physics Letters 5 (6): 118. Bibcode:1964ApPhL...5..118L. doi:10.1063/1.1723611.
- Fields, R. A.; Birnbaum, M.; Fincher, C. L. (1987). "Highly efficient Nd:YVO4 diode-laser end-pumped laser". Applied Physics Letters 51 (23): 1885. Bibcode:1987ApPhL..51.1885F. doi:10.1063/1.98500.
- Shigeo Shionoya (1999). "VI: Phosphors for cathode ray tubes". Phosphor Handbook. Boca Raton, Fla.: CRC Press. ISBN 0-8493-7560-6.
- Jankowiak, Patrick. "Cathode Ray Tube Phosphors" (PDF). bunkerofdoom.com. Retrieved 1 May 2012.
- "Osram Sylvania fluorescent lamps". Retrieved 2009-06-06.
- Arunachalam Lakshmanan (2008). Luminescence and Display Phosphors: Phenomena and Applications. Nova Publishers. ISBN 1-60456-018-5.
- A history of electroluminescent displays.
- Fluorescence, Phosphorescence
- CRT Phosphor Characteristics (P numbers)
- Composition of CRT phosphors
- Safe Phosphors
- Silicon-based oxynitride and nitride phosphors for white LEDs—A review
- RCA Manual, Fluorescent screens (P1 to P24)
- Inorganic Phosphors: Compositions, Preparation and Optical Properties, William M. Yen and Marvin J. Weber
https://en.wikipedia.org/wiki/Phosphor
4.4375
Multiplying and Dividing Exponents Teacher Resources
Find Multiplying and Dividing Exponents educational ideas and activities.
Developing the Concept: Exponents and Powers of Ten
Here is an exponents lesson plan which invites learners to examine visual examples of multiplication and division using powers of 10. They also practice solving problems that their instructors model. If you are new to teaching these...
5th - 7th Math CCSS: Adaptable
What's the Power of a Quotient Rule?
What's the definition of the power of a quotient rule? You have a fraction that has variables and exponents in the numerator and the denominator, and the whole thing is raised to a power. Oh my! Don't cry! You can do this. Once you see the rule... (A worked example follows this list.)
6 mins 6th - 12th Math
Miss Integer Finds Her Properties in Order
Access prior knowledge to practice concepts like order of operations and exponents. Your class can play this game as a daily review or as a warm-up activity when needed. They work in groups of four to complete and correct review problems.
4th - 6th Math CCSS: Designed
Extending the Definitions of Exponents, Variation 1
Scientists work with negative integer exponents all the time. Here, participants will learn how to relate negative exponents to time and to generate equivalent numerical expressions. Learners will apply the properties of integer exponents...
7th - 9th Math CCSS: Designed
How Do You Evaluate an Expression with Exponents?
Given an algebraic expression and a value for the variable, use the substitution property of equality to plug in the given value and evaluate the expression. Be careful to apply the order of operations correctly, because there is an exponent...
3 mins 7th - 9th Math
Multiplying and Dividing in Scientific Notation - Grade 8
Here is a really nice set of resources on scientific notation. Eighth and ninth graders explore the concept of multiplying and dividing in scientific notation. In this multiplying and dividing numbers in scientific notation lesson,...
7th - 9th Math CCSS: Adaptable
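As a concrete anchor for two of the skills these resources rehearse, here is a worked example; the numbers are mine, not taken from any listed lesson:

```latex
% Power of a quotient: the exponent distributes over numerator and denominator.
\left(\frac{a}{b}\right)^{n} = \frac{a^{n}}{b^{n}},
\qquad\text{e.g.}\qquad
\left(\frac{2x}{5}\right)^{3} = \frac{(2x)^{3}}{5^{3}} = \frac{8x^{3}}{125}.

% Multiplying in scientific notation: multiply the coefficients, add the exponents.
(3 \times 10^{4})(2 \times 10^{3}) = (3)(2) \times 10^{4+3} = 6 \times 10^{7}.
```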
http://www.lessonplanet.com/lesson-plans/multiplying-and-dividing-exponents
4.125
Last glacial period
The last glacial period, popularly known as the Ice Age, was the most recent glacial period within the Quaternary glaciation, occurring during the last 100,000 years of the Pleistocene, from approximately 110,000 to 12,000 years ago. Scientists consider this "ice age" to be merely the latest glaciation event in a much larger ice age, one that dates back over two million years and has seen multiple glaciations. During this period, there were several oscillations between glacier advance and retreat. The Last Glacial Maximum, the maximum extent of glaciation within the last glacial period, was approximately 22,000 years ago. While the general pattern of global cooling and glacier advance was similar worldwide, local differences in the development of glacier advance and retreat make it difficult to compare the details from continent to continent.
From the point of view of human archaeology, it falls in the Paleolithic and Mesolithic periods. When the glaciation event started, Homo sapiens were confined to Africa and used tools comparable to those used by Neanderthals in Europe and the Levant and by Homo erectus in Asia. Near the end of the event, Homo sapiens spread into Europe, Asia, and Australia. The retreat of the glaciers allowed groups of Asians to migrate to the Americas and populate them.
Origin and definition
The last glacial period is sometimes colloquially referred to as the "last ice age", though this use is incorrect, because an ice age is a longer period of cold temperature in which ice sheets cover large parts of the Earth, such as Antarctica. Glacials, on the other hand, refer to colder phases within an ice age that separate interglacials. Thus, the end of the last glacial period is not the end of the last ice age. The end of the last glacial period was about 10,500 BCE, while the end of the last ice age has not yet come. Over the past few million years the glacial-interglacial cycle has been "paced" by periodic variations in the Earth's orbit via Milankovitch cycles, which are thus the "cause" of ice ages.
Overview
The last glacial period is the best-known part of the current ice age, and has been intensively studied in North America, northern Eurasia, the Himalaya and other formerly glaciated regions around the world. The glaciations that occurred during this glacial period covered many areas, mainly in the Northern Hemisphere and to a lesser extent in the Southern Hemisphere. They have different names, historically developed and depending on their geographic distributions: Fraser (in the Pacific Cordillera of North America), Pinedale (in the Central Rocky Mountains), Wisconsinan or Wisconsin (in central North America), Devensian (in the British Isles), Midlandian (in Ireland), Würm (in the Alps), Mérida (in Venezuela), Weichselian or Vistulian (in Northern Europe and northern Central Europe), Valdai in Eastern Europe and Zyryanka in Siberia, Llanquihue in Chile, and Otira in New Zealand. The geochronological Late Pleistocene comprises the late glacial (Weichselian) and the immediately preceding penultimate interglacial (Eemian) period. The last glaciation centered on the huge ice sheets of North America and Eurasia.
Considerable areas in the Alps, the Himalaya and the Andes were ice-covered, and Antarctica remained glaciated. Canada was nearly completely covered by ice, as was the northern part of the United States, both blanketed by the huge Laurentide ice sheet. Alaska remained mostly ice-free due to arid climate conditions. Local glaciations existed in the Rocky Mountains and the Cordilleran ice sheet, and as ice fields and ice caps in the Sierra Nevada in northern California. In Britain, mainland Europe, and northwestern Asia, the Scandinavian ice sheet once again reached the northern parts of the British Isles, Germany, Poland, and Russia, extending as far east as the Taimyr Peninsula in western Siberia. The maximum extent of western Siberian glaciation was reached approximately 16,000 to 15,000 BCE and thus later than in Europe (20,000–16,000 BCE). Northeastern Siberia was not covered by a continental-scale ice sheet. Instead, large but restricted icefield complexes covered mountain ranges within northeast Siberia, including the Kamchatka-Koryak Mountains.
The Arctic Ocean between the huge ice sheets of America and Eurasia was not frozen throughout, but, like today, probably was covered only by relatively shallow ice, subject to seasonal changes and riddled with icebergs calving from the surrounding ice sheets. According to the sediment composition retrieved from deep-sea cores, there must even have been times of seasonally open waters.
Outside the main ice sheets, widespread glaciation occurred on the Alps-Himalaya mountain chain. In contrast to the earlier glacial stages, the Würm glaciation was composed of smaller ice caps and mostly confined to valley glaciers, sending glacial lobes into the Alpine foreland. To the east, the Caucasus and the mountains of Turkey and Iran were capped by local ice fields or small ice sheets. In the Himalaya and the Tibetan Plateau, glaciers advanced considerably, particularly between 45,000–25,000 BCE, but these dates are controversial. The formation of a contiguous ice sheet on the Tibetan Plateau is disputed.
Other areas of the Northern Hemisphere did not bear extensive ice sheets, but local glaciers in high areas. Parts of Taiwan, for example, were repeatedly glaciated between 42,250 and 8,680 BCE, as were the Japanese Alps. In both areas maximum glacier advance occurred between 58,000 and 28,000 BCE (starting roughly during the Toba catastrophe). To a still lesser extent, glaciers existed in Africa, for example in the High Atlas, the mountains of Morocco, the Mount Atakor massif in southern Algeria, and several mountains in Ethiopia. In the Southern Hemisphere, an ice cap of several hundred square kilometers was present on the east African mountains in the Kilimanjaro Massif, Mount Kenya and the Ruwenzori Mountains, which still bear remnants of glaciers today.
Glaciation of the Southern Hemisphere was less extensive because of the current configuration of the continents. Ice sheets existed in the Andes (Patagonian Ice Sheet), where six glacier advances between 31,500 and 11,900 BCE have been reported in the Chilean Andes. Antarctica was entirely glaciated, much like today, but the ice sheet left no area uncovered. In mainland Australia only a very small area in the vicinity of Mount Kosciuszko was glaciated, whereas in Tasmania glaciation was more widespread. An ice sheet formed in New Zealand, covering all of the Southern Alps, where at least three glacial advances can be distinguished.
Local ice caps existed in Irian Jaya, Indonesia, where remnants of the Pleistocene glaciers are still preserved in three ice areas today.
Named local glaciations
Antarctica glaciation
During the last glacial period Antarctica was blanketed by a massive ice sheet, much as it is today. The ice covered all land areas and extended into the ocean onto the middle and outer continental shelf. According to ice modelling, ice over central East Antarctica was generally thinner than today.
Devensian & Midlandian glaciation (Britain and Ireland)
The name Devensian glaciation is used by British geologists and archaeologists and refers to what is often popularly meant by the latest Ice Age. Irish geologists, geographers, and archaeologists refer to the Midlandian glaciation, as its effects in Ireland are largely visible in the Irish Midlands. The name Devensian is derived from the Latin Dēvenses, people living by the Dee (Dēva in Latin), a river on the Welsh border near which deposits from the period are particularly well represented. The effects of this glaciation can be seen in many geological features of England, Wales, Scotland, and Northern Ireland. Its deposits have been found overlying material from the preceding Ipswichian Stage and lying beneath those from the following Flandrian stage of the Holocene.
Weichselian glaciation (Scandinavia and Northern Europe)
Alternative names include Weichsel glaciation or Vistulian glaciation (referring to the Polish river Vistula or its German name, Weichsel). Evidence suggests that the ice sheets were at their maximum size for only a short period, between 25,000 and 13,000 BP. Eight interstadials have been recognized in the Weichselian, including the Oerel, Glinde, Moershoofd, Hengelo, and Denekamp; however, correlation with isotope stages is still in progress. During the glacial maximum in Scandinavia, only the western parts of Jutland were ice-free, and a large part of what is today the North Sea was dry land connecting Jutland with Britain (see Doggerland). It is also in Denmark that the only Scandinavian ice-age animals older than 13,000 BC are found.
The Baltic Sea, with its unique brackish water, is a result of meltwater from the Weichsel glaciation combining with saltwater from the North Sea when the straits between Sweden and Denmark opened. Initially, when the ice began melting about 10,300 BP, seawater filled the isostatically depressed area, a temporary marine incursion that geologists dub the Yoldia Sea. Then, as post-glacial isostatic rebound lifted the region about 9500 BP, the deepest basin of the Baltic became a freshwater lake, referred to in palaeological contexts as Ancylus Lake, which is identifiable in the freshwater fauna found in sediment cores. The lake was filled by glacial runoff, but as worldwide sea level continued to rise, saltwater again breached the sill about 8000 BP, forming a marine Littorina Sea, which was followed by another freshwater phase before the present brackish marine system was established. "At its present state of development, the marine life of the Baltic Sea is less than about 4000 years old," Drs. Thulin and Andrushaitis remarked when reviewing these sequences in 2003.
Overlying ice had exerted pressure on the Earth's surface. As a result of the melting ice, the land has continued to rise yearly in Scandinavia, mostly in northern Sweden and Finland, where the land is rising at a rate of as much as 8–9 mm per year, or nearly 1 meter in 100 years.
This is important for archaeologists, since a site that was coastal in the Nordic Stone Age is now inland and can be dated by its relative distance from the present shore.
Würm glaciation (Alps)
The term Würm is derived from a river in the Alpine foreland, approximately marking the maximum glacier advance of this particular glacial period. The Alps were where the first systematic scientific research on ice ages was conducted by Louis Agassiz at the beginning of the 19th century. Here the Würm glaciation of the last glacial period was intensively studied. Pollen analysis, the statistical analysis of microfossilized plant pollens found in geological deposits, chronicled the dramatic changes in the European environment during the Würm glaciation. During the height of Würm glaciation, c. 24,000–10,000 BP, most of western and central Europe and Eurasia was open steppe-tundra, while the Alps presented solid ice fields and montane glaciers. Scandinavia and much of Britain were under ice.
During the Würm, the Rhône Glacier covered the whole western Swiss plateau, reaching today's regions of Solothurn and Aarau. In the region of Bern it merged with the Aar glacier. The Rhine Glacier is currently the subject of the most detailed studies. Glaciers of the Reuss and the Limmat advanced sometimes as far as the Jura. Montane and piedmont glaciers formed the land by grinding away virtually all traces of the older Günz and Mindel glaciations, by depositing base moraines and terminal moraines of different retraction phases and loess deposits, and by the pro-glacial rivers' shifting and redepositing of gravels. Beneath the surface, they had a profound and lasting influence on geothermal heat and the patterns of deep groundwater flow.
Pinedale or Fraser glaciation (Rocky Mountains)
The Pinedale (central Rocky Mountains) or Fraser (Cordilleran ice sheet) glaciation was the last of the major glaciations to appear in the Rocky Mountains in the United States. The Pinedale lasted from approximately 30,000 to 10,000 years ago and was at its greatest extent between 23,500 and 21,000 years ago. This glaciation was somewhat distinct from the main Wisconsin glaciation, as it was only loosely related to the giant ice sheets and was instead composed of mountain glaciers merging into the Cordilleran Ice Sheet. The Cordilleran ice sheet produced features such as glacial Lake Missoula, which would break free from its ice dam, causing the massive Missoula floods. USGS geologists estimate that the cycle of flooding and reformation of the lake lasted an average of 55 years and that the floods occurred approximately 40 times over the 2,000-year period between 15,000 and 13,000 years ago. Glacial lake outburst floods such as these are not uncommon today in Iceland and other places.
Wisconsin glaciation
The Wisconsin Glacial Episode was the last major advance of continental glaciers in the North American Laurentide ice sheet. At the height of glaciation, the Bering land bridge potentially permitted migration of mammals, including people, to North America from Siberia. It radically altered the geography of North America north of the Ohio River. At the height of the Wisconsin Episode glaciation, ice covered most of Canada, the Upper Midwest, and New England, as well as parts of Montana and Washington. On Kelleys Island in Lake Erie or in New York's Central Park, the grooves left by these glaciers can be easily observed.
In southwestern Saskatchewan and southeastern Alberta, a suture zone between the Laurentide and Cordilleran ice sheets formed the Cypress Hills, the northernmost point in North America that remained south of the continental ice sheets. The Great Lakes are the result of glacial scour and pooling of meltwater at the rim of the receding ice. When the enormous mass of the continental ice sheet retreated, the Great Lakes began gradually moving south due to isostatic rebound of the north shore. Niagara Falls is also a product of the glaciation, as is the course of the Ohio River, which largely supplanted the prior Teays River.
In its retreat, the Wisconsin Episode glaciation left terminal moraines that form Long Island, Block Island, Cape Cod, Nomans Land, Martha's Vineyard, Nantucket, Sable Island, and the Oak Ridges Moraine in south-central Ontario, Canada. In Wisconsin itself, it left the Kettle Moraine. The drumlins and eskers formed at its melting edge are landmarks of the Lower Connecticut River Valley.
Tahoe, Tenaya, and Tioga (Sierra Nevada)
In the Sierra Nevada, there are three named stages of glacial maxima (sometimes incorrectly called ice ages) separated by warmer periods. These glacial maxima are called, from oldest to youngest, Tahoe, Tenaya, and Tioga. The Tahoe reached its maximum extent perhaps about 70,000 years ago. Little is known about the Tenaya. The Tioga was the least severe and last of the Wisconsin Episode. It began about 30,000 years ago, reached its greatest advance 21,000 years ago, and ended about 10,000 years ago.
Greenland glaciation
In Northwest Greenland, ice coverage attained a very early maximum in the last glacial period, around 114,000 years ago. After this early maximum, the ice coverage was similar to today's until the end of the last glacial period. Towards the end, glaciers readvanced once more before retreating to their present extent. According to ice core data, the Greenland climate was dry during the last glacial period, with precipitation reaching perhaps only 20% of today's value.
Mérida glaciation (Venezuelan Andes)
The name Mérida Glaciation is proposed to designate the alpine glaciation which affected the central Venezuelan Andes during the Late Pleistocene. Two main moraine levels have been recognized: one between 2600 and 2700 m, and another between 3000 and 3500 m elevation. The snow line during the last glacial advance was lowered approximately 1200 m below the present snow line (3700 m). The glaciated area in the Cordillera de Mérida was approximately 600 km2; this included the following high areas from southwest to northeast: Páramo de Tamá, Páramo Batallón, Páramo Los Conejos, Páramo Piedras Blancas, and Teta de Niquitao. Approximately 200 km2 of the total glaciated area was in the Sierra Nevada de Mérida, and of that amount, the largest concentration, 50 km2, was in the areas of Pico Bolívar, Pico Humboldt (4,942 m), and Pico Bonpland (4,893 m). Radiocarbon dating indicates that the moraines are older than 10,000 years B.P., and probably older than 13,000 years B.P. The lower moraine level probably corresponds to the main Wisconsin glacial advance. The upper level probably represents the last glacial advance (Late Wisconsin).
Llanquihue glaciation (Southern Andes)
The Llanquihue glaciation takes its name from Llanquihue Lake in southern Chile, a fan-shaped piedmont glacial lake. On the lake's western shores there are large moraine systems, of which the innermost belong to the last glacial period.
Llanquihue Lake's varves are a node point in southern Chile's varve geochronology. During the last glacial maximum, the Patagonian Ice Sheet extended over the Andes from about 35°S to Tierra del Fuego at 55°S. The western part appears to have been very active, with wet basal conditions, while the eastern part was cold-based. Cryogenic features like ice wedges, patterned ground, pingos, rock glaciers, palsas, soil cryoturbation, and solifluction deposits developed in unglaciated extra-Andean Patagonia during the Last Glaciation; however, not all of these reported features have been verified. The area west of Llanquihue Lake was ice-free during the LGM and had sparsely distributed vegetation dominated by Nothofagus. Valdivian temperate rainforest was reduced to scattered remnants on the western side of the Andes.
See also
- Current sea level rise
- Glacial history of Minnesota
- Glacial lake outburst flood
- Glacial period
- Ice age
- Last Glacial Maximum
- Timeline of glaciation
- Valparaiso Moraine
References
- Clayton, Lee; Attig, John W.; Mickelson, David M.; Johnson, Mark D.; Syverson, Kent M. "Glaciation of Wisconsin" (PDF). Dept. Geology, University of Wisconsin.
- Crowley, Thomas J. (1995). "Ice age terrestrial carbon changes revisited". Global Biogeochemical Cycles 9 (3): 377–389. doi:10.1029/95GB01107.
- Clark, D.H. Extent, timing, and climatic significance of latest Pleistocene and Holocene glaciation in the Sierra Nevada, California (PDF) (Ph.D.). Seattle: Washington University.
- Möller, P.; et al. (2006). "Severnaya Zemlya, Arctic Russia: a nucleation area for Kara Sea ice sheets during the Middle to Late Quaternary" (PDF). Quaternary Science Reviews 25 (21–22): 2894–2936. doi:10.1016/j.quascirev.2006.02.016.
- Matti Saarnisto: Climate variability during the last interglacial-glacial cycle in NW Eurasia. Abstracts of PAGES – PEPIII: Past Climate Variability Through Europe and Africa, 2001.
- Gualtieri, Lyn; et al. (May 2003). "Pleistocene raised marine deposits on Wrangel Island, northeast Siberia and implications for the presence of an East Siberian ice sheet". Quaternary Research 59 (3): 399–410. doi:10.1016/S0033-5894(03)00057-7.
- Ehlers & Gibbard 2004 III, pp. 321–323.
- Barr, I.D.; Clark, C.D. (2011). "Glaciers and Climate in Pacific Far NE Russia during the Last Glacial Maximum". Journal of Quaternary Science 26 (2): 227. doi:10.1002/jqs.1450.
- Spielhagen, Robert F.; et al. (2004). "Arctic Ocean deep-sea record of northern Eurasian ice sheet history". Quaternary Science Reviews 23 (11–13): 1455–83. doi:10.1016/j.quascirev.2003.12.015.
- Williams, Jr., Richard S.; Ferrigno, Jane G. (1991). "Glaciers of the Middle East and Africa – Glaciers of Turkey" (PDF). U.S. Geological Survey Professional Paper 1386-G-1. Ferrigno, Jane G. (1991). "Glaciers of the Middle East and Africa – Glaciers of Iran" (PDF). U.S. Geological Survey Professional Paper 1386-G-2.
- Owen, Lewis A.; et al. (2002). "A note on the extent of glaciation throughout the Himalaya during the global Last Glacial Maximum". Quaternary Science Reviews 21 (1): 147–157. doi:10.1016/S0277-3791(01)00104-4.
- Kuhle, M.; Kuhle, S. (2010). "Review on Dating methods: Numerical Dating in the Quaternary of High Asia". Journal of Mountain Science 7: 105–122.
- Chevalier, Marie-Luce; et al. (2011). "Constraints on the late Quaternary glaciations in Tibet from cosmogenic exposure ages of moraine surfaces". Quaternary Science Reviews 30: 528–554. doi:10.1016/j.quascirev.2010.11.005.
- Kuhle, Matthias (2002). "A relief-specific model of the ice age on the basis of uplift-controlled glacier areas in Tibet and the corresponding albedo increase as well as their positive climatological feedback by means of the global radiation geometry". Climate Research 20: 1–7. doi:10.3354/cr020001.
- Ehlers & Gibbard 2004 III, Kuhle, M. "The High Glacial (Last Ice Age and LGM) ice cover in High and Central Asia". Quaternary Glaciations - Extent and Chronology. pp. 175–199. ISBN 9780444534477.
- Lehmkuhl, F. (2003). "Die eiszeitliche Vergletscherung Hochasiens – lokale Vergletscherungen oder übergeordneter Eisschild?". Geographische Rundschau 55 (2): 28–33.
- Zhijiu Cui; et al. (2002). "The Quaternary glaciation of Shesan Mountain in Taiwan and glacial classification in monsoon areas". Quaternary International 97–98: 147–153. doi:10.1016/S1040-6182(02)00060-5.
- Yugo Ono; et al. (September–October 2005). "Mountain glaciation in Japan and Taiwan at the global Last Glacial Maximum". Quaternary International 138–139: 79–92. doi:10.1016/j.quaint.2005.02.007.
- Young, James A.T.; Hastenrath, Stefan (1991). "Glaciers of the Middle East and Africa – Glaciers of Africa" (PDF). U.S. Geological Survey Professional Paper 1386-G-3.
- Lowell, T.V.; et al. (1995). "Interhemispheric correlation of late Pleistocene glacial events" (PDF). Science 269 (5230): 1541–9. doi:10.1126/science.269.5230.1541. PMID 17789444.
- Ollier, C.D. "Australian Landforms and their History". National Mapping Fab. Geoscience Australia.
- Burrows, C. J.; Moar, N. T. (1996). "A mid Otira Glaciation palaeosol and flora from the Castle Hill Basin, Canterbury, New Zealand" (PDF). New Zealand Journal of Botany 34 (4): 539–545. doi:10.1080/0028825X.1996.10410134.
- Allison, Ian; Peterson, James A. (1988). Glaciers of Irian Jaya, Indonesia: Observation and Mapping of the Glaciers Shown on Landsat Images. ISBN 0-607-71457-3. U.S. Geological Survey Professional Paper 1386.
- Anderson, J. B.; Shipp, S. S.; Lowe, A. L.; Wellner, J. S.; Mosola, A. B. (2002). "The Antarctic Ice Sheet during the Last Glacial Maximum and its subsequent retreat history: a review". Quaternary Science Reviews 21 (1–3): 49–70. doi:10.1016/S0277-3791(01)00083-X.
- Ehlers & Gibbard 2004 III, Ingolfsson, O. Quaternary glacial and climate history of Antarctica (PDF). pp. 3–43.
- Huybrechts, P. (2002). "Sea-level changes at the LGM from ice-dynamic reconstructions of the Greenland and Antarctic ice sheets during the glacial cycles". Quaternary Science Reviews 21 (1–3): 203–231. doi:10.1016/S0277-3791(01)00082-8.
- Behre, Karl-Ernst; van der Plicht, Johannes (1992). "Towards an absolute chronology for the last glacial period in Europe: radiocarbon dates from Oerel, northern Germany". Vegetation History and Archaeobotany 1 (2): 111–117. doi:10.1007/BF00206091.
- Davis, Owen K. (2003). "Non-Marine Records: Correlations with the Marine Sequence". Introduction to Quaternary Ecology, University of Arizona web site.
- "Brief geologic history". Rocky Mountain National Park.
- "Ice Age Floods". U.S. National Park Service.
- Waitt, Jr., Richard B. (October 1985). "Case for periodic, colossal jökulhlaups from Pleistocene glacial Lake Missoula". Geological Society of America Bulletin 96 (10): 1271–86. doi:10.1130/0016-7606(1985)96<1271:CFPCJF>2.0.CO;2.
- Ehlers & Gibbard 2004 II, p. 57.
- Funder, Svend (1990). "Late Quaternary stratigraphy and glaciology in the Thule area, Northwest Greenland". MoG Geoscience 22: 63.
- Johnsen, Sigfus J.; et al. (1992). "A "deep" ice core from East Greenland". MoG Geoscience 29: 22.
- Schubert, Carlos (1998). "Glaciers of Venezuela". US Geological Survey (USGS P 1386-I).
- Schubert, C.; Valastro, S. (1974). "Late Pleistocene glaciation of Páramo de La Culata, north-central Venezuelan Andes" (PDF). Geologische Rundschau 63 (2): 516–538. doi:10.1007/BF01820827.
- Mahaney, William C.; Milner, M.W.; Kalm, Volli; Dirsowzky, Randy W.; Hancock, R.G.V.; Beukens, Roelf P. (1 April 2008). "Evidence for a Younger Dryas glacial advance in the Andes of northwestern Venezuela". Geomorphology 96 (1–2): 199–211. doi:10.1016/j.geomorph.2007.08.002.
- Maximiliano, B.; Orlando, G.; Juan, C.; Ciro, S. "Glacial Quaternary geology of las Gonzales basin, páramo los conejos, Venezuelan Andes".
- Trombotto Liaudat, Darío (2008). "Geocryology of Southern South America". In Rabassa, J. The Late Cenozoic of Patagonia and Tierra del Fuego. pp. 255–268. ISBN 978-0-444-52954-1.
- Adams, Jonathan. "South America during the last 150,000 years".
- Bowen, D.Q. (1978). Quaternary Geology: A Stratigraphic Framework for Multidisciplinary Work. Oxford, UK: Pergamon Press. ISBN 978-0-08-020409-3.
- Ehlers, J.; Gibbard, P.L., eds. (2004). Quaternary Glaciations: Extent and Chronology 2: Part II North America. Amsterdam: Elsevier. ISBN 0-444-51462-7.
- Ehlers, J.; Gibbard, P.L., eds. (2004). Quaternary Glaciations: Extent and Chronology 3: Part III: South America, Asia, Africa, Australia, Antarctica. Amsterdam: Elsevier. ISBN 0-444-51593-3.
- Gillespie, A.R.; Porter, S.C.; Atwater, B.F. (2004). The Quaternary Period in the United States [of America]. Developments in Quaternary Science 1. Amsterdam: Elsevier. ISBN 978-0-444-51471-4.
- Harris, A.G.; Tuttle, E.; Tuttle, S.D. (1997). Geology of National Parks (5th ed.). Iowa: Kendall/Hunt. ISBN 0-7872-5353-7.
- Kuhle, M. (1988). "The Pleistocene Glaciation of Tibet and the Onset of Ice Ages — An Autocycle Hypothesis". GeoJournal 17 (4): 581–596. doi:10.1007/BF00209444.
- Mangerud, J.; Ehlers, J.; Gibbard, P., eds. (2004). Quaternary Glaciations: Extent and Chronology 1: Part I Europe. Amsterdam: Elsevier. ISBN 0-444-51462-7.
- Sibrava, V.; Bowen, D.Q.; Richmond, G.M. (1986). "Quaternary Glaciations in the Northern Hemisphere". Quaternary Science Reviews 5: 1–514. doi:10.1016/S0277-3791(86)80002-6.
- Pielou, E.C. (1991). After the Ice Age: The Return of Life to Glaciated North America. Chicago, IL: University of Chicago Press. ISBN 0-226-66812-6.
- Pielou, E. C. After the Ice Age: The Return of Life to Glaciated North America (University of Chicago Press: 1992).
- National Atlas of the USA: Wisconsin Glaciation in North America: Present state of knowledge.
- Ray, N.; Adams, J.M. (2001). "A GIS-based Vegetation Map of the World at the Last Glacial Maximum (25,000–15,000 BP)" (PDF). Internet Archaeology 11.
https://en.wikipedia.org/wiki/Last_glacial_period
4.40625
2 Answers
To analyze narrative perspective, you identify the perspective from which the story is being told and the omniscience or limitedness of the information known and conveyed. There are two possible perspectives from which to tell a story: from without the story and from within the story. There are several degrees of knowledge conveyed: only personal knowledge, knowledge of one or more characters, or knowledge of all the characters. Let's elaborate on these.
If a story is told from a perspective that is without (outside of) the story, the narratorial voice is not a character in the story. The narratorial voice can be thought of as the voice of an oral storyteller: someone who recounts a story without any personal involvement in it. If a story is told from within (inside of) the story, the narratorial voice is a character in the story. The narratorial voice can be thought of as belonging to a character who has a share in the action, conflict, and resolution that comprise the story. This may be a central character (often the main character), or it may be a minor character who is a participant and observer--or maybe even just an observer.
When the story is told from a narratorial perspective without the story, the narrator may be fully omniscient and know the thoughts, feelings, motives, and emotions of every character, and thus be able to reveal anything any character thinks or feels. On the other hand, this external type of narrator may be limited in perspective, with knowledge of only one or a few of the characters' thoughts, feelings, etc. Other characters would be reported on based only on their words, actions, and visible attitudes--things readily observable to the narrator.
When the story is told from a narratorial perspective from within the story, the narrator is limited to what they themselves feel, think, or desire. In other words, the only thoughts, feelings, emotions, or motives they know are their own. They also know what they can observe of other characters' actions, words, or visible attitudes, and they can report what other characters confide to them about their own inner feelings, thoughts, or motives.
So to analyze the narratorial perspective, you locate the narrator within or without the story and identify the level of knowledge present. Then you can label the perspective as third person (without the story, using he, she, and it) with limited knowledge, which is called limited third person, or as third person with omniscient knowledge, which is called omniscient third person. Or you can label it as first person (within the story, using I, me, my, mine, we, us, etc., as well as he and she) with limited knowledge, which is called first person.
The narrative perspective determines by whom the story is actually told; most common are:
- a first person narrator, which means the narrator is also a character in the story who gives his or her view on what is happening. As a consequence, you don't always know how other characters think or feel.
- a third person narrator, which means every character is referred to as 'he', 'she', or 'they'. The narrator is not a character in the story. Because of this, the narrator can give all the information he/she wishes to give.
To analyse the perspective you simply look at how the story is told. If it is a first person narrator, you try to find out who this person is and whether you think this character is reliable or not.
Do you need these questions answered for a particular book or story?
http://www.enotes.com/homework-help/how-do-analyse-narrative-perspective-whats-271616
4.1875
Tubes of Ice Hold Record of Climate In Past and Future
Published: July 20, 1993
As the great ice sheet began melting some 17,000 years ago, the largest accumulation of water was Lake Agassiz, far larger than Lake Superior and covering much of south-central Canada. As long as ice blocked its drainage east, into the St. Lawrence Valley, Lake Agassiz overflowed into the Mississippi. But at critical times the ice retreated far enough for the lake to flood eastward, reaching the North Atlantic instead of the Gulf of Mexico. Dr. James Kennett of the University of California at Santa Barbara said such changes were evident in sediment extracted from the Gulf, as well as in the canyons carved by the sudden eastward outpourings of Lake Agassiz.
This explanation would not, however, apply to the sudden climate changes that, according to the cores, occurred between the last two ice ages. Flooding of the North Atlantic with fresh water, according to Dr. Wallace Broecker of the Lamont-Doherty Earth Observatory, could interrupt the circulation that brings the Gulf Stream north. The most extreme cooling during the Younger Dryas occurred near the North Atlantic, but its signature is also seen in the Antarctic ice and even, it is reported, in the sediment of the Santa Barbara Channel off California, making it a global event.
Another explanation for the sudden temperature changes seen in the Greenland ice cores is large-scale slippages, or "surges," of continental ice into the sea. When some glaciers reach a critical stage, their flow increases many times. It has been proposed that as the bottom of an ice sheet is warmed by heat from the earth's interior, it becomes slushy, allowing the ice to slip. Cores extracted from sediment under the eastern Atlantic have revealed at least five layers of Canadian pebbles, showing that at certain times, many thousands of years apart, North America shed armies of icebergs that almost reached Europe.
The American drilling reached bottom this month and is still being analyzed. The European drilling was completed last year and, as reported last Thursday in the journal Nature, the full length of the core has been analyzed, showing temperature history for the past 250,000 years. Ice from an even earlier period, larded with silt and pebbles, has been extracted, but the Europeans expressed concern that layering near the bottom might have been disturbed by motion of the ice over the bedrock. An airborne Danish radar capable of penetrating the ice had shown the rock under both the European and American sites to be relatively flat. Nevertheless, as pointed out by Dr. Mayewski, some movement of the deepest ice seems to have occurred.
It is hoped that Russian drilling into the Antarctic ice at Vostok will provide a far longer record, reaching 500,000 years into the past. The ice at Vostok is much thicker and was formed where annual snowfall is minimal. In the drilling, now suspended for the southern winter, ice 160,000 years old has been reached, but thousands of feet remain to be penetrated.
The ice samples now in hand may be able to answer many mysteries, including disputes about volcanic eruptions. Microscopic fragments of glass from a specific eruption can be identified and can now be dated by counting annual layers in the ice. A long-debated question has been the date of the giant volcanic explosion that wiped out the Minoan city of Thera in the Aegean Sea and may have provided the basis for the Atlantis legend described in Plato's dialogues.
Dating of Eruption
That event has now been dated by the European drillers at 1645 B.C., with an error margin of seven years. Ash from the eruption has been found deep in the sediment of the eastern Mediterranean and in the Nile delta, leading to speculation that the event could be the basis for the biblical plagues of Egypt. A month ago, American and French scientists reported finding it at four sites in the Black Sea. Its origin in the Thera explosion can be verified by analysis of the chemical and optical properties of the volcanic glass.
Dr. Gregory A. Zielinski of the center here, who is analyzing the Greenland cores, said in a telephone interview that because the ice from that period also contained ash from a great Alaskan eruption, he could not be sure of the Thera layer before studying the glass now in hand.
Among other great volcanic eruptions tentatively identified in the ice cores is one whose glass shards have also been found at the South Pole. The fact that this material spread to both polar regions may indicate that the volcano was near the Equator. Similarity of the shards to those from a recent eruption of El Chichon in Mexico has led Dr. Zielinski and his colleagues to propose that the source may have been that volcano.
Photo caption: Michael C. Morrison, associate director of the Greenland Ice Sheet Project 2, examining a core sample of ice in a storage van containing samples dating back tens of thousands of years. The van is in a parking lot at the University of New Hampshire in Durham, N.H. This bar represents about 20 years of snowfall. (Tad Ackman)
http://www.nytimes.com/1993/07/20/science/tubes-of-ice-hold-record-of-climate-in-past-and-future.html?pagewanted=2&src=pm
4
Definition of Coronavirus
Coronavirus: One of a group of RNA viruses, so named because they look like a corona or halo when viewed under the electron microscope. The corona or halo is due to an array of surface projections on the viral envelope. The coronavirus genome is a single strand of RNA 32 kilobases long and is the largest known RNA virus genome. Coronaviruses are also unusual in that they have the highest known frequency of recombination of any positive-strand RNA virus, promiscuously combining genetic information from different sources.
Coronaviruses are ubiquitous. They are the second leading cause of the common cold (after the rhinoviruses). Members of the coronavirus family cause major illnesses among animals, including hepatitis (inflammation of the liver) in mice, gastroenteritis (inflammation of the digestive system) in pigs, and respiratory infections in birds. Soon after the start of the outbreak of SARS (severe acute respiratory syndrome) in 2002-2003, a coronavirus emerged as one of the leading suspects. A new coronavirus was, in fact, discovered to be the agent responsible for SARS.
The first coronavirus was isolated in 1937. It was the avian infectious bronchitis virus, which can cause devastating disease in chicken flocks. Since then, related coronaviruses have been found to infect cattle, pigs, horses, turkeys, cats, dogs, rats, and mice. The first human coronavirus was cultured in the 1960s from the nasal cavities of people with the common cold. Two human coronaviruses, OC43 and 229E, cause about 30% of common colds. The SARS coronavirus is different and distinct from them and from all other known coronaviruses.
Coronaviruses are very unusual viruses. They have a genome of over 30,000 nucleotides and so are gigantic, as viruses go. They are also unusual in how they replicate themselves. Coronaviruses have a two-step replication mechanism. (Many RNA virus genomes contain a single, large gene that is translated by the cellular machinery of the host to produce all viral proteins.) Coronaviruses can contain up to 10 separate genes. Most ribosomes translate the biggest one of these genes, called replicase, which by itself is twice the size of many other RNA viral genomes. The replicase gene produces a series of enzymes that use the rest of the genome as a template to produce a set of smaller, overlapping messenger RNA molecules, which are then translated into the so-called structural proteins -- the building blocks of new viral particles.
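The two-step expression strategy described in the last paragraph can be made concrete with a toy model. This is a deliberately simplified sketch: the five-gene layout, the gene names, and the rule that ribosomes translate only the 5'-most gene of an mRNA are simplifications for illustration, not a complete biological description.

```python
# Toy model (illustrative only) of two-step coronavirus gene expression.
genome = ["replicase", "S", "E", "M", "N"]  # simplified 5'->3' gene order

def translate(mrna):
    # Simplification: ribosomes translate only the 5'-most gene of an mRNA.
    return mrna[0]

# Step 1: the full genome serves as an mRNA, yielding the replicase enzymes.
proteins = [translate(genome)]

# Step 2: replicase generates a nested set of smaller, overlapping mRNAs,
# each a 3'-co-terminal suffix of the genome with a different gene at its 5' end.
subgenomic_mrnas = [genome[i:] for i in range(1, len(genome))]
proteins += [translate(m) for m in subgenomic_mrnas]

print(proteins)  # ['replicase', 'S', 'E', 'M', 'N'] -- the structural proteins
```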
http://www.medicinenet.com/script/main/art.asp?articlekey=22789
4.125
Math and Literature, Grades 6-8
From Quack and Count to Harry Potter, the imaginative ideas in children's books come to life in math lessons through this unique series. Each resource provides more than 20 classroom-tested lessons that engage children in mathematical problem solving and reasoning. Each lesson features an overview, materials required, and a vignette of how the lesson actually unfolded in a classroom. This book includes a reference chart indicating the mathematical concept each lesson covers, such as number, geometry, patterns, algebra, measurement, data analysis, or probability.
Contents:
- A Drop of Water
- The Greedy Triangle
- Harry Potter and the Sorcerer's Stone
- How Big Is a Foot?
- How Much Is a Million?
- One Inch Tall
- Spaghetti and Meatballs for All
- Tikki Tikki Tembo
- What's Faster Than a Speeding Cheetah?
- The King's Giraffe
https://books.google.com/books?id=yuwdL9eukPYC&dq=related:ISBN0821835009&source=gbs_similarbooks_r&hl=en
4.03125
Edict of Nantes, French Édit de Nantes, law promulgated at Nantes in Brittany on April 13, 1598, by Henry IV of France, which granted a large measure of religious liberty to his Protestant subjects, the Huguenots. The edict was accompanied by Henry IV's own conversion from Huguenot Calvinism to Roman Catholicism and brought an end to the violent Wars of Religion that began in 1562. The controversial edict was one of the first decrees of religious tolerance in Europe and granted unheard-of religious rights to the French Protestant minority.
The edict upheld Protestants in freedom of conscience and permitted them to hold public worship in many parts of the kingdom, though not in Paris. It granted them full civil rights, including access to education, and established a special court, the Chambre de l'Édit, composed of both Protestants and Catholics, to deal with disputes arising from the edict. Protestant pastors were to be paid by the state and released from certain obligations. Militarily, the Protestants could keep the places they were still holding in August 1597 as strongholds, or places de sûreté, for eight years, the expenses of garrisoning them being met by the king.
The edict also restored Catholicism in all areas where Catholic practice had been interrupted and made any extension of Protestant worship in France legally impossible. Nevertheless, it was much resented by Pope Clement VIII, by the Roman Catholic clergy in France, and by the parlements. Catholics tended to interpret the edict in its most restrictive sense. The Cardinal de Richelieu, who regarded its political and military clauses as a danger to the state, annulled them by the Peace of Alès in 1629. On October 18, 1685, Louis XIV formally revoked the Edict of Nantes and deprived the French Protestants of all religious and civil liberties. Within a few years, more than 400,000 persecuted Huguenots emigrated—to England, Prussia, Holland, and America—depriving France of its most industrious commercial class.
http://www.britannica.com/event/Edict-of-Nantes
4.1875
August 18, 2011 Fossil Sheds Light On Evolution Of Whales’ Mouths Scientists have identified a critical step in the evolution of filter-feeding whales' enormous mouths. These whales, otherwise known as baleen whales or mysticetes, have feeding adaptations that are unique among mammals, in that they can filter small marine creatures from huge volumes of water. The whales accomplish this by using their "loose" lower jaw joints, which enable them to produce a vast filter-feeding gape.A new study of this ancient jawbone has overturned a long-held belief about how baleen whales evolved, finding that nature's largest mouths likely evolved to suck in large prey rather than to engulf plankton-filled water. The researchers from Australia and the United States found that the fossilized prehistoric jaw differed greatly from the mouths of today's baleen whales. In modern whales, the lower jaw does not unite at the "chin", but instead consists of a specialized jaw joint that allows each side to rotate. By having two curved lower jawbones that rotate in this manner, modern baleen whales are able to create vast gapes to take in large quantities of water and prey. The study provides "compelling evidence that these archaic baleen whales could not expand and rotate their lower jaws, which enables living baleen whales to engulf and expel huge volumes of seawater when filter feeding on krill and other tiny animals," lead researcher Dr. Erich Fitzgerald from the Museum Victoria in Melbourne, Australia, told BBC News. However, it is important to note that the fossilized whale, dubbed Janjucetus hunderi, did have a wide upper jaw, something Dr. Fitzgerald said was the earliest step in the evolution of modern whales' enormous mouths. Dr. Fitzgerald charted the anatomical features of whales on an "evolutionary tree" - from Janjucetus hunderi to today's blue whale. "I was able to discover the sequence of jaw evolution from the earliest whales to the modern giants of the sea," he told BBC News reporter Victoria Gill. The chart showed that "the first step towards the huge mouths of baleen whales may have been increasing the width of the upper jaw [to] suck fish and squid into the mouth one-at-a-time." "The loose lower jaw joint that enables living baleen whales to greatly expand their mouths when filter feeding evolved later." This particular whale was so primitive that it had "ordinary" teeth, and had not yet evolved its comb-like baleen. The fossilized jawbone analyzed in the study was discovered in the 1970s in a coastal town in Victoria, Australia. "I first saw [it] while visiting a private collection in 2008," said Dr. Fitzgerald. "I immediately recognized the characteristic shape of the lower jaws of a whale." Researcher Jeremy Goldbogen from the Cascadia Research Collective in Washington, an expert in the feeding strategies of modern whales, described bulk filter feeding as "one of the most fascinating adaptations in the animal kingdom". "An important point to note is that bulk filter feeding using [rotating jawbones] does not necessarily mean that suction is not used," he told BBC's Gill. "A prime example of this are grey whales which are notorious suction filter feeders," he noted. Dr. Fitzgerald described the whales' mouths as an elegant example of an exaptation, in which a feature evolved to serve a particular function but was later co-opted into a new role. He believes that its wide jaw helped Janjucetus to suck in large singe prey items, such as squid or fish, and didn't evolve for filter-feeding at all. 
"Charles Darwin reflected upon this in The Origin of Species. He wondered how you could go from a whale that has big teeth like Janjucetus does and catching fish and squid one at a time, to something like a modern Blue Whale that feeds en masse," he said in a press release. "This is the kind of fossil paleontologists dream of finding because it shows a transitional form." "It's an exciting discovery, but actually not as surprising as you might think," he concluded. "Evolution by natural selection implies that we should expect to find these kinds of fossils in the rocks." The findings were published Wednesday in the journal Biology Letters. Image 1: Illustration of the biggest mouth in history at work. The Blue Whale can expand its mouth to gulp huge volumes of krill-filled water. Credit: Carl Buell/Museum Victoria Image 2: The fossilised jaws of Janjucetus, clearly showing the immobile symphysis at the tip. Credit: Jon Augier/Museum Victoria On the Net:
http://www.redorbit.com/news/science/2097590/fossil_sheds_light_on_evolution_of_whales_mouths/
4.09375
Helping Children and Adolescents Cope with Violence and Disasters: What Community Members Can Do Each year, children experience violence and disaster and face other traumas. Young people are injured, they see others harmed by violence, they suffer sexual abuse, and they lose loved ones or witness other tragic and shocking events. Community members—teachers, religious leaders, and other adults—can help children overcome these experiences and start the process of recovery. What is trauma? “Trauma” is often thought of as physical injuries. Psychological trauma is an emotionally painful, shocking, stressful, and sometimes life-threatening experience. It may or may not involve physical injuries, and can result from witnessing distressing events. Examples include a natural disaster, physical or sexual abuse, and terrorism. Disasters such as hurricanes, earthquakes, and floods can claim lives, destroy homes or whole communities, and cause serious physical and psychological injuries. Trauma can also be caused by acts of violence. The September 11, 2001 terrorist attack is one example. Mass shootings in schools or communities and physical or sexual assault are other examples. Traumatic events threaten people’s sense of safety. Reactions (responses) to trauma can be immediate or delayed. Reactions to trauma differ in severity and cover a wide range of behaviors and responses. Children with existing mental health problems, past traumatic experiences, and/or limited family and social supports may be more reactive to trauma. Frequently experienced responses among children after trauma are loss of trust and a fear of the event happening again. It’s important to remember: - Children’s reactions to trauma are strongly influenced by adults’ responses to trauma. - People from different cultures may have their own ways of reacting to trauma. Commonly experienced responses to trauma among children: Children age 5 and under may react in a number of ways including: - Showing signs of fear - Clinging to parent or caregiver - Crying or screaming - Whimpering or trembling - Moving aimlessly - Becoming immobile - Returning to behaviors common to being younger - Being afraid of the dark. Children age 6 to 11 may react by: - Isolating themselves - Becoming quiet around friends, family, and teachers - Having nightmares or other sleep problems - Refusing to go to bed - Becoming irritable or disruptive - Having outbursts of anger - Starting fights - Being unable to concentrate - Refusing to go to school - Complaining of physical problems - Developing unfounded fears - Becoming depressed - Expressing guilt over what happened - Feeling numb emotionally - Doing poorly with school and homework - Loss of interest in fun activities. Adolescents age 12 to 17 may react by: - Having flashbacks to the event (flashbacks are the mind reliving the event) - Having nightmares or other sleep problems - Avoiding reminders of the event - Using or abusing drugs, alcohol, or tobacco - Being disruptive, disrespectful, or behaving destructively - Having physical complaints - Feeling isolated or confused - Being depressed - Being angry - Loss of interest in fun activities - Having suicidal thoughts. Adolescents may feel guilty. They may feel guilt for not preventing injury or deaths. They also may have thoughts of revenge. What can community members do following a traumatic event? Community members play important roles by helping children who experience violence or disaster. 
They help children cope with trauma and protect them from further trauma exposure. It is important to remember: - Children should be allowed to express their feelings and discuss the event, but not be forced. - Community members should identify and address their own feelings; this may allow them to help others more effectively. - Community members can also use their buildings and institutions as gathering places to promote support. - Community members can help people identify resources and emphasize community strengths and resources that sustain hope. Community members need to be sensitive to: - Difficult behavior - Strong emotions - Different cultural responses. Community members can help in finding mental health professionals to: - Counsel children - Help them see that fears are normal - Offer play therapy - Offer art therapy - Help children develop coping skills, problem-solving skills, and ways to deal with fear. Finally, community members can hold parent meetings to discuss the event, their child’s response, how help is being given to their child, how parents can help their child, and other available support. How can adults help children and adolescents who experienced trauma? Helping children can start immediately, even at the scene of the event. Most children recover within a few weeks of a traumatic experience, while some may need help longer. Grief, a deep emotional response to loss, may take months to resolve. Children may experience grief over the loss of a loved one, teacher, friend, or pet. Grief may be re-experienced or worsened by news reports or the event’s anniversary. Some children may need help from a mental health professional. Some people may seek other kinds of help from community leaders. Identify children who need support and help them obtain it. Examples of problematic behaviors could be: - Refusal to go places that remind them of the event - Emotional numbness - Dangerous behavior - Unexplained anger/rage - Sleep problems including nightmares. Adult helpers should: Pay attention to children - Listen to them - Accept/do not argue about their feelings - Help them cope with the reality of their experiences. Reduce effects of other stressors, such as - Frequent moving or changes in place of residence - Long periods away from family and friends - Pressures to perform well in school - Transportation problems - Fighting within the family - Being hungry. Monitor healing: - It takes time - Do not ignore severe reactions - Pay attention to sudden changes in behaviors, speech, language use, or in strong emotions. Remind children that adults - Love them - Support them - Will be with them when possible. Help for all people in the first days and weeks There are steps adults can take following a disaster that can help them cope, making it easier to provide better care for children. These include creating safe conditions, remaining calm and friendly, and connecting with others. Being sensitive to people under stress and respecting their decisions is important.
When possible, help people: - Get food - Get a safe place to live - Get help from a doctor or nurse if hurt - Contact loved ones or friends - Keep children with parents or relatives - Understand what happened - Understand what is being done - Know where to get help Do not: - Force people to tell their stories - Probe for personal details - Say things like “everything will be OK,” or “at least you survived” - Say what you think people should feel or how people should have acted - Say people suffered because they deserved it - Be negative about available help - Make promises that you can’t keep such as “you will go home soon.” More about trauma and stress Some children will have prolonged mental health problems after a traumatic event. These may include grief, depression, anxiety, and post-traumatic stress disorder (PTSD). Some trauma survivors get better with some support. Others may need prolonged care by a mental health professional. If, after a month in a safe environment, children are not able to perform their normal routines or new behavioral or emotional problems develop, then contact a health professional. Factors influencing how one may respond to trauma include: - Being directly involved in the trauma, especially as a victim - Severe and/or prolonged exposure to the event - Personal history of prior trauma - Family or personal history of mental illness and severe behavioral problems - Limited social support; lack of caring family and friends - On-going life stressors such as moving to a new home, or new school, divorce, job change, or financial troubles. Some symptoms may require immediate attention. Contact a mental health professional if these symptoms occur: - Racing heart and sweating - Being easily startled - Being emotionally numb - Being very sad or depressed - Thoughts or actions to end one’s life. Access to disaster help and resources: - Centers for Disease Control and Prevention - Federal Emergency Management Agency - National Center for PTSD - The National Child Traumatic Stress Network - Substance Abuse and Mental Health Services Administration Disaster Distress Helpline - Uniformed Services University of the Health Sciences Center for the Study of Traumatic Stress - U.S. Department of Justice Office for Victims of Crime If you or someone you know is in crisis or thinking of suicide, get help quickly. - Call your doctor. - Call 911 for emergency services or go to the nearest emergency room. - Call the toll-free 24-hour hotline of the National Suicide Prevention Lifeline at 1-800-273-TALK (1-800-273-8255); TTY: 1-800-799-4TTY (4889). Where can I find more information? For more information on conditions that affect mental health, resources, and research, go to MentalHealth.gov at http://www.mentalhealth.gov, the NIMH website at http://www.nimh.nih.gov, or contact us at: National Institute of Mental Health Office of Science Policy, Planning, and Communications Science Writing, Press, and Dissemination Branch 6001 Executive Boulevard Room 6200, MSC 9663 Bethesda, MD 20892–9663 Phone: 301-443-4513 or 1-866-615-NIMH (6464) toll-free TTY: 301-443-8431 or 1-866-415-8051 toll-free This publication is in the public domain and may be reproduced or copied without permission from NIMH. We encourage you to reproduce it and use it in your efforts to improve public health. Citation of the National Institute of Mental Health as a source is appreciated.
However, using government materials inappropriately can raise legal or ethical concerns, so we ask you to use these guidelines: - NIMH does not endorse or recommend any commercial products, processes, or services, and our publications may not be used for advertising or endorsement purposes. - NIMH does not provide specific medical advice or treatment recommendations or referrals; our materials may not be used in a manner that has the appearance of such information. - NIMH requests that non-Federal organizations not alter our publications in ways that will jeopardize the integrity and “brand” when using the publication. - Addition of non-Federal Government logos and website links may not have the appearance of NIMH endorsement of any specific commercial products or services or medical treatments or services. If you have questions regarding these guidelines and use of NIMH publications, please contact the NIMH Information Resource Center at 1-866-615-6464 or e-mail at [email protected]. U.S. Department of Health and Human Services National Institutes of Health National Institute of Mental Health NIH Publication No. 14–3519 NIH…Turning Discovery Into Health
http://www.nimh.nih.gov/health/publications/helping-children-and-adolescents-cope-with-violence-and-disasters-community-members/index.shtml
4.09375
Four hundred years ago this week, a previously unseen star suddenly appeared in the night sky. Discovered on Oct. 9, 1604, it was brighter than all other stars. The German astronomer Johannes Kepler studied the star for a year, and wrote a book about it titled "De Stella Nova" ("The New Star"). In the 1940s scientists realized the object was an exploded star, and they called it Kepler's supernova. No supernova in our galaxy has been discovered since the 1604 event. Now the combined efforts of three powerful space observatories have produced a colorful picture of an expanding cloud of gas and dust that is a remnant of the supernova. The image is expected to help astronomers understand these violent and enigmatic events. The scene is about 13,000 light-years away. Last week, NASA announced three bursts of energy in faraway galaxies that might signal stars about to explode. Such an explosion is how the most massive stars end their lives, and the result is often the formation of a black hole. Spotting such supernovas in advance would be a boon to astronomers, who do not fully understand the death throes of a dying star. Supernovas create all the elements of the universe -- the stuff of planets, plants and people. The stages of the explosions, modeled on computers, have been described as resembling a lava lamp. Meanwhile, instead of observing what actually happens, scientists are left to study the remnants of Kepler's supernova and similar leftovers of relatively nearby explosions. In the new picture, released today, a bubble-shaped shroud of gas and dust 14 light-years wide surrounds the exploded star. The bubble is expanding at 4 million mph (2,000 kilometers per second), astronomers said. It slams into interstellar material, setting up shock waves that agitate molecules and create light of various wavelengths. The image combines data from the Chandra X-ray Observatory, the infrared Spitzer Space Telescope, and visible light collected by the Hubble Space Telescope. The infrared and X-ray data -- invisible to the eye -- have been colorized to make the image useful to astronomers. "Multiwavelength studies are absolutely essential for putting together a complete picture of how supernova remnants evolve," said Ravi Sankrit of Johns Hopkins University. Visible light is shown as yellow, revealing where the supernova shock wave is slamming into the densest regions of surrounding gas. Bright knots are thick clumps of material caused by instabilities that form behind the shock wave, researchers say. Thin filaments show where the shock wave passes through interstellar material that is more uniformly distributed and of lower density. Infrared data, in red, shows microscopic dust particles that have been heated by the shock wave. Blue areas are X-rays that come from very hot gas or extremely high-energy particles squeezed into action. Green represents lower-energy X-rays from cooler gas. "When the analysis is complete, we will be able to answer several important questions about this enigmatic object," said William Blair, also of Johns Hopkins and co-leader of the study with Sankrit. Kepler's supernova remnant is just one of several under study. One thing is clear: Material that a dying star sends into space takes on a variety of dramatic shapes. And interestingly, our own solar system is thought to reside in a huge cavity, riddled with pockets and tunnels all carved out by exploded stars, long ago.
Here are some questions and answers related to Kepler's supernova, provided by the Space Telescope Science Institute, which operates Hubble for NASA: How often does a star explode as a supernova? In a typical galaxy like our Milky Way, a supernova pops off about every 100 years. From our earthly vantage point, we cannot see every supernova that occurs in our galaxy because interstellar dust obscures our sight. The Kepler supernova, which occurred 400 years ago, is the last supernova seen inside the disk of our Milky Way. So, statistically, we are overdue for witnessing another stellar blast. Curiously, the Kepler supernova was seen to explode 30 years after Tycho Brahe witnessed a stellar explosion in our galaxy. The nearest recent supernova seen was 1987A, which astronomers spied in 1987 in our galactic neighbor, the Large Magellanic Cloud. Why are supernovas important? All stars make heavy chemical elements like carbon and oxygen through a process called nuclear fusion, where lighter elements are fused together to make heavier elements. Many chemical elements heavier than iron, such as gold and uranium, are produced in the heat and pressure of supernova explosions. These heavy elements enrich the interstellar medium, providing the building blocks for stars and planets, like Earth. What kind of star produces a supernova? Two types of stars generate supernovas. The first type, called a type Ia supernova, is produced by a star's burned-out core. This stellar relic, called a white dwarf, siphons hydrogen from a companion star, thereby making it 1.4 times more massive than our Sun [called the Chandrasekhar limit]. This excess bulk leads to explosive burning of carbon and other chemical elements that make up the white dwarf. A star that is more than eight times as massive as our Sun generates the second type, called type II. When the star runs out of nuclear fuel, the core collapses. Then the surrounding layers crash onto the core and bounce back, ripping apart the outer layers. The supernova was first seen in 1604. Is that when the star exploded? No, the explosion occurred thousands of years ago, but the light of the explosion only reached Earth in 1604. Why did it take so long for the light to reach us? It has to do with distance. The supernova is about 13,000 light-years away. A light-year is the distance that light can travel in a year -- about 6 trillion miles (10 trillion kilometers). Because the supernova is 13,000 light-years away, it took 13,000 years for light from the exploded star to reach Earth.
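To make the distance arithmetic above concrete, here is a minimal Python sketch; the speed of light and year length are standard constants, and the 13,000 light-year figure comes from the article:
c_km_per_s = 299_792.458                      # speed of light, km/s
seconds_per_year = 365.25 * 24 * 3600         # one Julian year, in seconds
light_year_km = c_km_per_s * seconds_per_year
print(f"One light-year is about {light_year_km:.2e} km")  # ~9.46e12 km, close to the '10 trillion km' quoted

distance_ly = 13_000                          # distance to Kepler's supernova, from the article
print(f"Light seen in 1604 left the star roughly {distance_ly:,} years earlier,")
print(f"i.e. around {distance_ly - 1604:,} BC.")  # a rough figure; calendar subtleties ignored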
http://www.space.com/412-supernova-400-year-explosion-imaged.html
4.21875
Current limiting is the practice in electrical or electronic circuits of imposing an upper limit on the current that may be delivered to a load, with the purpose of protecting the circuit generating or transmitting the current from harmful effects due to a short-circuit or similar problem in the load. The simplest form of current limiting for mains is a fuse. As the current exceeds the fuse's limit, it blows, thereby disconnecting the load from the source. This method is most commonly used for protecting the household mains. A circuit breaker is another device for mains current limiting. Compared to circuit breakers, fuses attain faster current limitation by means of arc quenching. Since fuses are passive elements, they are inherently secure. Their drawback is that once blown, they need to be replaced. Inrush current limiting An inrush current limiter is a device or group of devices used to limit inrush current. Negative temperature coefficient (NTC) thermistors and resistors are two of the simplest options, with cool-down time and power dissipation being their main drawbacks, respectively. More complex solutions can be used when design constraints make simpler options unfeasible. In electronic power circuits Electronic circuits like regulated DC power supplies and power amplifiers employ, in addition to fuses, active current limiting, since a fuse alone may not be able to protect the internal devices of the circuit in an over-current or short-circuit situation. A fuse generally is too slow in operation, and the time it takes to blow may well be enough to destroy the devices. A typical short-circuit/overload protection scheme is shown in the image. The schematic is representative of a simple protection mechanism employed in regulated DC supplies and class-AB power amplifiers‡. Q1 is the pass or output transistor. Rsens is the load current sensing device. Q2 is the protection transistor which turns on as soon as the voltage across Rsens becomes about 0.65 V. This voltage is determined by the value of Rsens and the load current through it (Iload). When Q2 turns on, it removes base current from Q1, thereby reducing the collector current of Q1. Neglecting the base currents of Q1 and Q2, the collector current of Q1 is also the load current. Thus, Rsens fixes the maximum current to a value given by 0.65/Rsens, for any given output voltage and load resistance. For example, if Rsens = 0.33 Ω, the current is limited to about 2 A even if Rload becomes a short (and Vo becomes zero). In the absence of Q2, Q1 would attempt to drive a very large current (limited only by Rsens, and dependent on the output voltage Vo if Rload is not zero) and the result would be greater power dissipation in Q1. If Rload is zero the dissipation will be much greater (enough to destroy Q1). With Q2 in place, the current is limited and the maximum power dissipation in Q1 is also limited to a safe value (though this is also dependent on Vcc, Rload and the current-limited Vo). Further, this power dissipation will remain as long as the overload exists, which means that the devices must be capable of withstanding it for a substantial period. For example, the pass-transistor in a regulated DC power supply system (corresponding to Q1 in the schematic above) rated for 25 V at 1.5 A (with limiting at 2 A) will normally (i.e. with rated load of 1.5 A) dissipate about 7.5 W for a Vcc of 30 V‡‡ (1).
With current limiting, the dissipation will increase to about 60 W if the output is shorted‡‡ (2). Without current limiting the dissipation would be greater than 300 W‡‡ (3) - so limiting does have a benefit, but it turns out that the pass-transistor must now be capable of dissipating at least 60 W. In short, an 80-100 W device will be needed (for an expected overload and limiting) where a 10-20 W device (with no chance of shorted load) would have been sufficient. In this technique, beyond the current limit the output voltage will decrease to a value depending on the current limit and load resistance. ‡ – For class-AB stages, the circuit will be mirrored vertically and complementary devices will be used for Q1 & Q2. ‡‡ – The following conditions are considered for determining the power dissipation in Q1, with Vo = 25 V, Iload = 1.5 A (limit at 2 A), Rsens = 0.33 Ω (for limiting at 2 A) and Vcc = 30 V — - Normal operation: Vo = 25 V at a load current of 1.5 A. So Q1 dissipates a power of (30 - 25) V * 1.5 A = 7.5 W. The transistor used must be a 10-20 W device to account for ambient temperature (i.e., derated) and must be mounted on a heat-sink. - Output shorted, with limiting at 2 A: The dissipation is given by (30 - 0.65) V * 2 A = 58.7 W. The 0.65 V is the drop across Rsens. In practice, if the power supply Vcc is not able to provide the maximum short-circuit current it will collapse, thereby reducing dissipation in Q1. However this is dependent on how "stiff" the supply is. A stiffer supply will sustain the voltage for a heavier current draw before collapsing. Further, the transistor used must be an 80-100 W device to account for ambient temperature (i.e., derated) and must be mounted on a heat-sink. - Output shorted, and no limiting: A shorted load will mean that only Rsens is present as the load. With this, the circuit will attempt to put 25 V across Rsens (0.33 Ω) - here the output voltage has to be measured at the emitter of Q1 since Q1 is connected as an emitter-follower and the lower end of Rsens is effectively grounded due to the short. Thus the load current (and collector current of Q1) becomes nearly 76 A, and the dissipation in Q1 becomes (30 - 25) V * 76 A = 380 W. This is a very large power to dissipate, since in normal circumstances Q1 will only be required to dissipate about 7.5 W (60 W at worst with limiting), and even a 100 W transistor will not withstand a 380 W dissipation. Without Rsens (i.e., Q1 emitter is directly connected to the load) the situation is even worse — Q1 becomes a dead short across 30 V and will draw current limited only by its internal resistance. In practice, the dissipation will be less because the supply (Vcc) will collapse under such a condition. However the dissipation will still be enough to destroy Q1. Single power-supply circuits An issue with the previous circuit is that Q1 will not be saturated unless its base is biased about 0.5 volts above Vcc. The circuits at right and left operate more efficiently from a single (Vcc) supply. In both circuits, R1 allows Q1 to turn on and pass voltage and current to the load. When the current through R_sense exceeds the design limit, Q2 begins to turn on, which in turn begins to turn off Q1, thus limiting the load current. The optional component R2 protects Q2 in the event of a short-circuited load. When Vcc is at least a few volts, a MOSFET can be used for Q1 for lower dropout voltage. Due to its simplicity, this circuit is sometimes used as a current source for high-power LEDs.
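The three worked conditions above are easy to verify numerically. Below is a minimal Python sketch that recomputes the pass-transistor dissipation for each case using the same component values (Vcc = 30 V, Rsens = 0.33 Ω, Q2 turn-on at 0.65 V); it is a back-of-the-envelope model of the schematic, not a circuit simulation:
Vcc = 30.0      # supply voltage, V
Rsens = 0.33    # current-sense resistor, ohms
Vth = 0.65      # voltage across Rsens at which Q2 turns on, V

# 1. Normal operation: Vo = 25 V at the rated 1.5 A load.
Vo, Iload = 25.0, 1.5
print(f"Normal operation: {(Vcc - Vo) * Iload:.1f} W")           # ~7.5 W

# 2. Output shorted, limiting active: current clamps near Vth/Rsens.
Ilimit = Vth / Rsens                                             # ~2 A
print(f"Shorted, with limiting: {(Vcc - Vth) * Ilimit:.1f} W")   # ~58 W

# 3. Output shorted, no limiting: the circuit tries to hold 25 V across Rsens.
Ishort = 25.0 / Rsens                                            # ~76 A
print(f"Shorted, no limiting: {(Vcc - 25.0) * Ishort:.0f} W")    # ~379 W
The small differences from the figures in the text come only from rounding: the text rounds the limit current to exactly 2 A and the short-circuit current to 76 A.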
Slew rate control Many electronics designers put a small resistor on IC output pins. This slows the edge rate, which improves electromagnetic compatibility. Some devices have this "slew rate limiting" output resistor built in; others offer programmable slew rate limiting, which provides overall slew rate control.
https://en.wikipedia.org/wiki/Current_limiting
4.125
What is sepsis? Sepsis is a serious medical condition that can result in organ damage or death. It happens when the body’s immune system has a severe response to an infection. Sepsis is a medical emergency. It needs to be treated right away. Bacteria, viruses, and fungi can invade your body and cause disease. When your body senses one of these, the immune system responds. Your body releases certain chemicals into the blood that can help fight infection. In some cases, the body has an abnormal and severe response to infection. This can cause inflammation around the body and damage your body’s cells. Blood clots may start to form all over the body. Some blood vessels may start to leak. Blood flow and blood pressure may start to drop. This harms the body’s organs by stopping oxygen and nutrients from reaching them. If this process isn’t stopped, organs in the body can stop working. This can lead to death. Sepsis can be called different things according to how severe it is. Systemic inflammatory response syndrome (SIRS) is the mildest form. Sepsis, severe sepsis, and septic shock are more severe forms. Sepsis is a common cause of death in hospital intensive care units. It can affect people of all ages, but children and older adults are at highest risk. What causes sepsis? Sepsis never happens on its own. It always starts with an infection somewhere in your body, such as: - Lung infection - Urinary tract infection - Skin infection - Abdominal infection (like from appendicitis) Bacteria often cause these infections. Viruses, parasites, and fungi can also cause them and lead to sepsis. In some cases, the bacteria enter the body through a medical device such as a blood vessel catheter. An infection that spreads around the body through the bloodstream is more likely to cause sepsis. An infection in just one part of the body is less likely to lead to sepsis. Sepsis is sometimes called blood poisoning, but this is misleading. Sepsis isn’t caused by poison. Who is at risk for sepsis? Some health problems that impair your ability to fight infection can raise your risk for sepsis, such as: - Liver disease - Severe burns - Conditions that affect the immune system Careful treatment of these health conditions may help reduce the risk of sepsis. What are the symptoms of sepsis? Symptoms and signs of sepsis can include: - Fever or abnormally low temperature - Confusion - Trouble breathing - Rapid heart rate and breathing rate - Low blood pressure - Signs of reduced blood flow to one or more organs - Less urine The symptoms may vary depending on the severity of the sepsis. These symptoms may be mild at first and then quickly get worse. How is sepsis diagnosed? To diagnose sepsis, a doctor will ask about your medical history and your symptoms. He or she will do a physical exam. Some of the symptoms of early sepsis are the same as other medical conditions. This can make sepsis hard to diagnose in its early stages. An exam of the heart, lungs, and abdomen is needed to help diagnose sepsis. You may also have tests, such as: - Urine tests to look for signs of infection in your urine, and check kidney function - Blood tests to look for signs of infection in your blood - Imaging tests such as a chest X-ray, computed tomography (CT) scan, or other tests to look for the site of infection A doctor will often diagnose SIRS in a person with certain signs. These include an abnormal body temperature, rapid heart and breathing rate, and abnormal white blood cell count but no known source of infection.
A doctor can make an official diagnosis of sepsis when these symptoms are present and there is a clear source of infection. These problems, plus low blood pressure or reduced blood flow to one or more organs, constitute severe sepsis. Septic shock is when severe sepsis continues even with very active treatment. How is sepsis treated? Treatment is often done in a hospital’s intensive care unit (ICU). This is because sepsis needs very active care. Vital signs such as heart rate will be constantly watched. Blood and urine tests will be done often. Your condition will be watched and your treatment adjusted as often as needed. The source of the sepsis must be treated. To do this, your doctor will likely use medications. The first treatment may be an antibiotic that works on many types of bacteria. When the exact type of bacteria is known, a different medication may be given. Pockets of infection, called abscesses, may need to be drained. In some cases, an infected part of the body may need to be removed with surgery. A person with sepsis will also need other types of treatments to help support the body, such as: - Extra oxygen, to keep up normal oxygen levels - Intravenous fluids, to help bring blood pressure and blood flow to organs back to normal - A breathing tube and a ventilator, if the person has trouble breathing - Dialysis, in case of kidney failure - Medications to raise the blood pressure - Other treatments to prevent problems such as deep vein thrombosis and pressure ulcers Most people with mild sepsis do recover. But even with intense treatment, some people die from sepsis. Up to half of all people with severe sepsis will die from it. What are the possible complications of sepsis? Many people survive sepsis without any lasting problems. Other people may have serious problems from sepsis, such as organ damage. Some of the possible complications of sepsis include: - Kidney failure - Tissue death (gangrene) of fingers or toes that may require amputation - Permanent lung damage from acute respiratory distress syndrome - Permanent brain damage, which can cause memory problems or more severe symptoms - Later impairment of your immune system, which can increase the risk of future infections - Damage to the heart valves (endocarditis), which can lead to heart failure When should I call the doctor? Call or see a doctor right away if you or someone else has symptoms of sepsis. Early diagnosis and treatment can help improve the chances of a good recovery. Sepsis is a serious medical condition that can result in organ damage or death. It happens when the body’s immune system has a severe response to an infection. - Sepsis is a medical emergency. It needs to be treated right away. - Possible signs and symptoms of sepsis include fever, confusion, trouble breathing, rapid heart rate, and very low blood pressure. - The infection that caused sepsis will be treated first. Health care providers will also treat the symptoms of sepsis with medications, fluids, and breathing support. - Sepsis can cause serious complications. These include kidney failure, gangrene, and death. Tips to help you get the most from a visit to your health care provider: - Before your visit, write down questions you want answered. - Bring someone with you to help you ask questions and remember what your provider tells you. - At the visit, write down the names of new medicines, treatments, or tests, and any new instructions your provider gives you.
- If you have a follow-up appointment, write down the date, time, and purpose for that visit. - Know how you can contact your provider if you have questions. Online Medical Reviewer: Finke, Amy, RN, BSN © 2000-2015 The StayWell Company, LLC. 780 Township Line Road, Yardley, PA 19067. All rights reserved. This information is not intended as a substitute for professional medical care. Always follow your healthcare professional's instructions.
http://healthlibrary.brighamandwomens.org/Library/DiseasesConditions/Pediatric/90,P02410
4.125
The use of uranium as a nuclear fuel and in weapons increases the risk that people may come into contact with it, and the storage of radioactive uranium waste poses an additional environmental risk. However, radioactivity is not the only problem related to contact with uranium; the toxicity of this metal is generally more dangerous to human health. Researchers are still looking for simple, effective methods for the sensitive detection and effective treatment of uranium poisoning. Researchers led by Chuan He at the University of Chicago and Argonne National Laboratory (USA) have now developed a protein that binds to uranium selectively and tightly. As reported in the journal Angewandte Chemie, it is based on a bacterial nickel-binding protein. In oxygen-containing, aqueous environments, uranium normally exists in the form of the uranyl cation (UO₂²⁺), a linear molecule made of one uranium atom and two terminal oxygen atoms. The uranyl ion also likes to form coordination complexes. It prefers to surround itself with up to six ligands arranged in a plane around the ion's "equator". The research team thus chose to develop a protein that offers the uranyl ion a binding cavity in which it is surrounded by the protein's side-groups in the manner it prefers. As a template, the scientists used the protein NikR (nickel-responsive repressor) from E. coli, a regulator that reacts to nickel ions. When NikR is loaded with nickel ions, it binds to a special DNA sequence. This represses transcription of the neighboring genes, which code for proteins involved in nickel uptake. If no nickel is present in the bacteria, NikR does not bind to the DNA. The nickel ion is located in a binding cavity in which it is surrounded by a square-planar arrangement of binding groups. By using several mutation steps, the researchers generated a new protein that can bind uranium instead of nickel. Only three amino acids had to be changed. In the specially designed cavity, the uranyl group has six binding partners that surround it equatorially. In addition, there are spaces for the two terminal oxygen atoms of uranyl. This NikR mutant only binds to DNA in the presence of uranyl, not in the presence of nickel or other metal ions. This confirms its selectivity for uranyl and may make it useful for the detection of uranyl and nuclear waste bioremediation. It also represents the first step towards developing potential protein- or peptide-based agents for treatment of uranium poisoning. Contact: Chuan He
http://www.bio-medicine.org/biology-news-1/A-pocketful-of-uranium-7056-1/
4.09375
Contributor: C. Peter Chen The last of the major conferences during WW2 was held at Potsdam, code named Terminal. Immediately west of Berlin, President Truman was given a chance to tour the ravaged German capital while he waited for Stalin's arrival (the Russian leader was a day late). The meeting was held at the undamaged Cecilienhof Palace. Stalin's late arrival gave Truman's scientists one extra day to work on the Manhattan Project, and that one extra day seemed to be just enough for Oppenheimer's team to give Truman the result he wanted: On the same day that the leaders met at Potsdam, a successful atomic detonation was achieved in the New Mexico desert near Alamogordo under the code name Operation Trinity. By this point, the Americans had learned that Japan wished to end the war, partly from Japan's unrealistic pleas for Moscow to mediate a peace settlement between Japan and the Allied powers. However, the Americans also understood that, if the war could not be stopped, many in Japan were prepared to fight to the bitter end, and the losses on both sides would be tremendous should landings on the home islands become necessary. Understanding this about Japan, at Potsdam Truman made sure that Stalin would hold true to his promise that Russia would declare war on Japan three months after the surrender of Germany, despite the news of the successful test atomic explosion; Truman was keeping his options open. On 26 July, agreements were reached: - Reversion of all German annexations in Europe after 1937 and separation of Austria from Germany. - Statement of aims of the occupation of Germany by the Allies: demilitarization, denazification, democratization and decartelization. - The Potsdam Agreement, which called for the division of Germany and Austria into four occupation zones (agreed on earlier at the Yalta Conference), and the similar division of Berlin and Vienna into four zones. - Agreement on prosecution of Nazi war criminals. - The establishment of the Oder-Neisse line as the provisional border between Germany and Poland. - The expulsion of the German populations remaining outside the borders of Germany. - Agreement on war reparations. The Allies estimated their losses and damages at 200 billion dollars. On the insistence of the West, Germany was obliged to pay off only 20 billion in German property, current industry products, and work force (however, the Cold War prevented the full payment). The Potsdam Declaration was also written (by Truman and Churchill, with input from Chiang Kaishek) and was broadcast to the Japanese people by radio and dropped in pamphlets, both in the Japanese language. It promised "prompt and utter destruction" unless Japan forever renounced militarism, gave up the war criminals, returned all territories conquered since 1895, and surrendered unconditionally. Prime Minister Admiral Suzuki, upon hearing the declaration, was purposefully ambiguous in his response while the cabinet debated. Suzuki was buying time for himself before writing up his official response to Truman, Churchill, and Chiang. However, on the American side, this delay was completely misinterpreted as Japan's arrogance in continuing the war by ignoring the declaration. Historian Dan van der Vat commented: "Seldom can a misconstrued adverbial nuance have had such devastating consequences". Source: The Pacific Campaign.
Potsdam Conference Timeline
17 Jul 1945: At the Potsdam Conference in Germany, top Allied leadership set up a Control Council to administer occupied Germany.
18 Jul 1945: In Germany, the second plenary session of the Potsdam Conference was conducted.
20 Jul 1945: At Potsdam, Germany, Harry Truman declared that the Allies would demand no territory upon victory.
26 Jul 1945: The Potsdam Ultimatum was issued, threatening Japan with "utter destruction" if it did not surrender unconditionally.
http://ww2db.com/battle_spec.php?battle_id=81
4.40625
El dia de los muertos Teacher Resources Find El dia de los muertos educational ideas and activities Showing 1 - 20 of 131 resources Dia de los Muertos Educator Resource Guide What are the origins of el Dia de los Muertos, and how is this tradition observed in contemporary celebrations? With a variety of lesson plans and suggested hands-on activities, here is an excellent resource to reference as you prepare... 4th - 7th Social Studies & History CCSS: Adaptable Dia de los Muertos Sugar Skulls Students research information about the Day of the Dead (Dia de los Muertos), a major celebration in Mexican culture, and compare it to similar holidays in other cultures. They discover various folk arts and festive traditions associated... 1st - 12th Social Studies & History Dramatic Day of the Dead Designs Young scholars research customs and activities associated with the Mexican celebration of Dia de los Muertos (Day of the Dead). Students then analyze their favorite aspects of the holiday and represent them in drawings with bilingual... 1st - 6th Social Studies & History What is el Dia de los Muertos? Students explore the Mexican celebration el Dia de los Muertos. In this Mexican celebration lesson, students discuss ways people in the US honor the dead. Students compare and contrast Mexican holidays and American holidays. Students... 4th - 6th Social Studies & History Dia de los Muertos: Celebrating and Remembering Help scholars understand the history, geography, traditions, and art of Dia de los Muertos, the Day of the Dead. Find background information for your reference as well as a detailed cross-curricular lesson plan. Learners compare... K - 2nd Social Studies & History Claycrete Calaveras - Dia de los Muertos Students create skeletons to celebrate the Day of the Dead. In this visual arts lesson, students explore the importance of the Day of the Dead celebrations in the Mexican culture. They create skeletons and decorate them with paint,... 3rd - 7th Visual & Performing Arts Día de los Muertos Teacher Packet Learn about Dia de los Muertos, the Day of the Dead, through authentic vocabulary activities, creating Papel Picado, creating Calavera masks, and making skeleton puppets. Designed to be adaptable to many grade levels, you'll find... K - 12th Social Studies & History Day of the Dead ( Dia de los Muertos) Students examine information about a previously chosen aspect of Day of the Dead and they evaluate a webpage analysis. They create a project about their aspect and prepare a presentation for the class. They complete a self-evaluation... 8th - 10th Social Studies & History
http://www.lessonplanet.com/lesson-plans/el-dia-de-los-muertos
4.25
Relative atomic mass A relative atomic mass (also called atomic weight; symbol: Ar) is a measure of how heavy atoms are. It is the ratio of the average mass per atom of an element from a given sample to 1/12 the mass of a carbon-12 atom. In other words, a relative atomic mass tells you the number of times an average atom of an element from a given sample is heavier than one-twelfth of an atom of carbon-12. The word relative in relative atomic mass refers to this scaling relative to carbon-12. Relative atomic mass values are ratios expressed as dimensionless numbers, numbers with no units. Relative atomic mass is the same as atomic weight, which is the older term. The number of protons an atom has defines what element it is. However, most elements in nature consist of atoms with different numbers of neutrons. An atom of an element with a certain number of neutrons is called an isotope. For example, the element thallium has two common isotopes: thallium-203 and thallium-205. Both isotopes of thallium have 81 protons, but thallium-205 has 124 neutrons, 2 more than thallium-203, which has 122. Each isotope has its own mass, called its isotopic mass. A relative isotopic mass is the mass of an isotope relative to 1/12 the mass of a carbon-12 atom. The relative isotopic mass of an isotope is roughly the same as its mass number, which is the number of protons and neutrons in the nucleus. Like relative atomic mass values, relative isotopic mass values are ratios with no units. We can find the relative atomic mass of a sample of an element by working out the abundance-weighted mean of the relative isotopic masses. For example, if a sample of thallium is made up of 30% thallium-203 and 70% thallium-205, the relative atomic mass of the sample is (0.30 × 203) + (0.70 × 205) = 204.4. Two samples of an element that consists of more than one isotope, collected from two widely spaced sources on Earth, are expected to have slightly different relative atomic masses. This is because the proportions of each isotope are slightly different at different locations. A standard atomic weight is the mean value of relative atomic masses of a number of normal samples of the element. Standard atomic weight values are published at regular intervals by the Commission on Isotopic Abundances and Atomic Weights of the International Union of Pure and Applied Chemistry (IUPAC). The standard atomic weight for each element is on the periodic table. Often, the term relative atomic mass is used to mean standard atomic weight. This is not quite correct, because relative atomic mass is a less specific term that refers to individual samples. Individual samples of an element could have a relative atomic mass different to the standard atomic weight for the element. For example, a sample from another planet could have a relative atomic mass very different to the standard Earth-based value. Relative atomic mass is not the same as: - atomic mass (symbol: ma), which is the mass of a single atom, commonly expressed in unified atomic mass units - mass number (symbol: A), which is the sum of the number of protons and the number of neutrons in the nucleus of an atom - atomic number (symbol: Z), which is the number of protons in the nucleus of an atom. References - "Atomic weight: The Name, its History, Definition, and Units". Commission on Isotopic Abundances and Atomic Weights of the International Union of Pure and Applied Chemistry. https://web.archive.org/web/20131215231731/http://www.ciaaw.org/atomic_weights2.htm. Retrieved 2016-01-07. - Daintith, John, ed. (2008).
A Dictionary of Chemistry (Sixth ed.). Oxford University Press. p. 457. ISBN 978-0-19-920463-2. - Salters Advanced Chemistry: Chemical Ideas (Third ed.). Heinemann. 2008. ISBN 978-0-435631-49-9. - Moore, John T. (2010). Chemistry Essentials For Dummies. Wiley. p. 44. ISBN 978-0-470-61836-3.
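The abundance-weighted mean in the thallium example above can be made concrete with a short Python sketch; as in the article, the relative isotopic masses are approximated by the mass numbers 203 and 205:
# Abundance-weighted mean of relative isotopic masses (thallium example).
isotopes = {203: 0.30, 205: 0.70}   # mass number -> fractional abundance
Ar = sum(mass * abundance for mass, abundance in isotopes.items())
print(f"Relative atomic mass of this sample: {Ar:.1f}")  # 204.4
Using the more precise relative isotopic masses (about 202.97 and 204.97) gives essentially the same result, roughly 204.4.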
https://simple.wikipedia.org/wiki/Relative_atomic_mass
4.46875
Long Vowel Teacher Resources Find Long Vowel educational ideas and activities Showing 1 - 20 of 891 resources Phonics Instruction: Long Vowel Sound, Silent E Students explore language arts by participating in a class word identification game. In this phonics lesson, students read several words in class and identify the different sounds between short and long vowel words. Students complete a... 2nd - 4th Visual & Performing Arts We Flew With a Baboon More vowels? Elementary schoolers recognize how vowel patterns change a short vowel sound into a long vowel sound. With an emphasis on the /oo/ that makes the long U sound, kids identify the phoneme and letter combination through... 1st - 2nd English Language Arts CCSS: Adaptable Practicing Short and Long Vowel Sounds What are the differences between short and long vowel sounds? The class participates in a teacher-led lesson plan in which they add letters to words as they evolve from a three-letter, short-vowel word to a longer long-vowel word. They... 2nd English Language Arts CCSS: Adaptable
http://www.lessonplanet.com/lesson-plans/long-vowel
4
The largest single-dish radio telescope in the world. It came into operation in 1963 and is operated by Cornell University for the National Science Foundation. Occupying a large karst sinkhole in the hills south of Arecibo in Puerto Rico, its area of almost 9 hectares is greater than that of all other such instruments in the world combined. The surface of Arecibo's 305-meter (1,000-foot) fixed, spherical dish is made from almost 40,000 perforated aluminum panels, each measuring 1 meter by 2 meters (3 feet by 6 feet), supported by a network of steel cables strung across the underlying depression. Suspended 150 meters (450 feet) above the reflector is a 900-ton platform which houses the receiving equipment. Although the telescope is not steerable, some directionality is obtained by moving the feed antenna (upgraded in 1996). The immense size and accurate configuration of the dish allows extremely faint signals to be detected. For this reason, it has been used extensively in SETI investigations and in the first attempt at CETI (see Arecibo Message). It also featured in the film Contact. The giant radio telescope dish at the Arecibo Observatory in Puerto Rico is nestled in a natural sinkhole. The Search for Extraterrestrial Intelligence (SETI) has been using the telescope to search for radio signals from space since 1992. The Arecibo radio telescope is located in Puerto Rico about 10 km south of the town of Arecibo, which is located on the north coast of the island. It is operated by Cornell University under cooperative agreement with the National Science Foundation. Arecibo is one of the most famous such telescopes in the world, distinguished by its enormous size; the main collecting dish is 305 meters in diameter, constructed inside the depression left by a karst sinkhole. It is the largest curved focusing dish on Earth, giving it the largest photon-gathering capacity. Arecibo's dish surface is made of 38,778 perforated aluminum panels, each measuring about 3 feet by 6 feet, supported by a mesh of steel cables. It is a spherical reflector (as opposed to a parabolic reflector). This form is due to the method used to aim the telescope; Arecibo's dish is fixed in place, but the receiver at its focal point is repositioned to intercept signals reflected from different directions by the spherical dish surface. The receiver is located on a 900-ton platform which is suspended 450 feet in the air above the dish by 18 cables running from three reinforced concrete towers, one of which is 365 feet high and the other two of which are 265 feet high (the tops of the three towers are at the same elevation). The platform has a 93 meter long rotating bow-shaped track called the azimuth arm on which receiving antennae, secondary and tertiary reflectors are mounted. This allows the telescope to observe any region of the sky within a forty degree cone of visibility about the local zenith (between -1 and 38 degrees of declination). Puerto Rico's location near the equator allows Arecibo to view all of the planets in the solar system. The construction of Arecibo was initiated by Professor William E. Gordon of Cornell University, who originally intended to use it for the study of Earth's ionosphere. Originally, a fixed parabolic reflector was envisioned, pointing in a fixed direction with a 500 foot tower to hold equipment at the focus. 
This design would have had very limited use for other potential areas of research, such as planetology and radio astronomy, which require the ability to point at different positions in the sky and to track those positions for an extended period as Earth rotates. Ward Low of ARPA pointed out this flaw, and put Gordon in touch with the Air Force Cambridge Research Laboratory (AFCRL) in Boston, Massachusetts, where a group headed by Phil Blacksmith was working on spherical reflectors and another group was studying the propagation of radio waves in and through the upper atmosphere. Cornell University proposed the project to ARPA in the summer of 1958 and a contract was signed between the AFCRL and the University in November of 1959. Construction began in the summer of 1960, with the official opening taking place on November 1, 1963. Arecibo has been instrumental in many significant scientific discoveries. On April 7, 1964, shortly after its inauguration, Gordon H. Pettengill's team used Arecibo to determine that the rotation rate of Mercury was not 88 days, as previously thought, but only 59 days. Arecibo also had military intelligence uses, for example locating Soviet radar installations by detecting their signals bouncing off the Moon. Arecibo has undergone several significant upgrades over its lifespan.
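As a rough check on the dimensions quoted above, the following Python sketch computes the dish's flat aperture area from the 305-meter diameter, and then the curved surface area of the spherical cap using a commonly quoted radius of curvature of about 265 meters and a dish depth of about 51 meters (two figures that are assumptions here, not taken from the text):
import math

diameter_m = 305.0
aperture_area = math.pi * (diameter_m / 2) ** 2
print(f"Flat aperture: {aperture_area:,.0f} m^2 = {aperture_area / 10_000:.1f} ha")  # ~7.3 ha

# Surface area of a spherical cap: A = 2 * pi * R * h
R, h = 265.0, 51.0   # assumed sphere radius and cap depth, meters
cap_area = 2 * math.pi * R * h
print(f"Curved surface: {cap_area:,.0f} m^2 = {cap_area / 10_000:.1f} ha")           # ~8.5 ha
The comparison suggests that the "almost 9 hectares" figure refers to the curved reflecting surface rather than the flat aperture.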
http://structurae.net/structures/arecibo-radio-telescope
4.125
One of the smallest dinosaur skulls ever discovered has been identified and described by a team of scientists from London, Cambridge and Chicago. The skull would have been only 45 millimeters (less than two inches) in length. It belonged to a very young Heterodontosaurus, an early dinosaur. This juvenile weighed about 200 grams, less than two sticks of butter. In the Fall issue of the Journal of Vertebrate Paleontology, the researchers describe important findings from this skull that suggest how and when the ornithischians, the family of herbivorous dinosaurs that includes Heterodontosaurus, made the transition from eating meat to eating plants. "It's likely that all dinosaurs evolved from carnivorous ancestors," said study co-author Laura Porro, a post-doctoral student at the University of Chicago. "Since heterodontosaurs are among the earliest dinosaurs adapted to eating plants, they may represent a transition phase between meat-eating ancestors and more sophisticated, fully-herbivorous descendents." The teeth suggest Heterodontosaurus practiced occasional omnivory: the canines were used for defense or for adding small animals such as insects to a diet composed mainly of plants. Credit: Natural History Museum "This juvenile skull," she added, "indicates that these dinosaurs were still in the midst of that transition." Heterodontosaurus lived during the Early Jurassic period (about 190 million years ago) of South Africa. Adult Heterodontosaurs were turkey-sized animals, reaching just over three feet in length and weighing around five to six pounds. Because their fossils are very rare, Heterodontosaurus and its relatives (the heterodontosaurs) are poorly understood compared to later and larger groups of dinosaurs. "There were only two known fossils of Heterodontosaurus, both in South Africa and both adults," said Porro, who is completing her doctoral dissertation on feeding in Heterodontosaurus under the supervision of David Norman, researcher at the University of Cambridge and co-author of the study. "There were rumors of a juvenile heterodontosaur skull in the collection of the South African Museum," she said, "but no one had ever described it." Study co-author Laura Porro, a post-doctoral student at the University of Chicago, in the lab with a model skull from a full sized Heterodontosaurus. Credit: University of Chicago Medical Center As part of her research, Porro visited the Iziko South African Museum, Cape Town, to examine the adult fossils. When she was there, she got permission to "poke around" in the Museum's collections. While going through drawers of material found during excavations in the 1960s, she found two more heterodontosaur fossils, including the partial juvenile skull. "I didn't recognize it as a dinosaur at first," she said, "but when I turned it over and saw the eye looking straight at me, I knew exactly what it was." "This discovery is important because for the first time we can examine how Heterodontosaurus changed as it grew," said the study's lead author, Richard Butler of the Natural History Museum, in London. "The juvenile Heterodontosaurus had relatively large eyes and a short snout when compared to an adult," he said, "similar to the differences we see between puppies and fully-grown dogs." A specialist on the mechanics of feeding, Porro was particularly interested in the new fossil's teeth. 
Heterodontosaurs, which means "different-toothed lizards," have an unusual combination of teeth, with large fang-like canines at the front of their jaws and worn, molar-like grinding teeth at the back. In contrast, most reptiles have teeth which change little in shape along the length of the jaw. This bizarre suite of teeth has led to debate over what heterodontosaurs ate. Some scientists think heterodontosaurs were omnivores who used their differently-shaped teeth to eat both plants and small animals. Others contend that heterodontosaurs were herbivores who ate only plants and that the canines were sexually dimorphic--present only in males, as in living warthogs. In that scenario, the canines could have been used as weapons by rival males in disputes over mates and territories. Porro and colleagues found that the juvenile already had a fully-developed set of canines. "The fact that canines are present at such an early stage of growth strongly suggests that this is not a sexually dimorphic character because such characters tend to appear later in life," said Butler. Instead, the researchers suspect that the canines were used as defensive weapons against predators, or for adding occasional small animals such as insects, small mammals and reptiles to a diet composed mainly of plants--what the authors refer to as "occasional omnivory." The study created a new mystery, however. With the aid of X-rays and CT scans, Porro found a complete lack of replacement teeth in the adult and juvenile skulls. Most reptiles, including living crocodiles and lizards, replace their teeth constantly throughout their lives, so that sharp, unworn teeth are always available. The same was true for dinosaurs. Most mammals, on the other hand, replace their teeth only once during their lives, allowing the upper and lower teeth to develop a tight, precise fit. Heterodontosaurus was more similar to mammals, not only in the specialized, variable shape of its teeth but also in replacing its teeth slowly, if at all, and developing tight tooth-to-tooth contact. "Tooth replacement must have occurred during growth," the authors conclude, "however, evidence of continuous tooth replacement appears to be absent, in both adult and juvenile specimens." The research was funded by the Royal Society, Cambridge University and the Gates Cambridge Trust.
http://www.science20.com/news_releases/heterodontosaurus_tiny_two_inch_dinosaur_has_big_insight_evolution_plant_eaters
4.1875
Swedish emigration to the United States During the Swedish emigration to the United States in the 19th and early 20th centuries, about 1.3 million Swedes left Sweden for the United States. The main pull was the availability of low-cost, high-quality farm land in the upper Midwest (the area from Illinois to Montana), and high-paying jobs in mechanical industries and factories in Chicago, Minneapolis, Worcester and many smaller cities. Religious freedom was also a pull factor for some. Most migration was of the chain form, with early settlers giving reports and recommendations (and travel money) to relatives and friends in Sweden, who followed the same route to new homes. A major push factor inside Sweden was population growth and the growing shortage of good farm lands. Additional factors in the earliest stages of emigration included crop failures, the lack of industrial jobs in urban Sweden, and for some the wish to escape the authority of an established state church. After 1870, transatlantic fares were cheap. By the 1880s, American railroads had agents in Sweden who offered package deals on one-way tickets for entire families. The railroad would ship the family, their house furnishings and farm tools, and provide a financial deal to spread out payments for the farm over a period of years. Swedish migration peaked between 1870 and 1900. By 1890, the U.S. census reported a Swedish-American population of nearly 800,000. Many of the immigrants became classic pioneers, clearing and cultivating the prairies of the Great Plains, while others remained in the cities, particularly Chicago. Single young women usually went straight from agricultural work in the Swedish countryside to jobs as housemaids. Many established Swedish Americans visited the old country in the later 19th century, their narratives illustrating the difference in customs and manners. Some made the journey with the intention of spending their declining years in Sweden. After a dip in the 1890s, emigration rose again, causing national alarm in Sweden. At this time, Sweden's economy had developed substantially, but the higher wages prevailing in the United States retained their attractiveness. A broad-based parliamentary emigration commission was instituted in 1907. It recommended social and economic reform in order to reduce emigration by "bringing the best sides of America to Sweden". The commission's major proposals were rapidly implemented: universal male suffrage, better housing, general economic development, and broader popular education, measures which can also be attributed to numerous other factors. The effect of these measures on migration is hard to assess, as World War I (1914–1918) broke out the year after the commission published its last volume, reducing emigration to a mere trickle. From the mid-1920s, there was no longer a Swedish mass emigration. Early history: the Swedish-American dream The Swedish West India Company established a colony on the Delaware River in 1638, naming it New Sweden. A small, short-lived colonial settlement, New Sweden contained at its height only some 600 Swedish and Finnish settlers (Finland then being part of Sweden). It was lost to the Dutch in New Netherland in 1655. Nevertheless, the descendants of the original colonists maintained spoken Swedish until the late 18th century. 
Modern-day reminders of the history of New Sweden are reflected in the presence of the American Swedish Historical Museum in Philadelphia, Fort Christina State Park in Wilmington, Delaware, and The Printzhof in Essington, Pennsylvania. The historian H. A. Barton has suggested that the greatest significance of New Sweden was the strong and long-lasting interest in America that the colony generated in Sweden. America was seen as the standard-bearer of liberalism and personal freedom, and became an ideal for liberal Swedes. Their admiration for America was combined with the notion of a past Swedish Golden Age with ancient Nordic ideals. Supposedly corrupted by foreign influences, the timeless "Swedish values" would be recovered by Swedes in the New World. This remained a fundamental theme of Swedish, and later Swedish-American, discussion of America, though the recommended "timeless" values changed over time. In the 17th and 18th centuries, Swedes who called for greater religious freedom would often refer to America as the supreme symbol of it. The emphasis shifted from religion to politics in the 19th century, when liberal citizens of the hierarchic Swedish class society looked with admiration to American republicanism and civil rights. In the early 20th century, the Swedish-American dream even embraced the idea of a welfare state responsible for the well-being of all its citizens. Underneath these shifting ideas ran from the start the current which carried all before it in the later 20th century: America as the symbol and dream of unfettered individualism. Swedish debate about America remained mostly theoretical before the 19th century, since very few Swedes had any personal experience of the nation. Emigration was illegal and population was seen as the wealth of nations. However, the Swedish population doubled between 1750 and 1850, and as population growth outstripped economic development, it gave rise to fears of overpopulation based on the influential population theory of Thomas Malthus. In the 1830s, the laws against emigration were repealed. Akenson argues that hard times in Sweden before 1867 produced a strong push effect, but that for cultural reasons most Swedes refused to emigrate and clung on at home. Akenson says the state wanted to keep its population high: "The upper classes' need for a cheap and plentiful labor force, the instinctive willingness of the clergy of the state church to discourage emigration on both moral and social grounds, and the deference of the lower orders to the arcade of powers that hovered above them—all these things formed an architecture of cultural hesitancy concerning emigration." A few "countercultural" deviants from the mainstream did leave and showed the way. The severe economic hardship of the "Great Deprivation" of 1867 to 1869 finally overcame the reluctance, and the floodgates opened to produce an "emigration culture". European mass emigration: push and pull Large-scale European emigration to the United States started in the 1840s in Britain, Ireland and Germany. That was followed by a rising wave after 1850 from most Northern European countries, and in turn by Central and Southern Europe. Research into the forces behind this European mass emigration has relied on sophisticated statistical methods. One theory which has gained wide acceptance is Jerome's analysis in 1926 of the "push and pull" factors—the impulses to emigration generated by conditions in Europe and the U.S. respectively. 
Jerome found that fluctuations in emigration co-varied more with economic developments in the U.S. than in Europe, and deduced that the pull was stronger than the push. Jerome's conclusions have been challenged, but still form the basis of much work on the subject. Emigration patterns in the Nordic countries—Finland, Sweden, Norway, Denmark, and Iceland—show striking variation. Nordic mass emigration started in Norway, which also retained the highest rate throughout the century. Swedish emigration got underway in the early 1840s, and reached the third-highest rate in all of Europe, after Ireland and Norway. Denmark had a consistently low rate of emigration, while Iceland had a late start but soon reached levels comparable to Norway. Finland, which was then part of the Russian Empire and whose mass emigration did not start until the late 1880s, is usually classified as part of the Eastern European wave. Crossing the Atlantic The first European emigrants travelled in the holds of sailing cargo ships. With the advent of the age of steam, an efficient transatlantic passenger transport mechanism was established at the end of the 1860s. It was based on huge ocean liners run by international shipping lines, most prominently Cunard, White Star, and Inman. The speed and capacity of the large steamships meant that tickets became cheaper. From the Swedish port towns of Stockholm, Malmö and Gothenburg, transport companies operated various routes, some of them with complex early stages and consequently a long and trying journey on the road and at sea. Thus North German transport agencies relied on the regular Stockholm–Lübeck steamship service to bring Swedish emigrants to Lübeck, and from there on German train services to take them to Hamburg or Bremen. There they would board ships to the British ports of Southampton and Liverpool and change to one of the great transatlantic liners bound for New York. The majority of Swedish emigrants, however, travelled from Gothenburg to Hull, UK, on dedicated boats run by the Wilson Line, then by train across Britain to Liverpool and the big ships. During the later 19th century, the major shipping lines financed Swedish emigrant agents and paid for the production of large quantities of emigration propaganda. Much of this promotional material, such as leaflets, was produced by immigration promoters in the U.S. Propaganda and advertising by shipping line agents was often blamed for emigration by the conservative Swedish ruling class, which grew increasingly alarmed at seeing the agricultural labor force leave the country. It was a Swedish 19th-century cliché to blame the falling ticket prices and the pro-emigration propaganda of the transport system for the craze of emigration, but modern historians have varying views about the real importance of such factors. Brattne and Åkerman have examined the advertising campaigns and the ticket prices as a possible third force between push and pull. They conclude that neither advertisements nor pricing had any decisive influence on Swedish emigration. While the companies remained unwilling, as of 2007, to open their archives to researchers, the limited sources available suggest that ticket prices did drop in the 1880s, but remained on average artificially high because of cartels and price-fixing. On the other hand, H. A. Barton states that the cost of crossing the Atlantic dropped drastically between 1865 and 1890, encouraging poorer Swedes to emigrate. 
The research of Brattne and Åkerman has shown that the leaflets sent out by the shipping line agents to prospective emigrants would not so much celebrate conditions in the New World as emphasize the comforts and advantages of the particular company. Descriptions of life in America were unvarnished, and the general advice to emigrants brief and factual. Newspaper advertising, while very common, tended to be repetitive and stereotyped in content. Swedish mass migration took off in the spring of 1841 with the departure of Uppsala University graduate Gustaf Unonius (1810–1902) together with his wife, a maid, and two students. This small group founded a settlement they named New Upsala in Waukesha County, Wisconsin, and began to clear the wilderness, full of enthusiasm for frontier life in "one of the most beautiful valleys the world can offer". After moving to Chicago, Unonius soon became disillusioned with life in the U.S., but his reports in praise of the simple and virtuous pioneer life, published in the liberal newspaper Aftonbladet, had already begun to draw Swedes westward. The rising Swedish exodus was caused by economic, political, and religious conditions affecting the rural population in particular. Europe was in the grip of an economic depression. In Sweden, population growth and repeated crop failures were making it increasingly difficult to make a living from the tiny land plots on which at least three quarters of the inhabitants depended. Rural conditions were especially bleak in the stony and unforgiving Småland province, which became the heartland of emigration. The American Midwest was an agricultural antipode to Småland; it, Unonius reported in 1842, "more closely than any other country in the world approaches the ideal which nature seems to have intended for the happiness and comfort of humanity." Prairie land in the Midwest was ample, loamy, and government-owned. From 1841 it was sold to squatters for $1.25 per acre ($29 per acre, or $72 per hectare, as of 2016), following the Preemption Act of 1841 (later replaced by the Homestead Act). The inexpensive and fertile land of Illinois, Iowa, Minnesota and Wisconsin was irresistible to landless and impoverished European peasants. It also attracted more well-established farmers. The political freedom of the American republic exerted a similar pull. Swedish peasants were some of the most literate in Europe, and consequently had access to the European egalitarian and radical ideas that culminated in the Revolutions of 1848. The clash between Swedish liberalism and a repressive monarchist regime raised political awareness among the disadvantaged, many of whom looked to the U.S. to realize their republican ideals. Dissenting religious practitioners also widely resented the treatment they received from the Lutheran State Church through the Conventicle Act. Conflicts between local worshipers and the new churches were most explosive in the countryside, where dissenting pietist groups were more active and more directly under the eye of local law enforcement and the parish priest. Before non-Lutheran churches were granted toleration in 1809, clampdowns on illegal forms of worship and teaching often provoked whole groups of pietists to leave together, intent on forming their own spiritual communities in the new land. The largest contingent of such dissenters, 1,500 followers of Eric Jansson, left in the late 1840s and founded a community in Bishop Hill, Illinois. 
The first Swedish emigrant guidebook was published as early as 1841, the year Unonius left, and nine handbooks were published between 1849 and 1855. Substantial groups of lumberjacks and iron miners were recruited directly by company agents in Sweden. Agents recruiting construction workers for American railroads also appeared, the first in 1854, scouting for the Illinois Central Railroad. The Swedish establishment disapproved intensely of emigration. Seen as depleting the labor force and as a defiant act among the lower orders, emigration alarmed both the spiritual and the secular authorities. Many emigrant diaries and memoirs feature an emblematic early scene in which the local clergy warns travellers against risking their souls among foreign heretics. The conservative press described emigrants as lacking in patriotism and moral fibre: "No workers are more lazy, immoral and indifferent than those who immigrate to other places." Emigration was denounced as an unreasoning "mania" or "craze", implanted in an ignorant populace by "outside agents". The liberal press retorted that the "lackeys of monarchism" failed to take into account the miserable conditions in the Swedish countryside and the backwardness of Swedish economic and political institutions. "Yes, emigration is indeed a 'mania'", wrote the liberal Göteborgs Handels- och Sjöfartstidning sarcastically, "The mania of wanting to eat one's fill after one has worked oneself hungry! The craze of wanting to support oneself and one's family in an honest manner!" The great famine of 1866–68, and the distrust and discontent concerning the way the establishment distributed relief, are estimated to have contributed greatly to the rise in Swedish emigration to the United States. Late 19th century Swedish emigration to the United States reached its height in the 1870–1900 era. The size of the Swedish-American community in 1865 is estimated at 25,000 people, a figure soon to be surpassed by the yearly Swedish immigration. By 1890 the U.S. census reported a Swedish-American population of nearly 800,000, with immigration peaking in 1869 and again in 1887. Most of this influx settled in the North. The great majority of them had been peasants in the old country, pushed away from Sweden by disastrous crop failures and pulled towards America by the cheap land resulting from the 1862 Homestead Act. Most immigrants became pioneers, clearing and cultivating the virgin land of the Midwest and extending the pre-Civil War settlements further west, into Kansas and Nebraska. Once sizable Swedish farming communities had formed on the prairie, the greatest impetus for further peasant migration came through personal contacts. The iconic "America-letter" to relatives and friends at home spoke directly from a position of trust and shared background, carrying immediate conviction. At the height of migration, familial America-letters could lead to chain reactions which would all but depopulate some Swedish parishes, dissolving tightly knit communities which then re-assembled in the Midwest. Other forces worked to push the new immigrants towards the cities, particularly Chicago. According to historian H. Arnold Barton, the cost of crossing the Atlantic dropped by more than half between 1865 and 1890, which led to progressively poorer Swedes contributing a growing share of immigration (but compare Brattne and Åkerman, see "Crossing the Atlantic" above). The new immigrants were increasingly younger and unmarried. 
With the shift from family to individual immigration came a faster and fuller Americanization, as young, single individuals with little money took whatever jobs they could get, often in cities. Large numbers even of those who had been farmers in the old country made straight for American cities and towns, living and working there at least until they had saved enough capital to marry and buy farms of their own. A growing proportion stayed in urban centers, combining emigration with the flight from the countryside which was happening in the homeland and all across Europe. Single young women, a group Barton considers particularly significant, most commonly moved straight from field work in rural Sweden to jobs as live-in housemaids in urban America. "Literature and tradition have preserved the often tragic image of the pioneer immigrant wife and mother", writes Barton, "bearing her burden of hardship, deprivation and longing on the untamed frontier ... More characteristic among the newer arrivals, however, was the young, unmarried woman ... As domestic servants in America, they ... were treated as members of the families they worked for and like 'ladies' by American men, who showed them a courtesy and consideration to which they were quite unaccustomed at home." They found employment easily, as Scandinavian maids were in high demand, and learned the language and customs quickly. In contrast, newly arrived Swedish men were often employed in all-Swedish work gangs. The young women usually married Swedish men, and brought with them in marriage an enthusiasm for ladylike, American manners and middle-class refinements. Many admiring remarks are recorded from the late 19th century about the sophistication and elegance that simple Swedish farm girls would gain in a few years, and about their unmistakably American demeanor. As ready workers, the Swedes were generally welcomed by the Americans, who often singled them out as the "best" immigrants. There was no significant anti-Swedish nativism of the sort that attacked Irish, German and, especially, Chinese newcomers. The Swedish style was more familiar: "They are not peddlers, nor organ grinders, nor beggars; they do not sell ready-made clothing nor keep pawn shops", wrote the Congregational missionary M. W. Montgomery in 1885; "they do not seek the shelter of the American flag merely to introduce and foster among us ... socialism, nihilism, communism ... they are more like Americans than are any other foreign peoples." A number of well-established and longtime Swedish Americans visited Sweden in the 1870s, making comments that give historians a window on the cultural contrasts involved. A group from Chicago made the journey in an effort to remigrate and spend their later years in the country of their birth, but changed their minds when faced with the realities of 19th-century Swedish society. Uncomfortable with what they described as the social snobbery, pervasive drunkenness, and superficial religious life of the old country, they returned promptly to America. The most notable visitor was Hans Mattson (1832–1893), an early Minnesota settler who had served as a colonel in the Union Army and had been Minnesota's secretary of state. He visited Sweden in 1868–69 to recruit settlers on behalf of the Minnesota Immigration Board, and again in the 1870s to recruit for the Northern Pacific Railroad. 
Viewing Swedish class snobbery with indignation, Mattson wrote in his Reminiscences that this contrast was the key to the greatness of America, where "labor is respected, while in most other countries it is looked down upon with slight". He was sardonically amused by the ancient pageantry of monarchy at the ceremonial opening of the Riksdag: "With all respects for old Swedish customs and manners, I cannot but compare this pageant to a great American circus—minus the menagerie, of course." Mattson's first recruiting visit came immediately after consecutive seasons of crop failure in 1867 and 1868, and he found himself "besieged by people who wished to accompany me back to America." He noted that "the laboring and middle classes already at that time had a pretty correct idea of America, and the fate that awaited emigrants there; but the ignorance, prejudice and hatred toward America and everything pertaining to it among the aristocracy, and especially the office holders, was as unpardonable as it was ridiculous. It was claimed by them that all was humbug in America, that it was the paradise of scoundrels, cheats, and rascals, and that nothing good could possibly come out of it." A more recent American immigrant, Ernst Skarstedt, who visited Sweden in 1885, received the same galling impression of upper-class arrogance and anti-Americanism. The laboring classes, in their turn, appeared to him coarse and degraded, drinking heavily in public, speaking in a stream of curses, making obscene jokes in front of women and children. Skarstedt felt surrounded by "arrogance on one side and obsequiousness on the other, a manifest scorn for menial labor, a desire to appear to be more than one was". This traveller too was incessantly hearing American civilization and culture denigrated from the depths of upper-class Swedish prejudice: "If I, in all modesty, told something about America, it could happen that in reply I was informed that this could not possibly be so or that the matter was better understood in Sweden." Swedish emigration dropped dramatically after 1890; return migration rose as conditions in Sweden improved. Sweden underwent a rapid industrialization within a few years in the 1890s, and wages rose, principally in the fields of mining, forestry, and agriculture. The pull from the U.S. declined even more sharply than the Swedish "push", as the best farmland was taken. No longer growing but instead settling and consolidating, the Swedish-American community seemed set to become ever more American and less Swedish. The new century, however, saw a new influx. Parliamentary Emigration Commission 1907–1913 Emigration rose again at the turn of the 20th century, reaching a new peak of about 35,000 Swedes in 1903. Figures remained high until World War I, alarming both conservative Swedes, who saw emigration as a challenge to national solidarity, and liberals, who feared the disappearance of the labor force necessary for economic development. One-fourth of all Swedes had made the United States their home, and a broad national consensus mandated that a Parliamentary Emigration Commission study the problem in 1907. Approaching the task with what Barton calls "characteristic Swedish thoroughness", the Commission published its findings and proposals in 21 large volumes. The Commission rejected conservative proposals for legal restrictions on emigration and in the end supported the liberal line of "bringing the best sides of America to Sweden" through social and economic reform. 
Topping the list of urgent reforms were universal male suffrage, better housing, and general economic development. The Commission especially hoped that broader popular education would counteract "class and caste differences". Class inequality in Swedish society was a strong and recurring theme in the Commission's findings. It appeared as a major motivator in the 289 personal narratives included in the report. These documents, of great research value and human interest today, were submitted by Swedes in Canada and the U.S. in response to requests in Swedish-American newspapers. The great majority of replies expressed enthusiasm for their new homeland and criticized conditions in Sweden. Bitter experiences of Swedish class snobbery still rankled after sometimes 40–50 years in America. Writers recalled the hard work, pitiful wages, and grim poverty of life in the Swedish countryside. One woman wrote from North Dakota of how in her Värmland home parish, she had had to earn her living in peasant households from the age of eight, starting work at four in the morning and living on "rotten herring and potatoes, served out in small amounts so that I would not eat myself sick". She could see "no hope of saving anything in case of illness", but rather could see "the poorhouse waiting for me in the distance". When she was seventeen, her emigrated brothers sent her a prepaid ticket to America, and "the hour of freedom struck". A year after the Commission published its last volume, World War I began and reduced emigration to a mere trickle. From the 1920s, there was no longer a Swedish mass emigration. The influence of the ambitious Emigration Commission in solving the problem is still a matter of debate. Franklin D. Scott has argued in an influential essay that the American Immigration Act of 1924 was the effective cause. Barton, by contrast, points to the rapid implementation of essentially all the Commission's recommendations, from industrialization to an array of social reforms. He maintains that its findings "must have had a powerful cumulative effect upon Sweden's leadership and broader public opinion". The Midwest remained the heartland of the Swedish-American community, but its position weakened in the 20th century: in 1910, 54% of the Swedish immigrants and their children lived in the Midwest, 15% in industrial areas in the East, and 10% on the West Coast. Chicago was effectively the Swedish-American capital, accommodating about 10% of all Swedish Americans—more than 100,000 people—making it the second-largest Swedish city in the world (only Stockholm had more Swedish inhabitants). Defining themselves as both Swedish and American, the Swedish-American community retained a fascination for the old country and their relationship to it. The nostalgic visits to Sweden which had begun in the 1870s continued well into the 20th century, and narratives from these trips formed a staple of the lively Swedish-American publishing companies. The accounts testify to complex feelings, but each contingent of American travellers was freshly indignant at Swedish class pride and Swedish disrespect for women. It was with renewed pride in American culture that they returned to the Midwest. In the 2000 U.S. Census, about four million Americans claimed to have Swedish roots. Minnesota remains by a wide margin the state with the most inhabitants of Swedish descent—9.6% of the population as of 2005. 
The best-known artistic representation of the Swedish mass migration is the epic four-novel suite The Emigrants (1949–1959) by Vilhelm Moberg (1898–1973). Portraying the lives of an emigrant family through several generations, the novels have sold nearly two million copies in Sweden and have been translated into more than twenty languages. The tetralogy has been filmed by Jan Troell as The Emigrants (1971) and The New Land (1972), and forms the basis of Kristina from Duvemåla, a 1995 musical by former ABBA members Benny Andersson and Björn Ulvaeus. In Sweden, the Småland city of Växjö is home to the Swedish Emigrant Institute (Svenska Emigrantinstitutet), founded in 1965 "to preserve records, interviews, and memorabilia relating to the period of major Swedish emigration between 1846 and 1930". The House of the Emigrants (Emigranternas Hus) was founded in Gothenburg, the main port for Swedish emigrants, in 2004. The centre shows exhibitions on migration and has a research hall for genealogy. In the U.S., there are hundreds of active Swedish-American organizations as of 2007, for which the Swedish Council of America functions as an umbrella group. There are Swedish-American museums in Philadelphia, Chicago, Minneapolis, and Seattle. Rural cemeteries such as the Moline Swedish Lutheran Cemetery in central Texas also serve as a valuable record of the first Swedish people to come to America. - Nordstjernan (newspaper) - American Swedish Historical Museum - American Swedish Institute - Swedish colonization of the Americas - Swedish language in the United States - Swedish-American relations - Barton, A Folk Divided, 5–7. - Kälvemark, 94–96. - See Beijbom, "Review". - Barton, A Folk Divided, 11. - Donald Harman Akenson, Ireland, Sweden, and the Great European Migration, 1815-1914 (McGill-Queen's University Press; 2011) p 70 - The pictures originally illustrated a cautionary tale published in 1869 in the Swedish periodical Läsning för folket, the organ of the Society for the Propagation of Useful Knowledge (Sällskapet för nyttiga kunskapers spridande). See Barton, A Folk Divided, 71. - Akenson, Ireland, Sweden, and the Great European Migration, 1815-1914 pp 37-39 - Åkerman, passim. - Norman, 150–153. - Runblom and Norman, 315. - Norman, passim. - Brattne and Åkerman, 179–181. - Brattne and Åkerman, 179–181, 186–189, 199–200. - Barton, 38. - Brattne and Åkerman, 187–192. - Unonius, quoted in Barton, A Folk Divided, 13. - Quoted in Barton, A Folk Divided, 14. - Cipolla, 115, estimates adult literacy in Sweden at 90% in 1850, which places it highest among the European countries he has surveyed. - Gritsch, Eric W. A History of Lutheranism. Minneapolis: Fortress Press, 2002. p. 180. - Barton, A Folk Divided, 15–16. - Barton, A Folk Divided, 17. - Barton, A Folk Divided, 18. - Proclaimed in an article in the newspaper Nya Wermlandstidningen in April 1855; quoted by Barton, A Folk Divided, 20–22. - Göteborgs Handels- och Sjöfartstidning, 1849, quoted in Barton, A Folk Divided, 24. - 1851, quoted and translated by Barton, A Folk Divided, 24. - Häger, Olle; Torell, Carl; Villius, Hans (1978). Ett satans år: Norrland 1867. Stockholm: Sveriges Radio. Libris 8358120. ISBN 91-522-1529-6 (inb.) - The exact figure is 776,093 people (Barton, A Folk Divided, 37). - 1867 and 1868 were the worst years for crop failure, which ruined many smallholders; see Barton, A Folk Divided, 37. - Swenson Center. - Beijbom, "Chicago" - Barton, A Folk Divided, 38–41. - Barton, A Folk Divided, 41. 
- Quoted by Barton, A Folk Divided, 40. - Private letters by Anders Larsson in the 1870s, summarized by Barton, A Folk Divided, 59. - Quoted by Barton, A Folk Divided, 60–61. - Barton, A Folk Divided, 61–62. - Svensk-amerikanska folket i helg och söcken (Ernst Teofil Skarstedt. Stockholm: Björck & Börjesson. 1917) - Barton, A Folk Divided, 80. - 1.4 million first- and second-generation Swedish immigrants lived in the U.S. in 1910, while Sweden's population at the time was 5.5 million; see Beijbom, "Review". - Barton, A Folk Divided, 149. - The phrase is from Ernst Beckman's original liberal parliamentary motion for instituting the Commission; quoted by Barton, A Folk Divided, 149. - Quoted from Volume VII of the Survey by Barton, A Folk Divided, 152. - Barton, A Folk Divided, 165. - For Swedish American publishing, see Barton, A Folk Divided, 212–213, 254. - Barton, A Folk Divided, 103 ff. - American FactFinder, Fact Sheet "Swedish". - American FactFinder: Minnesota, Selected Social Characteristics in the United States, 2005. - Moberg biography by JoAnn Hanson-Stone at the Swedish Emigrant Institute. - "The Swedish Emigrant Institute". UtvandrarnasHus.se. Svenska Emigrantinstitutet. Archived from the original on October 5, 2013. - House of the Emigrants. - Scott, Larry E. "Swedish Texans". University of Texas Institute of Texan Cultures at San Antonio, 2006. - Akenson, Donald Harman. (2011) Ireland, Sweden and the Great European Migration, 1815-1914 (McGill-Queens University Press) - Åkerman, Sune (1976). Theories and Methods of Migration Research in Runblom and Norman, From Sweden to America, 19–75. - American FactFinder, United States Census, 2000. Consulted 30 June 2007. - Andersson, Benny, and Ulvaeus, Björn. Kristina from Duvemåla (musical), consulted 7 May 2007. - Barton, H. Arnold (1994). A Folk Divided: Homeland Swedes and Swedish Americans, 1840–1940. Uppsala: Acta Universitatis Upsaliensis. - Barton, H. Arnold Swedish America in Fifty Years—2050, a paper read to the Swedish American Historical Society on the occasion of the 1996 celebration of the Swedish Immigration Jubilee. Consulted 7 May 2007. - Beijbom, Ulf. Chicago, the Essence of the Promised Land at the Swedish Emigrant Institute. Click on "History", then "Chicago." Consulted 6 May 2007. - Beijbom, Ulf (1996). A Review of Swedish Emigration to America at AmericanWest.com, consulted 2 February 2007. - Brattne, Berit, and Sune Åkerman (1976). The Importance of the Transport Sector for Mass Emigration in Runblom and Norman, From Sweden to America, 176–200. - Cipolla, Carlo (1966). Literacy and Development in the West. Harmondsworth. - Elovson, Harald (1930). Amerika i svensk litteratur 1750–1820. Lund. - Glynn, Irial: Emigration Across the Atlantic: Irish, Italians and Swedes compared, 1800-1950, European History Online, Mainz: Institute of European History, 2011, retrieved: June 16, 2011. - Kälvemark, Ann-Sofie (1976). Swedish Emigration Policy in an International Perspective, 1840–1925, in Runblom and Norman, From Sweden to America, 94–113. - Norman, Hans (1976). The Causes of Emigration in Runblom and Norman, From Sweden to America, 149–164. - Runblom, Harald, and Hans Norman (eds.) (1976). From Sweden to America: A History of the Migration. Minneapolis: University of Minnesota Press. - Scott, Franklin D. (1965). Sweden's Constructive Opposition to Emigration, Journal of Modern History, Vol. 37, No. 3. (Sep., 1965), 307–335. in JSTOR - The Swedish Emigrant Institute. Consulted 30 June 2007. 
- Swenson Center, a research institute at Augustana College, Illinois. Consulted 7 May 2007. Media related to Immigration to the United States from Sweden at Wikimedia Commons - The New Sweden Centre — museum, tours and reenactors - The Swedish-American Historical Society is a non-profit organization founded in 1948 to "Record the Achievements of the Swedish Pioneers." The society publishes the academic journal The Swedish-American Historical Quarterly - The Swedish Emigration to America - The Emigrant Routes to the Promised Land in America - The Journey To America - Sillgatan: The Emigrant Path through Göteborg - Story of 3 sisters emigrating to America from Sweden - From Sweden To America 1996 CD: 23 of the 31 tracks on the vinyl release. - From Sweden To America 1981 LP: available in digital format at iTunes and Amazon mp3. - First Swedish Settlers in Wisconsin Wisconsin Historical Markers
https://en.wikipedia.org/wiki/Swedish_emigration_to_the_United_States
4.0625
Tornadoes – also known as cyclones or twisters – are rotating columns of air that run between the ground and the clouds above. Weak, short-lived tornadoes can occur when there's a strong updraft within a thunderstorm, though the most powerful and devastating twisters found in a few areas of the world require very specific conditions: a "supercell" thunderstorm with a rotating area called a mesocyclone, and winds that shear, increasing and shifting direction with height. Although the number of reported tornadoes has increased over the past few decades, scientists believe this is simply because more are being documented (partly thanks to the rise of "storm chasing" as a hobby), rather than because climate change or any other factor has made them more frequent. This fits with the fact that US reports of violent tornadoes – the kind that are hard to miss, even without storm chasing – haven't changed significantly in the entire century-long record, holding firm at around 10–20 per year. As for the future, there's no compelling reason to expect tornadoes to become much more frequent or intense due to global warming – though climate change could have some impact on when and where they occur. For example, it's possible that "tornado season" (generally early spring in the US South and late spring to summer in the Midwest) may shift a bit earlier, and the secondary autumn season could extend later. But it's also possible, according to recent research, that warming will reduce the frequency with which the required conditions for powerful tornadoes will co-exist. While the atmosphere is generally getting warmer and moister, which can boost the instability that fuels storms, it's also possible that the wind shear that organises tornadic storms will decrease. This could tip the balance away from tornadoes and towards other thunderstorm extremes, such as heavy rain.
http://www.theguardian.com/environment/2011/jun/01/tornadoes-climate-change?view=mobile
4.03125
Dead zone (ecology) Dead zones are hypoxic (low-oxygen) areas in the world's oceans and large lakes, caused by "excessive nutrient pollution from human activities coupled with other factors that deplete the oxygen required to support most marine life in bottom and near-bottom water" (NOAA). In the 1970s oceanographers began noting increased instances of dead zones. These occur near inhabited coastlines, where aquatic life is most concentrated. (The vast middle portions of the oceans, which naturally have little life, are not considered "dead zones".) In March 2004, when the recently established UN Environment Programme published its first Global Environment Outlook Year Book (GEO Year Book 2003), it reported 146 dead zones in the world's oceans where marine life could not be supported due to depleted oxygen levels. Some of these were as small as a square kilometre (0.4 mi²), but the largest dead zone covered 70,000 square kilometres (27,000 mi²). A 2008 study counted 405 dead zones worldwide. Aquatic and marine dead zones can be caused by an increase in chemical nutrients (particularly nitrogen and phosphorus) in the water, known as eutrophication. These chemicals are the fundamental building blocks of single-celled, plant-like organisms that live in the water column, and whose growth is limited in part by the availability of these materials. Eutrophication can lead to rapid increases in the density of certain types of these phytoplankton, a phenomenon known as an algal bloom. "The fish-killing blooms that devastated the Great Lakes in the 1960s and 1970s haven't gone away; they've moved west into an arid world in which people, industry, and agriculture are increasingly taxing the quality of what little freshwater there is to be had here. ... This isn't just a prairie problem. Global expansion of dead zones caused by algal blooms is rising rapidly" (Schindler and Vallentyne 2008). The major groups of algae are cyanobacteria, green algae, dinoflagellates, coccolithophores and diatoms. An increased input of nitrogen and phosphorus generally causes cyanobacteria to bloom, and this causes dead zones. Cyanobacteria are not good food for zooplankton and fish and hence accumulate in the water, die, and then decompose. Other algae are consumed and hence do not accumulate to the same extent as cyanobacteria. Dead zones can be caused by natural and by anthropogenic factors. Use of chemical fertilizers is considered the major human-related cause of dead zones around the world. Natural causes include coastal upwelling and changes in wind and water circulation patterns. Runoff from sewage, urban land use, and fertilizers can also contribute to eutrophication. Notable dead zones in the United States include the northern Gulf of Mexico region surrounding the outfall of the Mississippi River, the coastal regions of the Pacific Northwest, and the Elizabeth River in Virginia Beach, all of which have been shown to be recurring events over the last several years. Additionally, natural oceanographic phenomena can cause deoxygenation of parts of the water column. For example, enclosed bodies of water, such as fjords or the Black Sea, have shallow sills at their entrances, causing water to be stagnant there for a long time. 
The eastern tropical Pacific Ocean and northern Indian Ocean have lowered oxygen concentrations, which are thought to occur in regions where there is minimal circulation to replace the oxygen that is consumed (e.g. Pickard & Emery 1982, p. 47). These areas are also known as oxygen minimum zones (OMZ). In many cases, OMZs are permanent or semipermanent areas. Remains of organisms found within sediment layers near the mouth of the Mississippi River indicate four hypoxic events before the advent of artificial fertilizer. In these sediment layers, anoxia-tolerant species are the most prevalent remains found. The periods indicated by the sediment record correspond to historic records of high river flow recorded by instruments at Vicksburg, Mississippi. Changes in ocean circulation triggered by ongoing climate change could also add to or magnify other causes of oxygen reductions in the ocean. In a Southeastern Louisiana University study of the Gulf killifish in three bays along the Gulf Coast, fish living in bays where the oxygen levels in the water dropped to 1 to 2 parts per million (ppm) for three or more hours per day were found to have smaller reproductive organs. The male gonads were 34% to 50% as large as those of males of similar size in bays where the oxygen levels were normal (6 to 8 ppm). Females were found to have ovaries that were half as large as those in normal oxygen levels. The number of eggs in females living in hypoxic waters was only one-seventh the number of eggs in fish living in normal oxygen levels. (Landry, et al., 2004) Fish raised in laboratory-created hypoxic conditions showed extremely low sex hormone concentrations and increased elevation of activity in two genes triggered by the hypoxia-inducible factor (HIF) protein. Under hypoxic conditions, HIF pairs with another protein, ARNT. The two then bind to DNA in cells, activating genes in those cells. Under normal oxygen conditions, ARNT combines with estrogen to activate genes. Hypoxic cells in vitro did not react to estrogen placed in the tube. HIF appears to render ARNT unavailable to interact with estrogen, providing a mechanism by which hypoxic conditions alter reproduction in fish. (Johanning, et al., 2004) It might be expected that fish would flee this potential suffocation, but they are often quickly rendered unconscious and doomed. Slow moving bottom-dwelling creatures like clams, lobsters and oysters are unable to escape. All colonial animals are extinguished. The normal re-mineralization and recycling that occurs among benthic life-forms is stifled. Mora et al. (2013) showed that future changes in oxygen could affect most marine ecosystems and have socio-economic ramifications due to human dependency on marine goods and services. In the 1970s, marine dead zones were first noted in European settled areas where intensive economic use stimulated scientific scrutiny: in the U.S. East Coast's Chesapeake Bay, in Scandinavia's strait called the Kattegat, which is the mouth of the Baltic Sea, and in other important Baltic Sea fishing grounds, in the Black Sea (which may, however, have been anoxic in its deepest levels for millennia), and in the northern Adriatic. A dead zone exists in the central part of Lake Erie from east of Point Pelee to Long Point and stretches to shores in Canada and the United States. 
The zone has been observed since the 1950s and 1960s, and since the 1970s Canada and the US have made efforts to reduce runoff pollution into the lake as a means of reversing the dead zone's growth. Overall, the lake's oxygen level is poor, with only a small area to the east of Long Point having better levels. The biggest impact of the poor oxygen levels is on lacustrine life and the fisheries industry. Lower St. Lawrence Estuary A dead zone exists in the Lower St. Lawrence River area from east of the Saguenay River to east of Baie Comeau, greatest at depths over 275 metres (902 ft), and has been observed since the 1930s. The main concern for Canadian scientists is the impact on fish found in the area. Off the coast of Cape Perpetua, Oregon, there is also a dead zone with a 2006 reported size of 300 square miles (780 km²). This dead zone only exists during the summer, perhaps due to wind patterns. The Oregon coast has also seen hypoxic water move from the continental shelf into coastal embayments, a phenomenon tied to coastal conditions such as the low oxygen concentration of upwelled water and the strength of upwelling winds. Gulf of Mexico 'Dead Zone' The area of temporary hypoxic bottom water that occurs most summers off the coast of Louisiana in the Gulf of Mexico is the largest recurring hypoxic zone in the United States. The Mississippi River, whose drainage basin covers 41% of the continental United States, carries high-nutrient runoff such as nitrogen and phosphorus into the Gulf of Mexico. According to a 2009 fact sheet created by NOAA, "seventy percent of nutrient loads that cause hypoxia are a result of this vast drainage basin," which includes the heart of U.S. agribusiness, the Midwest. The discharge of treated sewage from urban areas (pop. c. 12 million in 2009) combined with agricultural runoff delivers c. 1.7 million tons of phosphorus and nitrogen into the Gulf of Mexico every year. Size of Gulf of Mexico 'Dead Zone' The area of hypoxic bottom water that occurs for several weeks each summer in the Gulf of Mexico has been mapped most years from 1985 through 2014. The size varies annually, from a record high in 2002, when it encompassed more than 21,756 sq kilometers (8,400 square miles), to a record low in 1988 of 39 sq kilometers (15 square miles). Nancy Rabalais of the Louisiana Universities Marine Consortium in Cocodrie predicted that the dead zone, or hypoxic zone, in 2012 would cover an area of 17,353 sq kilometers (6,700 square miles), larger than Connecticut; however, when the measurements were completed, the area of hypoxic bottom water in 2012 totaled only 7,480 sq kilometers. The models using the nitrogen flux from the Mississippi River to predict the "dead zone" areas have been criticized for being systematically high from 2006 to 2014, having predicted record areas in 2007, 2008, 2009, 2011, and 2013 that were never realized. In late summer 1988 the dead zone disappeared as the great drought caused the flow of the Mississippi to fall to its lowest level since 1933. During times of heavy flooding in the Mississippi River Basin, as in 1993, the dead zone dramatically increased in size, ending up approximately 5,000 km² (1,930 sq mi) larger than the previous year. Economic Impact of Gulf of Mexico 'Dead Zone' Some assert that the dead zone threatens lucrative commercial and recreational fisheries in the Gulf of Mexico. "In 2009, the dockside value of commercial fisheries in the Gulf was $629 million. 
Nearly three million recreational fishers further contributed about $10 billion to the Gulf economy, taking 22 million fishing trips." Scientists are not in universal agreement that nutrient loading has a negative impact on fisheries. Grimes makes a case that nutrient loading enhances the fisheries in the Gulf of Mexico. Courtney et al. assert that nutrient loading has made significant contributions to the increases in red snapper production in the northern and western Gulf of Mexico. History of Gulf of Mexico 'Dead Zone' Shrimp trawlers first reported a "dead zone" in the Gulf of Mexico in 1950, but it was not until 1970, when the size of the hypoxic zone had increased, that scientists began to investigate. The conversion of forests and wetlands for agricultural and urban developments accelerated after 1950. "Missouri River Basin has had hundreds of thousands of acres of forests and wetlands (66,000,000 acres) replaced with agriculture activity [. . .] In the Lower Mississippi one third of the valley's forests were converted to agriculture between 1950 and 1976." Energy Independence and Security Act of 2007 The Energy Independence and Security Act of 2007 calls for the production of 36 billion US gallons (140,000,000 m3) of renewable fuels by 2022, including 15 billion US gallons (57,000,000 m3) of corn-based ethanol, a tripling of current production that would require a similar increase in corn production. Unfortunately, the plan poses a new problem: the increase in demand for corn production results in a proportional increase in nitrogen runoff. Although nitrogen, which makes up 78% of the Earth's atmosphere, is an inert gas, it has more reactive forms, one of which is used to make fertilizer. According to Fred Below, a professor of crop physiology at the University of Illinois at Urbana-Champaign, corn requires more nitrogen-based fertilizer because it produces a higher grain yield per unit area than other crops and, unlike other crops, is completely dependent on the nitrogen available in soil. The results, reported 18 March 2008 in Proceedings of the National Academy of Sciences, showed that scaling up corn production to meet the 15-billion-US-gallon (57,000,000 m3) goal would increase nitrogen loading in the Dead Zone by 10–18%. This would boost nitrogen levels to twice the level recommended by the Mississippi Basin/Gulf of Mexico Water Nutrient Task Force (Mississippi River Watershed Conservation Programs), a coalition of federal, state, and tribal agencies that has monitored the dead zone since 1997. The task force says a 30% reduction of nitrogen runoff is needed if the dead zone is to shrink. Reversal Dead zones are reversible, though the extinction of organisms lost during their occurrence is not. The Black Sea dead zone, previously the largest in the world, largely disappeared between 1991 and 2001 after fertilizers became too costly to use following the collapse of the Soviet Union and the demise of centrally planned economies in Eastern and Central Europe. Fishing has again become a major economic activity in the region. While the Black Sea "cleanup" was largely unintentional and involved a drop in hard-to-control fertilizer usage, the U.N. has advocated other cleanups by reducing large industrial emissions. From 1985 to 2000, nitrogen inputs to the North Sea dead zone were reduced by 37% when policy efforts by countries on the Rhine River reduced sewage and industrial emissions of nitrogen into the water. Other cleanups have taken place along the Hudson River and San Francisco Bay. 
- Aquatic Dead Zones NASA Earth Observatory. Revised 17 July 2010. Retrieved 17 January 2010. - "NOAA: Gulf of Mexico 'dead zone' predictions feature uncertainty". National Oceanic and Atmospheric Administration (NOAA). June 21, 2012. Retrieved June 23, 2012. - David Perlman, Chronicle Science Editor (2008-08-15). "Scientists alarmed by ocean dead-zone growth". Sfgate.com. Retrieved 2010-08-03. - Diaz, R. J.; Rosenberg, R. (2008-08-15). "Spreading Dead Zones and Consequences for Marine Ecosystems". Science 321 (5891): 926–9. doi:10.1126/science.1156401. PMID 18703733. - "Blooming horrible: Nutrient pollution is a growing problem all along the Mississippi". The Economist. Retrieved June 23, 2012. - David W. Schindler; John R. Vallentyne (2008). The Algal Bowl: Overfertilization of the World's Freshwaters and Estuaries. Edmonton, Alberta: University of Alberta Press. Retrieved June 23, 2012. - "Whole Lake Experiment, Ford Lake, Prof Lehman" - Corn boom could expand 'dead zone' in Gulf - Mora, C.; et al. (2013). "Biotic and Human Vulnerability to Projected Changes in Ocean Biogeochemistry over the 21st Century". PLOS Biology 11: e1001682. doi:10.1371/journal.pbio.1001682. PMC 3797030. PMID 24143135. - Diaz, R. J.; Rutger Rosenberg (August 15, 2008). "Supporting Online Material for Spreading Dead Zones and Consequences for Marine Ecosystems" (PDF). Science 321 (926): 926–9. doi:10.1126/science.1156401. PMID 18703733. Retrieved 2010-08-13. - "Dead Zones". - "Will "Dead Zones" Spread in the St. Lawrence River?". - Griffis, R. and Howard, J. [Eds.]. 2013. Oceans and Marine Resources in a Changing Climate: A Technical Input to the 2013 National Climate Assessment. Washingtonn, DC: Island Press - "NOAA: Gulf of Mexico 'Dead Zone' Predictions Feature Uncertainty". U.S. Geological Survey (USGS). June 21, 2012. Retrieved June 23, 2012. - "What is hypoxia?". Louisiana Universities Marine Consortium (LUMCON). Retrieved May 18, 2013. - "Dead Zone: Hypoxia in the Gulf of Mexico" (PDF). NOAA. 2009. Retrieved June 23, 2012. - Lochhead, Carolyn (2010-07-06). "Dead zone in gulf linked to ethanol production". San Francisco Chronicle. Retrieved 2010-07-28. - Courtney et al. Predictions Wrong Again on Dead Zone Area - Gulf of Mexico Gaining Resistance to Nutrient Loading. http://arxiv.org/ftp/arxiv/papers/1307/1307.8064.pdf - Lisa M. Fairchild (2005). The influence of stakeholder groups on the decision making process regarding the dead zone associated with the Mississippi river discharge (Master of Science). University of South Florida (USF). p. 14. - Grimes, C. B. Fishery production and the Mississippi River discharge. Fisheries (2001) 26(8), 17-26. - Courtney et al. Nutrient Loading Increases Red Snapper Production in the Gulf of Mexico. http://hy-ls.org/index.php/hyls/article/view/100/87 - Jennie Biewald; Annie Rossetti; Joseph Stevens; Wei Cheih Wong. The Gulf of Mexico's Hypoxic Zone (Report). - Cox, Tony (2007-07-23). "Exclusive". Bloomberg.com. Retrieved 2010-08-03. - Potera, Carol (June 2008). "Corn Ethanol Goal Revives Dead Zone Concerns". Environmental Health Prospectives. - "Dead Water". Economist. May 2008. - Mee, Laurence (November 2006). "Reviving Dead Zones". Scientific American. - 'Dead Zones' Multiplying In World's Oceans by John Nielsen. 15 Aug 2008, Morning Edition, NPR. - "Wisconsin Department of Natural Resources" (PDF). Retrieved 2010-08-03. - Diaz, R.J.; Rosenberg, R. (2008). "Spreading dead zones and consequences for marine ecosystems". Science 321 (5891): 926–929. 
doi:10.1126/science.1156401. PMID 18703733. - Osterman, L.E., et al. 2004. Reconstructing an 180-yr record of natural and anthropogenic induced hypoxia from the sediments of the Louisiana Continental Shelf. Geological Society of America meeting. Nov. 7-10. Denver. Abstract. - Pickard, G.L. and Emery, W.J. 1982. Descriptive Physical Oceanography: An Introduction. Pergamon Press, Oxford, 249 pp. - Landry, C.A., S. Manning, and A.O. Cheek. 2004. Hypoxia suppresses reproduction in Gulf killifish, Fundulus grandis. e.hormone 2004 conference. Oct. 27-30. New Orleans. - Johanning, K., et al. 2004. Assessment of molecular interaction between low oxygen and estrogen in fish cell culture. Fourth SETAC World Congress, 25th Annual Meeting in North America. Nov. 14-18. Portland, Ore. Abstract. - Taylor, F.J.; Taylor, N.J.; Walsby, J.R. (1985). "A bloom of planktonic diatom Ceratulina pelagica off the coastal northeastern New Zealand in 1983, and its contribution to an associated mortality of fish and benthic fauna". Internationale Revue ges. Hydrobiol. 70: 773–795. doi:10.1002/iroh.19850700602. - Morrisey, D.J. (2000). "Predicting impacts and recovery of marine farm sites in Stewart Island New Zealand, from the Findlay-Watling model". Aquaculture 185: 257–271. doi:10.1016/s0044-8486(99)00360-9. - Potera, C (2008). "Corn Ethanol Goal Revives Dead Zone Concerns". Environmental Health Perspectives 116 (6): A242–A243. doi:10.1289/ehp.116-a242. - David Stauth (Oregon State University), "Hypoxic "dead zone" growing off the Oregon Coast" July 31, 2006 - Suzie Greenhalgh and Amanda Sauer (WRI), "Awakening the 'Dead Zone': An investment for agriculture, water quality, and climate change" 2003 - NutrientNet, an online nutrient trading tool developed by the World Resources Institute, designed to address issues of eutrophication. See also the PA NutrientNet website designed for Pennsylvania's nutrient trading program. - Reyes Tirado (July 2008) Dead Zones: How Agricultural Fertilizers are Killing our Rivers, Lakes and Oceans. Greenpeace publications. See also: "Dead Zones: How Agricultural Fertilizers are Killing our Rivers, Lakes and Oceans". Greenpeace Canada. 2008-07-07. Retrieved 2010-08-03. - MSNBC report on dead zones, March 29, 2004 - Joel Achenbach, "A 'Dead Zone' in The Gulf of Mexico: Scientists Say Area That Cannot Support Some Marine Life Is Near Record Size", Washington Post, July 31, 2008 - Joel Achenbach, "'Dead Zones' Appear In Waters Worldwide: New Study Estimates More Than 400", Washington Post, August 15, 2008 - Louisiana Universities Marine Consortium - UN Geo Yearbook 2003 report on nitrogen and dead zones - NASA on dead zones (Satellite pictures) - Gulf of Mexico Dead Zone - multimedia - Gulf of Mexico Hypoxia Watch, NOAA Joel Achenbach
https://en.wikipedia.org/wiki/Dead_zone_(ecology)
4.09375
Shyness (also called diffidence) is the feeling of apprehension, lack of comfort, or awkwardness especially when a person is in proximity to other people. This commonly occurs in new situations or with unfamiliar people. Shyness can be a characteristic of people who have low self-esteem. Stronger forms of shyness are usually referred to as social anxiety or social phobia. The primary defining characteristic of shyness is a largely ego-driven fear of what other people will think of a person's behavior. This results in a person becoming scared of doing or saying what he or she wants to out of fear of negative reactions, being laughed at or humiliated, criticism, and/or rejection. A shy person may simply opt to avoid social situations instead. One important aspect of shyness is social skills development. Schools and parents may implicitly assume children are fully capable of effective social interaction. Social skills training is not given any priority (unlike reading and writing), and as a result, shy students are not given an opportunity to develop their ability to participate in class and interact with peers. Teachers can model social skills and ask questions in a less direct and intimidating manner in order to gently encourage shy students to speak up in class and make friends with other children. Origins The initial cause of shyness varies. Scientists believe that they have located genetic data supporting the hypothesis that shyness is at least partially genetic. However, there is also evidence suggesting that the environment in which a person is raised can be responsible for his or her shyness. This includes child abuse, particularly emotional abuse such as ridicule. Shyness can originate after a person has experienced a physical anxiety reaction; at other times, shyness seems to develop first and then later causes physical symptoms of anxiety. Shyness differs from social anxiety, which is a broader, often depression-related psychological condition including the experience of fear, apprehension or worry about being evaluated by others in social situations to the extent of inducing panic. Shyness may come from genetic traits, the environment in which a person is raised, and personal experiences. Shyness may merely be a personality trait or can occur at certain stages of development in children. Genetics and heredity Shyness is often seen as a hindrance to people and their development. The cause of shyness is often disputed, but research finds that fear is positively related to shyness: fearful children are much more likely to become shy than less fearful children. Shyness can also be seen on a biological level as the result of an excess of cortisol. When cortisol is present in greater quantities, it is known to suppress an individual's immune system, making a person more susceptible to illness and disease. The genetics of shyness is a relatively small area of research that has received comparatively little attention, although papers on the biological bases of shyness date back to 1988.
Some research has indicated that shyness and aggression are related—through long and short forms of the gene DRD4, though considerably more research on this is needed. Further, it has been suggested that shyness and social phobia (the distinction between the two is becoming ever more blurred) are related to obsessive-compulsive disorder. As with other studies of behavioral genetics, the study of shyness is complicated by the number of genes involved in, and the confusion in defining, the phenotype. Naming the phenotype, and translating terms between genetics and psychology, also causes problems. Several genetic links to shyness are current areas of research. One is the serotonin transporter promoter region polymorphism (5-HTTLPR), the long form of which has been shown to be modestly correlated with shyness in grade school children. Previous studies had shown a connection between this form of the gene and both obsessive-compulsive disorder and autism. Mouse models have also been used to derive genes suitable for further study in humans; one such gene, the glutamic acid decarboxylase gene (which encodes an enzyme that functions in GABA synthesis), has so far been shown to have some association with behavioral inhibition. Another gene, the dopamine D4 receptor gene (DRD4) exon III polymorphism, had been the subject of studies in both shyness and aggression, and is currently the subject of studies on the "novelty seeking" trait. A 1996 study of anxiety-related traits (shyness being one of these) remarked that, "Although twin studies have indicated that individual variation in measures of anxiety-related personality traits is 40-60% heritable, none of the relevant genes has yet been identified," and that "10 to 15 genes might be predicted to be involved" in the anxiety trait. Progress has been made since then, especially in identifying other potential genes involved in personality traits, but little progress has been made towards confirming these relationships. The long version of the 5-HTT gene-linked polymorphic region (5-HTTLPR) is now postulated to be correlated with shyness, but in the 1996 study, the short version was shown to be related to anxiety-based traits. As a symptom of mercury poisoning Excessive shyness, embarrassment, self-consciousness and timidity, social phobia and lack of self-confidence are also components of erethism, a symptom complex that appears in cases of mercury poisoning. Mercury poisoning was common among hat makers in England in the 18th and 19th centuries, who used mercury to stabilize wool into felt fabric. The prevalence of shyness in some children can be linked to day length during pregnancy, particularly during the midpoint of prenatal development. An analysis of longitudinal data from children living at specific latitudes in the United States and New Zealand revealed a significant relationship between hours of day length during the midpoint of pregnancy and the prevalence of shyness in children. "The odds of being classified as shy were 1.52 times greater for children exposed to shorter compared to longer daylengths during gestation." In their analysis, scientists assigned conception dates to the children relative to their known birth dates, which allowed them to obtain random samples from children who had a mid-gestation point during the longest hours of the year and the shortest hours of the year (June and December, depending on whether the cohorts were in the United States or New Zealand).
The longitudinal survey data included measurements of shyness on a five-point scale based on interviews with the families being surveyed, and children in the top 25th percentile of shyness scores were identified. The data revealed a significant covariance between the children who presented as being consistently shy over a two-year period and shorter day length during their mid-prenatal development period. "Taken together, these estimates indicate that about one out of five cases of extreme shyness in children can be associated with gestation during months of limited daylength." Low birth weights In recent years correlations between birth weight and shyness have been studied. Findings suggest that those born at low birth weights are more likely to be shy, risk-averse and cautious compared to those born at normal birth weights. These results do not, however, imply a cause-and-effect relationship. Shyness is most likely to occur during unfamiliar situations, though in severe cases it may hinder an individual in his or her most familiar situations and relationships as well. Shy people avoid the objects of their apprehension in order to keep from feeling uncomfortable and inept; thus, the situations remain unfamiliar and the shyness perpetuates itself. Shyness may fade with time; e.g., a child who is shy towards strangers may eventually lose this trait when older and become more socially adept. This often occurs by adolescence or young adulthood (generally around the age of 13). In some cases, though, it may become an integrated, lifelong character trait. Longitudinal data suggest that the three different personality types evident in infancy (easy, slow-to-warm-up, and difficult) tend to change as children mature. Extreme traits become less pronounced, and personalities evolve in predictable patterns over time. What has been proven to remain constant is the tendency to internalize or externalize problems. This relates to individuals with shy personalities because they tend to internalize their problems, or dwell on their problems internally instead of expressing their concerns, which leads to disorders like depression and anxiety. Humans experience shyness to different degrees and in different areas. Shyness can also be seen as an academic determinant: studies have found a negative relationship between shyness and classroom performance, with classroom performance decreasing as an individual's shyness increases. Shyness may involve discomfort and difficulty in knowing what to say in social situations, or may include crippling physical manifestations of uneasiness. Shyness usually involves a combination of both symptoms, and may be quite devastating for the sufferer, in many cases leading them to feel that they are boring or to exhibit bizarre behavior in an attempt to create interest, alienating them further. Behavioral traits in social situations such as smiling, easily producing suitable conversational topics, assuming a relaxed posture and making good eye contact may not be second nature for a shy person. Such people may manage such traits only with great difficulty, or may find them impossible to display at all. In cultures that value sociability, those who are shy are perceived more negatively because of the way they act towards others. Shy individuals are often distant during conversations, which can lead others to form poor impressions of them.
People who are not shy may be up-front, aggressive, or critical towards shy people in an attempt "to get them out of their shell." This can actually make a shy person feel worse, as it draws attention to them, making them more self-conscious and uncomfortable. Shyness vs. introversion The term shyness may be used as a lay blanket term for a family of related and partially overlapping afflictions, including timidity (apprehension in meeting new people), bashfulness and diffidence (reluctance in asserting oneself), apprehension and anticipation (general fear of potential interaction), or intimidation (relating to the object of fear rather than one's low confidence). Apparent shyness, as perceived by others, may simply be the manifestation of reservation or introversion, character traits which cause an individual to voluntarily avoid excessive social contact or be terse in communication, but which are not motivated or accompanied by discomfort, apprehension, or lack of confidence. Rather, according to professor of psychology Bernardo J. Carducci, introverts choose to avoid social situations because they derive no reward from them or may find surplus sensory input overwhelming, whereas shy people may fear such situations. Research using the statistical techniques of factor analysis and correlation has found that shyness overlaps mildly with both introversion and neuroticism (i.e., negative emotionality). Low societal acceptance of shyness or introversion may reinforce a shy or introverted individual's low self-confidence. Both shyness and introversion can outwardly manifest as socially withdrawn behaviors, such as tendencies to avoid social situations, especially when they are unfamiliar. A variety of research suggests that shyness and introversion possess clearly distinct motivational forces and lead to uniquely different personal and peer reactions, and therefore cannot be described as theoretically the same; Susan Cain's Quiet (2012) further discerns introversion as involving being differently social (preferring one-on-one or small group interactions) rather than being anti-social altogether. Research suggests that no unique physiological response, such as an increased heartbeat, accompanies socially withdrawn behavior in familiar compared with unfamiliar social situations. But unsociability leads to decreased exposure to unfamiliar social situations, and shyness causes a lack of response in such situations, suggesting that shyness and unsociability affect two different aspects of sociability and are distinct personality traits. In addition, different cultures perceive unsociability and shyness in different ways, leading to either positive or negative individual feelings of self-esteem. Collectivist cultures view shyness as a more positive trait related to compliance with group ideals and self-control, while perceiving chosen isolation (introverted behavior) negatively as a threat to group harmony; and because collectivist society accepts shyness and rejects unsociability, shy individuals develop higher self-esteem than introverted individuals. On the other hand, individualistic cultures perceive shyness as a weakness and a character flaw, while unsociable personality traits (preference to spend time alone) are accepted because they uphold the value of autonomy; accordingly, shy individuals tend to develop low self-esteem in Western cultures while unsociable individuals develop high self-esteem.
An extreme case of shyness is identified as a psychiatric illness, which made its debut as social phobia in DSM-III in 1980, but was then described as rare. By 1994, however, when DSM-IV was published, it was given a second, alternative name in parentheses (social anxiety disorder) and was now said to be relatively common, affecting between 3 and 13% of the population at some point during their lifetime. Studies examining shy adolescents and university students found that between 12 and 18% of shy individuals meet criteria for social anxiety disorder. Shyness affects people mildly in unfamiliar social situations where one feels anxiety about interacting with new people. Social anxiety disorder, on the other hand, is a strong irrational fear of interacting with people, or of being in situations which may involve public scrutiny, because one feels overly concerned about being criticized if one embarrasses oneself. Physical symptoms of social phobia can include shortness of breath, trembling, increased heart rate, and sweating; in some cases, these symptoms are intense enough and numerous enough to constitute a panic attack. Shyness, on the other hand, may incorporate many of these symptoms, but at a lower intensity and frequency, and it does not interfere tremendously with normal living. Social inhibition vs. behavioral inhibition Those considered shy are also said to be socially inhibited. Social inhibition is the conscious or unconscious constraint by a person of behavior of a social nature. In other words, social inhibition is holding back for social reasons. There are different levels of social inhibition, from mild to severe. Being socially inhibited can be helpful, as when it prevents one from harming another, and harmful, as when it causes one to refrain from participating in class discussions. Behavioral inhibition is a temperament or personality style that predisposes a person to become fearful, distressed and withdrawn in novel situations. This personality style is associated with the development of anxiety disorders in adulthood, particularly social anxiety disorder. Misconceptions and negative aspects Many misconceptions and stereotypes about shy individuals exist in Western culture, and negative peer reactions to "shy" behavior abound. This takes place because individualistic cultures place less value on quietness and meekness in social situations, and more often reward outgoing behaviors. Some misconceptions include viewing introversion and social phobia as synonymous with shyness, and believing that shy people are less intelligent. No correlation (positive or negative) exists between intelligence and shyness. Research indicates that shy children have a harder time expressing their knowledge in social situations (which most modern curricula utilize), and because they do not engage actively in discussions, teachers view them as less intelligent. In line with social learning theory, an unwillingness to engage with classmates and teachers makes it more difficult for shy students to learn. Test scores, however, indicate that shyness is unrelated to actual academic knowledge; it affects only academic engagement. Depending on the level of a teacher's own shyness, more indirect (vs. socially oriented) strategies are used with shy individuals to assess knowledge in the classroom, and accommodations are made. Observed peer evaluations of shy people during an initial meeting and social interactions thereafter found that peers evaluate shy individuals as less intelligent during the first encounter.
During subsequent interactions, however, peers perceived shy individuals' intelligence more positively. Thomas Benton claims that because shy people "have a tendency toward self-criticism, they are often high achievers, and not just in solitary activities like research and writing. Perhaps even more than the drive toward independent achievement, shy people long to make connections to others often through altruistic behavior." Susan Cain describes the benefits that shy people bring to society that US cultural norms devalue. Without characteristics that shy people bring to social interactions, such as sensitivity to the emotions of others, contemplation of ideas, and valuable listening skills, there would be no balance to society. In earlier generations, such as the 1950s, society perceived shyness as a more socially attractive trait, especially in women, indicating that views on shyness vary by culture. Sociologist Susie Scott challenged the interpretation and treatment of shyness as being pathological. "By treating shyness as an individual pathology, ... we forget that this is also a socially oriented state of mind that is socially produced and managed." She explores the idea that "shyness is a form of deviance: a problem for society as much as for the individual", and concludes that, to some extent, "we are all impostors, faking our way through social life". One of her interview subjects (self-defined as shy) puts this point of view even more strongly: "Sometimes I want to take my cue from the militant disabled lobbyists and say, 'hey, it's not MY problem, it's society's'. I want to be proud to be shy: on the whole, shys are probably more sensitive, and nicer people, than 'normals'. I shouldn't have to change: society should adapt to meet my needs." Different cultural views In cultures that value outspokenness and overt confidence, shyness can be perceived as weakness. To an unsympathetic observer, a shy individual may be mistaken for cold, distant, arrogant or aloof, which can be frustrating for the shy individual. However, in other cultures, shy people may be perceived as thoughtful and intelligent, as good listeners, and as more likely to think before they speak. In cultures that value autonomy, shyness is often analyzed in the context of being a social dysfunction, and is frequently contemplated as a personality disorder or mental health issue. Some researchers are beginning to study comparisons between individualistic and collectivistic cultures, to examine the role that shyness might play in matters of social etiquette and achieving group-oriented goals. "Shyness is one of the emotions that may serve as behavioral regulators of social relationships in collectivistic cultures. For example, social shyness is evaluated more positively in a collectivistic society, but negatively evaluated in an individualistic society." In a cross-cultural study of Chinese and Canadian school children, researchers sought to measure several variables related to social reputation and peer relationships, including "shyness-sensitivity." Using a peer nomination questionnaire, students evaluated their fellow students using positive and negative playmate nominations. "Shyness-sensitivity was significantly and negatively correlated with measures of peer acceptance in the Canadian sample. Inconsistent with Western results, it was found that items describing shyness-sensitivity were separated from items assessing isolation in the factor structure for the Chinese sample.
Shyness-sensitivity was positively associated with sociability-leadership and with peer acceptance in the Chinese sample." Perceptions of Western cultures In some Western cultures shyness-inhibition plays an important role in psychological and social adjustment. It has been found that shyness-inhibition is associated with a variety of maladaptive behaviors. Being shy or inhibited in Western cultures can result in rejection by peers, isolation, and being viewed as socially incompetent by adults. However, research suggests that if social withdrawal is seen as a personal choice rather than the result of shyness, there are fewer negative connotations. British writer Arthur C. Benson felt shyness is not mere self-consciousness, but a primitive suspicion of strangers, the primeval belief that their motives are predatory, making shyness a sinister quality which needs to be uprooted. He believed the remedy is for the shy to frequent society to gain courage from familiarity. He also claimed that too many shy adults take refuge in a critical attitude, engaging in brutal onslaughts on inoffensive persons. He felt that a better way is for the shy to be nice: to wonder what others need and like, to take an interest in what others do or talk about, to ask friendly questions, and to show sympathy. For Charles Darwin shyness was an 'odd state of mind' appearing to offer no benefit to our species, and since the 1970s the modern tendency in psychology has been to see shyness as pathology. However, evolutionary survival advantages of careful temperaments over adventurous temperaments in dangerous environments have also been recognized. Perceptions of Eastern cultures In Eastern cultures shyness-inhibition in school-aged children is seen as positive, and those who exhibit these traits are viewed well by peers and are accepted. They tend to be seen as competent by their teachers, to perform well in school, and to show well-being. Shy individuals are also more likely to attain leadership status in school. Being shy or inhibited does not correlate with loneliness or depression as it does in the West. In Eastern cultures being shy and inhibited is a sign of politeness, respectfulness, and thoughtfulness. Examples of cultural views on shyness and inhibition In Hispanic cultures shyness and inhibition with authority figures is common. For instance, Hispanic students may feel shy about being praised by teachers in front of others, because in these cultures students are rewarded in private with a touch, a smile, or a spoken word of praise. Hispanic students may therefore seem shy when they are not. It is considered rude to excel over peers and siblings; therefore it is common for Hispanic students to be reserved in classroom settings. Adults likewise show reluctance to share personal matters about themselves with authority figures such as nurses and doctors. Cultures in which the community is closed and based on agriculture (Kenya, India, etc.) experience lower social engagement than more open communities (United States, Okinawa, etc.) where interaction with peers is encouraged. Children in Mayan, Indian, Mexican, and Kenyan cultures are less expressive in social styles during interactions, and they spend little time engaged in socio-dramatic activities. They are also less assertive in social situations. Self-expression and assertiveness in social interactions are related to shyness and inhibition in that when one is shy or inhibited, one exhibits little or no expressive tendency.
Assertiveness follows the same pattern: being shy or inhibited lessens one's chances of being assertive because of a lack of confidence. In the Italian culture emotional expressiveness during interpersonal interaction is encouraged. From a young age children engage in debates or discussions that encourage and strengthen social assertiveness. Independence and social competence during childhood are also promoted. Being inhibited is looked down upon, and those who show this characteristic are viewed negatively by their parents and peers. As in other cultures where shyness and inhibition are viewed negatively, the peers of shy and inhibited Italian children reject the socially fearful, cautious and withdrawn. These withdrawn and socially fearful children express loneliness and believe themselves to be lacking the social skills needed in social interactions. Intervention and treatment Psychological methods and pharmaceutical drugs are commonly used to treat shyness in individuals who feel crippled because of low self-esteem and psychological symptoms, such as depression or loneliness. According to research, early intervention methods that expose shy children to social interactions involving teamwork, especially team sports, decrease their anxiety in social interactions and increase their all-around self-confidence later on. Implementing such tactics could prove to be an important step in combating the psychological effects of shyness that make living a normal life difficult for anxious individuals. - People skills - Social anxiety - Social phobia - Selective mutism - Avoidant personality disorder - Highly sensitive person - Medicalization of behaviors as illness - "Shyness and social phobia". Royal College of Psychiatrists. 2012. Retrieved 17 January 2014. - Coplan, R. J.; Arbeau, K. A. (2008). "The Stresses of a "Brave New World": Shyness and School Adjustment in Kindergarten". Journal of Research in Childhood Education 22 (4): 377. doi:10.1080/02568540809594634. - Eggum, Natalie; Eisenberg, Nancy; Spinrad, Tracy; Reiser, Mark; Gaertner, Bridget; Sallquist, Julie; Smith, Cynthia (2009). "Development of Shyness: Relations with Children's Fearfulness, Sex, and Maternal Behavior". Infancy 14 (3): 325–345. doi:10.1080/15250000902839971. PMC 2791465. PMID 20011459. - Chung, Joanna Y.Y.; Evans, Mary Ann (2000). "Shyness and symptoms of illness in young children". Canadian Journal of Behavioural Science/Revue canadienne des sciences du comportement 32: 49. doi:10.1037/h0087100. - Arbelle, Shoshana; Benjamin, Jonathan; Golin, Moshe; Kremer, Ilana; Belmaker, Robert H.; Ebstein, Richard P. (April 2003). "Relation of shyness in grade school children to the genotype for the long form of the serotonin transporter promoter region polymorphism". American Journal of Psychiatry 160 (4): 671–676. doi:10.1176/appi.ajp.160.4.671. PMID 12668354. - Brune, CW; Kim, SJ; Salt, J; Leventhal, BL; Lord, C; Cook Jr, EH (2006). "5-HTTLPR Genotype-Specific Phenotype in Children and Adolescents with Autism". The American Journal of Psychiatry 163 (12): 2148–56. doi:10.1176/appi.ajp.163.12.2148. PMID 17151167. - Smoller, Jordan W.; Rosenbaum, Jerold F.; Biederman, Joseph; Susswein, Lisa S.; Kennedy, John; Kagan, Jerome; Snidman, Nancy; Laird, Nan; Tsuang, Ming T.; Faraone, Stephen V.; Schwarz, Alysandra; Slaugenhaupt, Susan A. (2001). "Genetic association analysis of behavioral inhibition using candidate loci from mouse models". American Journal of Medical Genetics 105 (3): 226–235. doi:10.1002/ajmg.1328. PMID 11353440.
- Lesch et al. 1996. - WHO (1976) Environmental Health Criteria 1: Mercury, Geneva, World Health Organization, 131 pp. - WHO. Inorganic mercury. Environmental Health Criteria 118. World Health Organization, Geneva, 1991. - Gortmaker, S.L. et al. Daylength during pregnancy and shyness in children: results from northern and southern hemispheres. 1997. - U.S. News Staff (9 July 2008). "Do Underweight Newborns Make for Shy Adult". Retrieved 14 March 2013. - Janson, H.; Mathiesen, K.S. (2008). "Temperament profiles from infancy to middle childhood: Development and associations with behavior problems". Developmental Psychology 44 (5): 1314–1328. doi:10.1037/a0012713. - Coplan, R. J.; Rose-Krasnor, L.; Weeks, M.; Kingsbury, A.; Kingsbury, M.; Bullock, A. (2012). "Alone is a crowd: Social motivations, social withdrawal, and socioemotional functioning in later childhood". Developmental Psychology. doi:10.1037/a0028861. - Chisti, Saeed-ul-Hasan; Anwar, Saeed; Babar Khan, Shahinshah (2011). "Relationship between shyness and classroom performance at graduation level in Pakistan". Interdisciplinary Journal of Contemporary Research In Business 3 (4): 532–538. - Paulhus, D.L.; Morgan, K.L. (1997). "Perceptions of intelligence in leaderless groups: The dynamic effects of shyness and acquaintance". Journal of Personality and Social Psychology 72 (3): 581–591. doi:10.1037/0022-3514.72.3.581. PMID 9120785. - "Shy | Define Shy at Dictionary.com". Dictionary.reference.com. Retrieved 2012-08-13. - Whitten, Meredith (2001-08-21). "All About Shyness". Psych Central. Retrieved 2012-08-13. - Crozier, W. R. (1979). "Shyness as a dimension of personality". British Journal of Social and Clinical Psychology 18: 121. doi:10.1111/j.2044-8260.1979.tb00314.x. - Heiser, N. A.; Turner, S. M.; Beidel, D. C. (2003). "Shyness: Relationship to social phobia and other psychiatric disorders". Behaviour research and therapy 41 (2): 209–21. PMID 12547381. - Shiner, R.; Caspi, A. (2003). "Personality differences in childhood and adolescence: Measurement, development, and consequences". Journal of Child Psychology and Psychiatry 44: 2–32. doi:10.1111/1469-7610.00101. PMID 12553411. - Susan Cain's Quiet (2012) - Asendorpf, J.B.; Meier, G.H. (1993). "Personality effects on children's speech in everyday life: Sociability-mediated exposure and shyness-mediated reactivity to social situations". Journal of Personality and Social Psychology 64 (6): 1072–1083. doi:10.1037/0022-3514.64.6.1072. - Chen, X.; Wang, L.; Cao, R. (2011). "Shyness-sensitivity and unsociability in rural Chinese children: Relations with social, school, and psychological adjustment". Child Development 82 (5): 1531–1543. doi:10.1111/j.1467-8624.2011.01616.x. - Cornish, Audie (interviewer) (30 January 2012). "Quiet, Please: Unleashing 'The Power Of Introverts'". NPR. Archived from the original on 3 March 2012. - Lane, C. Shyness: How Normal Behavior Became a Sickness. 2007. - American Psychiatric Association. (2000). Anxiety disorders. In Diagnostic and statistical manual of mental disorders (4th ed., text rev., pp. 450–456). Washington, D.C.: American Psychiatric Association. - R.E. Stone. Is the American Psychiatric Association in Bed with Big Pharma? 2011. - Chavira, D. A.; Stein, M. B.; Malcarne, V. L. (2002). "Scrutinizing the relationship between shyness and social phobia". Journal of anxiety disorders 16 (6): 585–98. PMID 12405519. - Burstein, M; Ameli-Grillon, L; Merikangas, K. R. (2011). "Shyness versus social phobia in US youth". Pediatrics 128 (5): 917–25.
doi:10.1542/peds.2011-1434. PMC 3208958. PMID 22007009. - "Behavioral Inhibition as a childhood predictor of social anxiety, part 1". Andrew Kukes Foundation for Social Anxiety. Retrieved 26 March 2013. - Ordoñez-Ortega, A.; Espinosa-Fernandez, L.; Garcia-Lopez, LJ; Muela-Martinez, JA (2013). "Behavioral Inhibition and Relationship with Childhood Anxiety Disorders/Inhibición Conductual y su Relación con los Trastornos de Ansiedad Infantil". Terapia Psicologica 31: 355–362. doi:10.4067/s0718-48082013000300010. - Hughes, K.; Coplan, R.J. (2010). "Exploring processes linking shyness and academic achievement in childhood". School Psychology Quarterly 25 (4): 213–222. - Coplan, J.R.; Hughes, K.; Bosacki, S.; Rose-Krasnor, L. (2011). "Is silence golden? Elementary school teachers' strategies and beliefs regarding hypothetical shy/quiet and exuberant/talkative children". Journal of Educational Psychology 103 (4): 939–951. doi:10.1037/a0024551. - "All About Shyness". Psych Central. - Thomas H. Benton (24 May 2004). "Shyness and Academe". The Chronicle of Higher Education. Retrieved 20 October 2013. - Cain, Susan (25 June 2011). "Shyness: Evolutionary Tactic?". The New York Times. Archived from the original on 16 August 2013. - Scott 2007, p. 2. - Scott 2007, pp. 165, 174. - Scott 2007, p. 164. - Frijda, N.H., & Mesquita, B. The social roles and functions of emotions. 1994. - Chen, X., Rubin, K., Sun, Y. Social Reputation and Peer Relationships in Chinese and Canadian Children: A Cross-Cultural Study. 1992. - Kenneth H. Rubin and Robert J. Coplan, ed. (2010). "10". The Development of Shyness and Social Withdrawal. New York, NY: The Guilford Press. pp. 213–227. ISBN 978-1-60623-522-5. Retrieved 17 January 2014. - p. 162, Benson, Arthur C. 1908. Arthur C. Benson At Large Number XI Shyness. Putnam's Monthly and The Reader, A Magazine of Literature, Art and Life. Volume IV. New Rochelle, New York: G.P. Putnam's Sons, The Knickerbocker Press. - pp. 162-165, Benson, Arthur C. 1908. Arthur C. Benson At Large Number XI Shyness. Putnam's Monthly and The Reader, A Magazine of Literature, Art and Life. Volume IV. New Rochelle, New York: G.P. Putnam's Sons, The Knickerbocker Press. - Moran, Joe (17 July 2013). "The crystalline wall". Aeon. Archived from the original on 16 August 2013. - "How the students' culture effects their behavior". Teaching from a Hispanic perspective a handbook for non-Hispanic adult educators. Retrieved 2 March 2013. - Rubin, Kenneth; Sheryl A. Hemphill; Xinyin Chen; Paul Hastings (May 2006). "A cross-cultural study of behavioral inhibition in toddlers: East-West-North-South" (PDF). International Journal of Behavioral Development 30 (3): 119–125. doi:10.1177/0165025406066723. Retrieved 22 February 2013. - Findlay, L.C.; Coplan, R.J. (2008). "Come out and play: Shyness in childhood and the benefits of organized sports participation". Canadian Journal of Behavioural Science 40 (3): 153–161. doi:10.1037/0008-400x.40.3.153. - Crozier, W. R. (2001). Understanding Shyness: psychological perspectives. Basingstoke: Palgrave. ISBN 0-333-77371-3. - Keillor, Garrison. "Shy rights: why not pretty soon?". Happy to be Here. London: Faber. pp. 209–216. ISBN 0571146961. - Kluger, A. N.; Siegfried, Z; Ebstein, R. P. (2002).
"A meta-analysis of the association between DRD4 polymorphism and novelty seeking". Molecular Psychiatry 7 (7): 712–717. doi:10.1038/sj.mp.4001082. PMID 12192615. - Lane, Christopher (2008). Shyness: How Normal Behavior Became a Sickness. New Haven: Yale University Press. ISBN 9780300124460. - Lesch, Klaus-Peter; Bengal, Dietmar; Heils, Armin; Sabol, Sue Z.; Greenberg, Benjamin D.; Petri, Susanne; Benjamin, Jonathan; Muller, Clemens R.; Hamer, Dean H.; Murphy, Dennis L. (1996). "Association of anxiety-related traits with a polymorphism in the serotonin transporter gene regulatory region". Science 274 (5292): 1527–1531. Bibcode:1996Sci...274.1527L. doi:10.1126/science.274.5292.1527. PMID 8929413. - Miller, Rowland S.; Perlman, Daniel; Brehn, Sharon S. (2007). Intimate Relationships (4th ed.). Boston: McGraw-Hill. p. 430. ISBN 9780072938012. - Rubin, Kenneth H. (2003). The Friendship Factor. New York: Penguin Paperbacks. ISBN 0142001899. - Rubin, Kenneth H.; Coplan, Robert J. (2010). The Development of Shyness and Social Withdrawal. New York: Guilford. ISBN 1606235222. - Scott, Susie (2007). Shyness and Society: the illusion of competence. Basingstoke: Palgrave Macmillan. ISBN 9781403996039. - Zimbardo, Philip G. (1977). Shyness: what it is, what to do about it. Reading, Mass.: Addison-Wesley. ISBN 9780201550184. - Media related to Shyness at Wikimedia Commons - Lynn Henderson and Philip Zimbardo: "Shyness". Entry in Encyclopedia of Mental Health, Academic Press, San Diego, CA (in press) - Liebowitz Social Anxiety Scale (LSAS-SR)+ - SHY United - Information and support site with articles and community forums / chat room for shy people experience shyness and social anxiety - Shyness and Social Phobia - information from mental health charity The Royal College of Psychiatrists - Social Anxiety Anonymous / Social Phobics Anonymous - International network of 12 Step support groups for people suffering from shyness problems and/or social anxiety disorder/social phobia
https://en.wikipedia.org/wiki/Shyness
4.15625
October 19, 2012 Causes and Effects of Gender Inequality Throughout history, countless acts of gender inequality can be identified; these discriminating acts can be traced back to several causes. The inequity rests largely on the belief that men are superior to women; because of this idea, women have spent generations suffering under their counterparts. Also, a common expectation is that men tend to be more assertive and absolute because of their biological hormones or instinctive intellect. Another major origin is sexual discrimination; even in the world today, many women are viewed by men as mere sex objects rather than real human beings with standards and morals, and due to carnal ideas, some men can also suffer from a gender stereotype. However it may occur, the main causes of gender differences can be traced back to a belief in male dominance, the biological hormones and intelligence of men and women, and sexual themes. The presumption of male dominance has existed for a long time; in ancient Greece, men ruled the cities while the women had to support the home. Medieval society, much like Greece's, was completely dominated by men; according to Sally Smith's "Women and Power in the Late Medieval English Village: a reconsideration", "Women carried out the majority of tasks that took place in the medieval house, such as cooking, cleaning and activities associated with child rearing." During this era, men set a list of laws that prohibited women from marrying without their parents' consent, owning businesses, owning property unless they were widows, and having part in politics. Men, on the other hand, had all of these privileges. Women have slowly been able to obtain their rights. In the first half of the twentieth century, there were a few instances in which women got to assume the same roles as men; for example, during World War I, the men of the world went off to war, leaving their jobs unoccupied. The only source of labor that countries could rely on was women; therefore, women were granted jobs that were previously reserved for men. Although they have been given more rights and equality, women still lack fairness in areas such as education, domestic abuse, crime, and lower class value. Cassandra Clifford states in her article "Are Girls still marginalized? Discrimination and Gender Inequality in Today's Society", "Woman and girls are abused by their husbands and fathers, young girls are exploited by sex tourism and trafficking, girls in many countries are forced into arranged marriages at early ages. Twice as many women are illiterate as men, due to the large gap in education, and girls are still less likely to get jobs and excel in the work place than boys." She describes some of the issues that women face today around the world. These issues are what keep society from coming together to form a better world. Today, women have more rights than ever before, but the belief in male dominance has resulted in a never-ending condescension toward women. This leads younger girls to the predetermined thought that they must accept an inferior role. Clifford states in her article, "Children look first to their own parents for examples and inspiration, therefore when a child see their mother living a life of inequality, the cycle often continues as girls feel there is no alternative for themselves." When younger girls see their mother or any woman submitting to this standard, they feel they must do the same.
An effect on men is that they have to live up to the standard of the superior gender; if they do not meet the general criteria, their confidence may be destroyed or they may be ridiculed by other males. There are many times when a man feels inferior to a woman who has a more masculine job or has a better salary. For example, a man who is a nurse may feel inferior to a woman who is an engineer; society has developed a general routine that keeps the man on...
http://www.studymode.com/essays/Causes-And-Effects-Of-Gender-Inequality-1210229.html
4.0625
2012 marked a new record low for the extent of Arctic sea ice, but apparently that's not a problem. We can just refreeze it! Reducing carbon dioxide emissions is the key to a lasting solution to 'human-enhanced' climate change; however, since governments and industries aren't doing a very good job of meeting reduction goals, strategies to reduce the worst effects of climate change may be needed. Dr. David Keith, a Canadian physicist, climate scientist and public policy expert who teaches at Harvard University, has done extensive research into the field of Solar Radiation Management, which involves different ways of reducing the amount of solar radiation that reaches the Earth's surface. The concept behind solar radiation management is fairly basic: introduce a substance into the environment that will reflect more sunlight back into space, and the resulting reduction in the amount of sunlight that reaches the surface will cause an immediate temperature drop in the affected region. One method of doing this involves spraying reflective aerosols — tiny drops of liquid about the same size as those that make up clouds, such as sulphur dioxide or titanium dioxide — into the stable stratosphere, where they can persist for years. Similar aerosols injected into any level of the troposphere (the lowest level of the atmosphere, where all weather happens) would quickly get caught up in the turbulent weather that we see every day and would not last long enough to help reduce incoming sunlight. Would this really work? Studying the effects of volcanic eruptions (which is where they got the idea from in the first place) and using computer model simulations have given scientists plenty of evidence that it will. Some approaches to solar radiation management have tried to deal with the situation on a global scale, with talk of releasing a million tons of sulphur dioxide into the stratosphere to lower the temperature around the world. However, these ideas have come under criticism, because of the potential for unforeseen consequences. For example, it has been suggested that introducing sulphur dioxide into the stratosphere could destroy the Earth's protective ozone layer, exposing us to dangerous ultraviolet radiation from the Sun. Dr. Keith and his colleagues suggest that much better results could be achieved, with a minimum of risk, by only using solar radiation management on a regional scale. Therefore, rather than spread the reflective substance across the entire stratosphere, we would only use it over the area that needed it. They used a selected climate model to simulate these regional changes, compared to a uniform global change, and according to CalTech News, "it took five times less solar reduction than in the uniform reflectance models to recover the Arctic sea ice to the extent typical of pre-Industrial years." Injecting just five metric tons of these reflective aerosols into the Arctic stratosphere could lower solar radiation levels over the Arctic Ocean enough to refreeze it and allow it to remain frozen. Before you get too alarmed by that five metric tons, the latest official figures from the US EPA show that in 1999, industry released over 17 million metric tons of sulphur dioxide into the troposphere. There are downsides to the plan, of course. Likely no surprise to anyone, it is going to cost money. Compared to how much the effects of climate change are projected to cost us, or what the costs of reducing emissions will be, though, it is a drop in the bucket. Dr.
Keith, along with Justin McClellan, from the Aurora Flight Science Corporation in Cambridge, Massachusetts, and Jay Apt, from Carnegie Mellon University's Tepper School of Business and Department of Engineering and Public Policy, published a cost-analysis report in the journal Environmental Research Letters, in August of this year. Their report states that the technology to deliver these materials to the right altitude and location already exists, and by modifying existing aircraft to act as the delivery method, the entire effort of running the program would cost between $5-8 billion per year (depending on the method of delivery), with the majority of that cost going towards buying or producing the sulphur dioxide itself. According to the same report (referencing from the 2007 IPCC report) "the costs of climate damages or of emission mitigation are commonly estimated to be 0.2—2.5% of 2030 global GDP... equivalent to roughly $200B to $2000B per year. Our estimates of the cost of delivering mass to the stratosphere — likely to be the most substantial part of the cost of SRM deployment — are less than 1% of this figure." So, we can do this, and compared to the alternatives, it is fairly cost effective. However, is this something we should be doing? From the standpoint of the effect of having sea ice as opposed to not having sea ice, we should choose to have the sea ice. Without it, global temperatures will rise even faster than they are now. When the sea ice is there, it reflects back solar radiation into space and limits the amount of warming there is of the planet. Take that sea ice away and the darker water absorbs a large percentage of the incoming solar radiation. This will not only contribute to more melting of sea ice, but will give a generally warmer atmosphere and, as the water warms, it will expand, causing further rises in sea level. There is the risk of destroying the stratospheric ozone layer, especially if these reflective aerosols get into the Antarctic stratospheric clouds that accumulate during the winter, which are the primary cause of the Antarctic ozone hole. These chemicals, in higher concentrations, would enhance the destruction of ozone and make the ozone hole even larger. However, using a regional scale approach would allow us to limit the concentrations of the aerosols, and thus limit the damage they cause. There's one other problem with this idea, though — a general tendency towards quick fixes. Peter Mooney, with Ottawa's Etc Group, which monitors the effects of technology and corporate strategies on society and the environment, told The National Post, "It's naive to think that once [solar radiation management] becomes a political option that governments won't just take it on and interpret it as they wish. They will always find scientists who will give them the spin that they want." "[We shouldn't be] opening up the back door for politicians to creep out of, claiming that, 'Don't worry folks. We don't need to do anything because we have technological fixes that we can deploy on short notice.'"
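The ice-albedo feedback described above is easy to make concrete with a rough calculation. The sketch below is illustrative only: the albedo values (about 0.6 for sea ice, 0.06 for open ocean) and the summertime insolation figure are typical textbook approximations, not numbers taken from Dr. Keith's work or the cited reports.

```python
# Rough illustration of the ice-albedo feedback: how much more solar
# energy dark open water absorbs compared to ice-covered ocean.
# All values are assumed textbook approximations, not from the article.
ALBEDO_SEA_ICE = 0.6       # fraction of sunlight reflected by bare sea ice
ALBEDO_OPEN_OCEAN = 0.06   # fraction reflected by dark open water
SUMMER_INSOLATION = 300.0  # W/m^2, rough average Arctic summer value

def absorbed_flux(albedo, insolation):
    """Solar flux absorbed by a surface: whatever is not reflected."""
    return (1.0 - albedo) * insolation

ice = absorbed_flux(ALBEDO_SEA_ICE, SUMMER_INSOLATION)
water = absorbed_flux(ALBEDO_OPEN_OCEAN, SUMMER_INSOLATION)
print(f"Ice-covered ocean absorbs: {ice:.0f} W/m^2")
print(f"Open ocean absorbs:        {water:.0f} W/m^2")
print(f"Extra absorption when ice is lost: {water - ice:.0f} W/m^2")
```

With these assumed numbers, open water absorbs more than twice the energy that ice does, which is the self-reinforcing loop the article describes: less ice means more absorbed heat, which melts more ice.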
https://ca.news.yahoo.com/blogs/geekquinox/record-loss-arctic-sea-ice-no-problem-just-155430301.html
4
Extracellular Fluid—The “Internal Environment” About 60 per cent of the adult human body is fluid, mainly a water solution of ions and other substances. Although most of this fluid is inside the cells and is called intracellular fluid, about one third is in the spaces outside the cells and is called extracellular fluid. This extracellular fluid is in constant motion throughout the body. It is transported rapidly in the circulating blood and then mixed between the blood and the tissue fluids by diffusion through the capillary walls. In the extracellular fluid are the ions and nutrients needed by the cells to maintain cell life. Thus, all cells live in essentially the same environment—the extracellular fluid. For this reason, the extracellular fluid is also called the internal environment of the body, or the milieu intérieur, a term introduced more than 100 years ago by the great 19th-century French physiologist Claude Bernard. Cells are capable of living, growing, and performing their special functions as long as the proper concentrations of oxygen, glucose, different ions, amino acids, fatty substances, and other constituents are available in this internal environment. Differences Between Extracellular and Intracellular Fluids. The extracellular fluid contains large amounts of sodium, chloride, and bicarbonate ions plus nutrients for the cells, such as oxygen, glucose, fatty acids, and amino acids. It also contains carbon dioxide that is being transported from the cells to the lungs to be excreted, plus other cellular waste products that are being transported to the kidneys for excretion. The intracellular fluid differs significantly from the extracellular fluid; specifically, it contains large amounts of potassium, magnesium, and phosphate ions instead of the sodium and chloride ions found in the extracellular fluid. Special mechanisms for transporting ions through the cell membranes maintain the ion concentration differences between the extracellular and intracellular fluids. Extracellular Fluid Transport and Mixing System—The Blood Circulatory System Extracellular fluid is transported through all parts of the body in two stages. The first stage is movement of blood through the body in the blood vessels, and the second is movement of fluid between the blood capillaries and the intercellular spaces between the tissue cells. Figure 1–1 shows the overall circulation of blood. All the blood in the circulation traverses the entire circulatory circuit an average of once each minute when the body is at rest and as many as six times each minute when a person is extremely active. As blood passes through the blood capillaries, continual exchange of extracellular fluid also occurs between the plasma portion of the blood and the interstitial fluid that fills the intercellular spaces. This process is shown in Figure 1–2. The walls of the capillaries are permeable to most molecules in the plasma of the blood, with the exception of the large plasma protein molecules. Therefore, large amounts of fluid and its dissolved constituents diffuse back and forth between the blood and the tissue spaces, as shown by the arrows. This process of diffusion is caused by kinetic motion of the molecules in both the plasma and the interstitial fluid. That is, the fluid and dissolved molecules are continually moving and bouncing in all directions within the plasma and the fluid in the intercellular spaces, and also through the capillary pores. 
Few cells are located more than 50 micrometers from a capillary, which ensures diffusion of almost any substance from the capillary to the cell within a few seconds. Thus, the extracellular fluid everywhere in the body—both that of the plasma and that of the interstitial fluid—is continually being mixed, thereby maintaining almost complete homogeneity of the extracellular fluid throughout the body. Origin of Nutrients in the Extracellular Fluid Respiratory System. Figure 1–1 shows that each time the blood passes through the body, it also flows through the lungs. The blood picks up oxygen in the alveoli, thus acquiring the oxygen needed by the cells. The membrane between the alveoli and the lumen of the pulmonary capillaries, the alveolar membrane, is only 0.4 to 2.0 micrometers thick, and oxygen diffuses by molecular motion through the pores of this membrane into the blood in the same manner that water and ions diffuse through walls of the tissue capillaries. Gastrointestinal Tract. A large portion of the blood pumped by the heart also passes through the walls of the gastrointestinal tract. Here different dissolved nutrients, including carbohydrates, fatty acids, and amino acids, are absorbed from the ingested food into the extracellular fluid of the blood. Liver and Other Organs That Perform Primarily Metabolic Functions. Not all substances absorbed from the gastrointestinal tract can be used in their absorbed form by the cells. The liver changes the chemical compositions of many of these substances to more usable forms, and other tissues of the body—fat cells, gastrointestinal mucosa, kidneys, and endocrine glands—help modify the absorbed substances or store them until they are needed. Musculoskeletal System. Sometimes the question is asked, How does the musculoskeletal system fit into the homeostatic functions of the body? The answer is obvious and simple: Were it not for the muscles, the body could not move to the appropriate place at the appropriate time to obtain the foods required for nutrition. The musculoskeletal system also provides motility for protection against adverse surroundings, without which the entire body, along with its homeostatic mechanisms, could be destroyed instantaneously. Removal of Metabolic End Products Removal of Carbon Dioxide by the Lungs. At the same time that blood picks up oxygen in the lungs, carbon dioxide is released from the blood into the lung alveoli; the respiratory movement of air into and out of the lungs carries the carbon dioxide to the atmosphere. Carbon dioxide is the most abundant of all the end products of metabolism. Kidneys. Passage of the blood through the kidneys removes from the plasma most of the other substances besides carbon dioxide that are not needed by the cells. These substances include different end products of cellular metabolism, such as urea and uric acid; they also include excesses of ions and water from the food that might have accumulated in the extracellular fluid. The kidneys perform their function by first filtering large quantities of plasma through the glomeruli into the tubules and then reabsorbing into the blood those substances needed by the body, such as glucose, amino acids, appropriate amounts of water, and many of the ions. Most of the other substances that are not needed by the body, especially the metabolic end products such as urea, are reabsorbed poorly and pass through the renal tubules into the urine.
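The filter-then-reabsorb strategy just described can be illustrated with a simple mass balance: what appears in the urine is the filtered load minus what the tubules reclaim. The sketch below is a rough illustration; the glomerular filtration rate (about 180 liters of plasma per day) is a standard textbook value, while the plasma concentrations and reabsorbed fractions are simplified assumptions for demonstration, not figures from this chapter.

```python
# Simple renal mass balance: excreted = filtered load - amount reabsorbed.
# All numbers are rough approximations for a healthy adult (assumed).
GFR_L_PER_DAY = 180.0  # ~125 mL/min of plasma filtered, i.e. ~180 L/day

substances = {
    # name: (plasma concentration in g/L, fraction reabsorbed by tubules)
    "glucose": (1.00, 1.00),  # needed by the body: essentially all reclaimed
    "urea":    (0.26, 0.50),  # waste product: reabsorbed poorly
}

for name, (plasma_g_per_l, frac_reabsorbed) in substances.items():
    filtered = GFR_L_PER_DAY * plasma_g_per_l   # g/day entering the tubules
    reabsorbed = filtered * frac_reabsorbed     # g/day returned to the blood
    excreted = filtered - reabsorbed            # g/day passing into the urine
    print(f"{name:8s} filtered {filtered:6.1f} g/day, "
          f"reabsorbed {reabsorbed:6.1f} g/day, excreted {excreted:5.1f} g/day")
```

Run with these assumptions, glucose is filtered in large amounts yet none reaches the urine, while roughly half the filtered urea is excreted, exactly the selective behavior described above.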
The concept of homeostasis was first articulated by the French scientist Claude Bernard (1813-1878) in his studies of the maintenance of stability in the "milieu intérieur." He said, "All the vital mechanisms, varied as they are, have only one object, that of preserving constant the conditions of life in the internal environment" (from Leçons sur les Phénomènes de la Vie Commune aux Animaux et aux Végétaux, 1879). The term itself was coined by American physiologist Walter Cannon, author of The Wisdom of the Body (1932). The word comes from the Greek homoios (same, like, resembling) and stasis (to stand, posture). [Figure: A schematic of homeostasis. Changes in the environment are transduced to cause a change in the level of a regulated substance. This change is detected through measurement and comparison with a coded set-point value. Disparities between the measured value and the set-point value regulate a response mechanism that directly or indirectly influences effector systems at the exterior–interior interface. Homeostatic systems often require fuel and other support mechanisms and interact with other systems.] What is Homeostasis? Homeostasis in a general sense refers to stability, balance or equilibrium. Maintaining a stable internal environment requires constant monitoring and adjustments as conditions change. This adjusting of physiological systems within the body is called homeostatic regulation. Homeostatic regulation involves three parts or mechanisms: 1) the receptor, 2) the control center and 3) the effector. The receptor receives information that something in the environment is changing. The control center or integration center receives and processes information from the receptor. And lastly, the effector responds to the commands of the control center by either opposing or enhancing the stimulus. A metaphor to help us understand this process is the operation of a thermostat. The thermostat monitors and controls room temperature. The thermostat is set at a certain temperature that is considered ideal, the set point. The function of the thermostat is to keep the temperature in the room within a few degrees of the set point. If the room is colder than the set point, the thermostat receives information from the thermometer (the receptor) that it is too cold. The effectors within the thermostat then will turn on the heat to warm up the room. When the room temperature reaches the set point, the receptor receives the information, and the thermostat "tells" the heater to turn off. This also works when it is too hot in the room. The thermostat receives the information and turns on the air conditioner. When the set point temperature is reached, the thermostat turns off the air conditioner. Our bodies control body temperature in a similar way. The brain is the control center, the receptor is our body's temperature sensors, and the effector is our blood vessels and sweat glands in our skin. When we feel heat, the temperature sensors in our skin send the message to our brain. Our brain then sends the message to the sweat glands to increase sweating and increase blood flow to our skin. When we feel cold, the opposite happens. Our brain sends a message to our sweat glands to decrease sweating, decrease blood flow, and begin shivering. This is an ongoing process that continually works to restore and maintain homeostasis.
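The receptor, control center, and effector loop described above maps naturally onto a few lines of code. This toy simulation follows the thermostat metaphor from the text; the set point, tolerance, drift, and heating/cooling rates are arbitrary illustrative choices, not physiological values.

```python
# Toy negative-feedback loop modeled on the thermostat metaphor:
# a receptor measures the room, the control center compares the reading
# with the set point, and the effector acts to reverse any deviation.
SET_POINT = 70.0  # desired room temperature (degrees F)
TOLERANCE = 2.0   # allowed deviation before the effectors act

def control_center(measured_temp):
    """Compare the receptor's reading to the set point; command effectors."""
    error = measured_temp - SET_POINT
    if error < -TOLERANCE:
        return +1.5   # too cold: turn the heater on
    if error > TOLERANCE:
        return -1.5   # too hot: turn the air conditioner on
    return 0.0        # within tolerance: effectors stay idle

temperature = 62.0    # start the room well below the set point
for minute in range(12):
    drift = -0.3      # the environment constantly pulls the room colder
    temperature += drift + control_center(temperature)
    print(f"minute {minute:2d}: {temperature:5.1f} F")
```

Running the loop shows the temperature climbing back toward the set point and then hovering near it: whenever the reading drifts outside the tolerance band, the effector pushes in the opposite direction, which is the essence of negative feedback.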
Because the internal and external environments of the body are constantly changing, and adjustments must be made continuously to stay at or near the set point, homeostasis can be thought of as a dynamic equilibrium.

Positive and Negative Feedback

When a change of variable occurs, there are two main types of feedback to which the system reacts:

• Negative feedback: a reaction in which the system responds in such a way as to reverse the direction of change. Since this tends to keep things constant, it allows the maintenance of homeostasis. For instance, when the concentration of carbon dioxide in the human body increases, the lungs are signaled to increase their activity and expel more carbon dioxide. Thermoregulation is another example of negative feedback. When body temperature rises (or falls), receptors in the skin and the hypothalamus sense the change, triggering a command from the brain. This command, in turn, effects the correct response, in this case a decrease in body temperature.

• Home heating system vs. negative feedback: When you are home, you set your thermostat to a desired temperature. Let's say today you set it at 70 degrees. The thermometer in the thermostat waits to sense a temperature change either too far above or too far below the 70-degree set point. When this change happens, the thermometer sends a message to the "control center," or thermostat, which in turn sends a message to the furnace to either shut off if the temperature is too high or kick back on if the temperature is too low. In the home-heating example the air temperature is the "negative feedback." When the control center receives negative feedback, it triggers a chain reaction in order to maintain room temperature.

• Positive feedback: a response that amplifies the change in the variable. This has a destabilizing effect, so it does not result in homeostasis. Positive feedback is less common in naturally occurring systems than negative feedback, but it has its applications. For example, in nerves, a threshold electric potential triggers the generation of a much larger action potential. Blood clotting and events in childbirth are other types of positive feedback.

• Harmful positive feedback: Although positive feedback is needed within homeostasis, it can also be harmful at times. A high fever causes a metabolic change that can push the fever higher and higher. In rare occurrences, if the body temperature reaches 113 °F (45 °C), cellular proteins stop working and metabolism stops, resulting in death.

Summary: Sustainable systems require combinations of both kinds of feedback. Generally, with the recognition of divergence from the homeostatic condition, positive feedbacks are called into play, whereas once the homeostatic condition is approached, negative feedback is used for "fine tuning" responses. This creates a situation of "metastability," in which homeostatic conditions are maintained within fixed limits, but once these limits are exceeded, the system can shift wildly to a wholly new (and possibly less desirable) situation of homeostasis.

Homeostatic systems have several properties:

• They are ultra-stable, meaning the system is capable of testing which way its variables should be adjusted.

• Their whole organization (internal, structural, and functional) contributes to the maintenance of equilibrium.

Physiology is largely a study of processes related to homeostasis. Some of the functions you will learn about in this book are not specifically about homeostasis (e.g.
how muscles contract), but in order for all bodily processes to function there must be a suitable internal environment. Homeostasis is, therefore, a fitting framework for the introductory study of physiology.

Pathways That Alter Homeostasis

A variety of homeostatic mechanisms maintain the internal environment within tolerable limits. Either homeostasis is maintained through a series of control mechanisms, or the body suffers various illnesses or disease. When the cells in your body begin to malfunction, the homeostatic balance becomes disrupted, and disease eventually follows. Disease and cellular malfunction can be caused in two basic ways: either deficiency (cells not getting all they need) or toxicity (cells being poisoned by things they do not need). When homeostasis is interrupted in your cells, there are pathways that can correct or worsen the problem. In addition to the internal control mechanisms, there are external influences, based primarily on lifestyle choices and environmental exposures, that affect our body's ability to maintain cellular health.

• Nutrition: If your diet is lacking in a specific vitamin or mineral your cells will function poorly, possibly resulting in a disease condition. For example, a menstruating woman with inadequate dietary intake of iron will become anemic. Lack of hemoglobin, a molecule that requires iron, will result in reduced oxygen-carrying capacity. In mild cases symptoms may be vague (e.g. fatigue), but if the anemia is severe the body will try to compensate by increasing cardiac output, leading to palpitations and sweatiness, and possibly to heart failure.

• Toxins: Any substance that interferes with cellular function causes cellular malfunction. Harm can arrive in a variety of ways: through chemicals, plants, insecticides, or bites and stings. A commonly seen example is a drug overdose. When a person takes too much of a drug, vital signs begin to waver, either increasing or decreasing; these disturbances can cause problems including coma, brain damage, and even death.

• Psychological: Your physical health and mental health are inseparable. Our thoughts and emotions cause chemical changes to take place, either for better, as with meditation, or for worse, as with stress.

• Physical: Physical maintenance is essential for our cells and bodies. Adequate rest, sunlight, and exercise are examples of physical mechanisms for influencing homeostasis. Lack of sleep is related to a number of ailments such as irregular cardiac rhythms, fatigue, anxiety and headaches.

• Genetic/Reproductive: Inheriting strengths and weaknesses can be part of our genetic makeup. Genes are sometimes turned off or on due to external factors over which we have some control, but at other times little can be done to correct or improve genetic diseases. Beginning at the cellular level, a variety of diseases come from mutated genes. For example, cancer can be genetically inherited, or it can be caused by a mutation from an external source such as radiation, or by genes altered in a fetus when the mother uses drugs.

• Medical: Because of genetic differences some bodies need help in gaining or maintaining homeostasis. Through modern medicine our bodies can be given different aids, from antibodies to help fight infections to chemotherapy to kill harmful cancer cells. Traditional and alternative medical practices have many benefits, but the potential for harmful effects is also present.
Whether by nosocomial infections or the wrong dosage of medication, homeostasis can be altered by the very thing that is trying to fix it. Trial and error with medications can cause potentially harmful reactions and possibly death if not caught soon enough.

The factors listed above all have their effects at the cellular level, whether harmful or beneficial. Inadequate beneficial pathways (deficiency) will almost always result in a harmful waver in homeostasis. Too much toxicity also causes homeostatic imbalance, resulting in cellular malfunction. By removing negative health influences, and providing adequate positive health influences, your body is better able to self-regulate and self-repair, thus maintaining homeostasis.

Control Systems of the Body

The human body has thousands of control systems in it. The most intricate of these are the genetic control systems that operate in all cells to help control intracellular function as well as extracellular function. This subject is discussed in Chapter 3. Many other control systems operate within the organs to control functions of the individual parts of the organs; others operate throughout the entire body to control the interrelations between the organs. For instance, the respiratory system, operating in association with the nervous system, regulates the concentration of carbon dioxide in the extracellular fluid. The liver and pancreas regulate the concentration of glucose in the extracellular fluid, and the kidneys regulate concentrations of hydrogen, sodium, potassium, phosphate, and other ions in the extracellular fluid.

Examples of Control Mechanisms

Regulation of Oxygen and Carbon Dioxide Concentrations in the Extracellular Fluid. Because oxygen is one of the major substances required for chemical reactions in the cells, it is fortunate that the body has a special control mechanism to maintain an almost exact and constant oxygen concentration in the extracellular fluid. This mechanism depends principally on the chemical characteristics of hemoglobin, which is present in all red blood cells. Hemoglobin combines with oxygen as the blood passes through the lungs. Then, as the blood passes through the tissue capillaries, hemoglobin, because of its own strong chemical affinity for oxygen, does not release oxygen into the tissue fluid if too much oxygen is already there. But if the oxygen concentration in the tissue fluid is too low, sufficient oxygen is released to re-establish an adequate concentration. Thus, regulation of oxygen concentration in the tissues is vested principally in the chemical characteristics of hemoglobin itself. This regulation is called the oxygen-buffering function of hemoglobin.

Carbon dioxide concentration in the extracellular fluid is regulated in a much different way. Carbon dioxide is a major end product of the oxidative reactions in cells. If all the carbon dioxide formed in the cells continued to accumulate in the tissue fluids, the mass action of the carbon dioxide itself would soon halt all energy-giving reactions of the cells. Fortunately, a higher than normal carbon dioxide concentration in the blood excites the respiratory center, causing a person to breathe rapidly and deeply. This increases expiration of carbon dioxide and, therefore, removes excess carbon dioxide from the blood and tissue fluids. This process continues until the concentration returns to normal.

Regulation of Arterial Blood Pressure. Several systems contribute to the regulation of arterial blood pressure.
One of these, the baroreceptor system, is a simple and excellent example of a rapidly acting control mechanism. In the walls of the bifurcation region of the carotid arteries in the neck, and also in the arch of the aorta in the thorax, are many nerve receptors called baroreceptors, which are stimulated by stretch of the arterial wall. When the arterial pressure rises too high, the baroreceptors send barrages of nerve impulses to the medulla of the brain. Here these impulses inhibit the vasomotor center, which in turn decreases the number of impulses transmitted from the vasomotor center through the sympathetic nervous system to the heart and blood vessels. Lack of these impulses causes diminished pumping activity by the heart and also dilation of the peripheral blood vessels, allowing increased blood flow through the vessels. Both of these effects decrease the arterial pressure back toward normal. Conversely, a decrease in arterial pressure below normal relaxes the stretch receptors, allowing the vasomotor center to become more active than usual, thereby causing vasoconstriction and increased heart pumping, and raising arterial pressure back toward normal.

Normal Ranges and Physical Characteristics of Important Extracellular Fluid Constituents

Table 1–1 lists the more important constituents and physical characteristics of extracellular fluid, along with their normal values, normal ranges, and maximum limits without causing death. Note the narrowness of the normal range for each one. Values outside these ranges are usually caused by illness. Most important are the limits beyond which abnormalities can cause death. For example, an increase in the body temperature of only 11°F (7°C) above normal can lead to a vicious cycle of increasing cellular metabolism that destroys the cells. Note also the narrow range for acid-base balance in the body, with a normal pH value of 7.4 and lethal values only about 0.5 on either side of normal. Another important factor is the potassium ion concentration, because whenever it decreases to less than one third normal, a person is likely to be paralyzed as a result of the nerves’ inability to carry signals. Alternatively, if the potassium ion concentration increases to two or more times normal, the heart muscle is likely to be severely depressed. Also, when the calcium ion concentration falls below about one half of normal, a person is likely to experience tetanic contraction of muscles throughout the body because of the spontaneous generation of excess nerve impulses in the peripheral nerves. When the glucose concentration falls below one half of normal, a person frequently develops extreme mental irritability and sometimes even convulsions. These examples should give one an appreciation for the extreme value and even the necessity of the vast numbers of control systems that keep the body operating in health; in the absence of any one of these controls, serious body malfunction or death can result.

Characteristics of Control Systems

The aforementioned examples of homeostatic control mechanisms are only a few of the many thousands in the body, all of which have certain characteristics in common. These characteristics are explained in this section.

Negative Feedback Nature of Most Control Systems

Most control systems of the body act by negative feedback, which can best be explained by reviewing some of the homeostatic control systems mentioned previously.
In the regulation of carbon dioxide concentration, a high concentration of carbon dioxide in the extracellular fluid increases pulmonary ventilation. This, in turn, decreases the extracellular fluid carbon dioxide concentration because the lungs expire greater amounts of carbon dioxide from the body. In other words, the high concentration of carbon dioxide initiates events that decrease the concentration toward normal, which is negative to the initiating stimulus. Conversely, if the carbon dioxide concentration falls too low, this causes feedback to increase the concentration. This response also is negative to the initiating stimulus.

In the arterial pressure–regulating mechanisms, a high pressure causes a series of reactions that promote a lowered pressure, or a low pressure causes a series of reactions that promote an elevated pressure. In both instances, these effects are negative with respect to the initiating stimulus. Therefore, in general, if some factor becomes excessive or deficient, a control system initiates negative feedback, which consists of a series of changes that return the factor toward a certain mean value, thus maintaining homeostasis.

“Gain” of a Control System. The degree of effectiveness with which a control system maintains constant conditions is determined by the gain of the negative feedback. For instance, let us assume that a large volume of blood is transfused into a person whose baroreceptor pressure control system is not functioning, and the arterial pressure rises from the normal level of 100 mm Hg up to 175 mm Hg. Then, let us assume that the same volume of blood is injected into the same person when the baroreceptor system is functioning, and this time the pressure increases only 25 mm Hg. Thus, the feedback control system has caused a “correction” of –50 mm Hg—that is, from 175 mm Hg to 125 mm Hg. There remains an increase in pressure of +25 mm Hg, called the “error,” which means that the control system is not 100 per cent effective in preventing change. The gain of the system is then calculated by the following formula:

Gain = Correction / Error

Thus, in the baroreceptor system example, the correction is –50 mm Hg and the error persisting is +25 mm Hg. Therefore, the gain of the person’s baroreceptor system for control of arterial pressure is –50 divided by +25, or –2. That is, a disturbance that increases or decreases the arterial pressure does so only one third as much as would occur if this control system were not present. The gains of some other physiologic control systems are much greater than that of the baroreceptor system. For instance, the gain of the system controlling internal body temperature when a person is exposed to moderately cold weather is about –33. Therefore, one can see that the temperature control system is much more effective than the baroreceptor pressure control system.
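The gain formula is simple enough to check by hand or in a few lines of code. The function below merely restates the chapter's arithmetic; the two arguments are the pressure rise without and with the feedback system operating, as in the transfusion example.

# Gain = Correction / Error, using the chapter's baroreceptor example.

def feedback_gain(rise_without_control, rise_with_control):
    error = rise_with_control                              # abnormality that persists
    correction = rise_with_control - rise_without_control  # change produced by the system
    return correction / error

# Without the reflex the pressure rises 75 mm Hg (100 -> 175 mm Hg);
# with the reflex it rises only 25 mm Hg (100 -> 125 mm Hg).
print(feedback_gain(75.0, 25.0))   # -2.0

By the same arithmetic, a gain of –33 for temperature control means the persisting error is only 1/34 of what the disturbance would otherwise produce, which is why the temperature system is called the more effective of the two.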
Positive Feedback Can Sometimes Cause Vicious Cycles and Death

One might ask the question, Why do essentially all control systems of the body operate by negative feedback rather than positive feedback? If one considers the nature of positive feedback, one immediately sees that positive feedback does not lead to stability but to instability and often death.

Figure 1–3 shows an example in which death can ensue from positive feedback. This figure depicts the pumping effectiveness of the heart, showing that the heart of a healthy human being pumps about 5 liters of blood per minute. If the person is suddenly bled 2 liters, the amount of blood in the body is decreased to such a low level that not enough blood is available for the heart to pump effectively. As a result, the arterial pressure falls, and the flow of blood to the heart muscle through the coronary vessels diminishes. This results in weakening of the heart, further diminished pumping, a further decrease in coronary blood flow, and still more weakness of the heart; the cycle repeats itself again and again until death occurs. Note that each cycle in the feedback results in further weakening of the heart. In other words, the initiating stimulus causes more of the same, which is positive feedback.

Positive feedback is better known as a “vicious cycle,” but a mild degree of positive feedback can be overcome by the negative feedback control mechanisms of the body, and the vicious cycle fails to develop. For instance, if the person in the aforementioned example were bled only 1 liter instead of 2 liters, the normal negative feedback mechanisms for controlling cardiac output and arterial pressure would overbalance the positive feedback and the person would recover, as shown by the dashed curve of Figure 1–3.
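The difference between the fatal 2-liter hemorrhage and the recoverable 1-liter one can be mimicked with a toy simulation. Everything numerical here is invented for illustration (the threshold, the rates, the linear feedback terms); only the qualitative behavior (recovery on one side of a threshold, a runaway vicious cycle on the other) reflects the text and Figure 1–3.

# Toy model of the hemorrhage example. All numbers are assumed.

def pumping_after_bleed(liters_lost, steps=200):
    pumping = 5.0 - liters_lost        # cardiac output, L/min (illustrative)
    for _ in range(steps):
        if pumping >= 4.0:
            # negative feedback dominates: output is pulled back toward 5 L/min
            pumping += 0.2 * (5.0 - pumping)
        else:
            # below the threshold, weakness begets weakness (positive feedback)
            pumping -= 0.2 * (4.0 - pumping)
        pumping = max(pumping, 0.0)    # output cannot fall below zero
    return pumping

print(round(pumping_after_bleed(1.0), 2))  # ~5.0: recovers (dashed curve)
print(round(pumping_after_bleed(2.0), 2))  # 0.0: vicious cycle to death

The point of the sketch is the threshold: the same positive feedback loop is harmless when the body's negative feedback can overbalance it, and lethal when it cannot.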
Positive Feedback Can Sometimes Be Useful. In some instances, the body uses positive feedback to its advantage. Blood clotting is an example of a valuable use of positive feedback. When a blood vessel is ruptured and a clot begins to form, multiple enzymes called clotting factors are activated within the clot itself. Some of these enzymes act on other unactivated enzymes of the immediately adjacent blood, thus causing more blood clotting. This process continues until the hole in the vessel is plugged and bleeding no longer occurs. On occasion, this mechanism can get out of hand and cause the formation of unwanted clots. In fact, this is what initiates most acute heart attacks, which are caused by a clot beginning on the inside surface of an atherosclerotic plaque in a coronary artery and then growing until the artery is blocked.

Childbirth is another instance in which positive feedback plays a valuable role. When uterine contractions become strong enough for the baby’s head to begin pushing through the cervix, stretch of the cervix sends signals through the uterine muscle back to the body of the uterus, causing even more powerful contractions. Thus, the uterine contractions stretch the cervix, and the cervical stretch causes stronger contractions. When this process becomes powerful enough, the baby is born. If it is not powerful enough, the contractions usually die out, and a few days pass before they begin again.

Another important use of positive feedback is for the generation of nerve signals. That is, when the membrane of a nerve fiber is stimulated, this causes slight leakage of sodium ions through sodium channels in the nerve membrane to the fiber’s interior. The sodium ions entering the fiber then change the membrane potential, which in turn causes more opening of channels, more change of potential, still more opening of channels, and so forth. Thus, a slight leak becomes an explosion of sodium entering the interior of the nerve fiber, which creates the nerve action potential. This action potential in turn causes electrical current to flow along both the outside and the inside of the fiber and initiates additional action potentials. This process continues again and again until the nerve signal goes all the way to the end of the fiber.

In each case in which positive feedback is useful, the positive feedback itself is part of an overall negative feedback process. For example, in the case of blood clotting, the positive feedback clotting process is a negative feedback process for maintenance of normal blood volume. Also, the positive feedback that causes nerve signals allows the nerves to participate in thousands of negative feedback nervous control systems.

More Complex Types of Control Systems—Adaptive Control

Later in this text, when we study the nervous system, we shall see that this system contains great numbers of interconnected control mechanisms. Some are simple feedback systems similar to those already discussed. Many are not. For instance, some movements of the body occur so rapidly that there is not enough time for nerve signals to travel from the peripheral parts of the body all the way to the brain and then back to the periphery again to control the movement. Therefore, the brain uses a principle called feed-forward control to cause required muscle contractions. That is, sensory nerve signals from the moving parts apprise the brain whether the movement is performed correctly. If not, the brain corrects the feed-forward signals that it sends to the muscles the next time the movement is required. Then, if still further correction is needed, this will be done again for subsequent movements. This is called adaptive control. Adaptive control, in a sense, is delayed negative feedback.

Thus, one can see how complex the feedback control systems of the body can be. A person’s life depends on all of them. Therefore, a major share of this text is devoted to discussing these life-giving mechanisms.

Summary—Automaticity of the Body

The purpose of this chapter has been to point out, first, the overall organization of the body and, second, the means by which the different parts of the body operate in harmony. To summarize, the body is actually a social order of about 100 trillion cells organized into different functional structures, some of which are called organs. Each functional structure contributes its share to the maintenance of homeostatic conditions in the extracellular fluid, which is called the internal environment. As long as normal conditions are maintained in this internal environment, the cells of the body continue to live and function properly. Each cell benefits from homeostasis, and in turn, each cell contributes its share toward the maintenance of homeostasis. This reciprocal interplay provides continuous automaticity of the body until one or more functional systems lose their ability to contribute their share of function. When this happens, all the cells of the body suffer. Extreme dysfunction leads to death; moderate dysfunction leads to sickness.

Figure 49-3 Sympathetic and parasympathetic divisions of the autonomic nervous system. Sympathetic preganglionic neurons are clustered in the spinal cord, extending from the first thoracic spinal segment to upper lumbar segments; their axons project to ganglia in the sympathetic chain alongside the spinal cord. Parasympathetic preganglionic neurons are located within the brain stem and in segments S2-S4 of the spinal cord. The major targets of autonomic control are shown here.

The Autonomic Nervous System and the Hypothalamus
Susan Iversen, Leslie Iversen, Clifford B. Saper

WHEN WE ARE FRIGHTENED our heart races, our breathing becomes rapid and shallow, our mouth becomes dry, our muscles tense, our palms become sweaty, and we may want to run.
These bodily changes are mediated by the autonomic nervous system, which controls heart muscle, smooth muscle, and exocrine glands. The autonomic nervous system is distinct from the somatic nervous system, which controls skeletal muscle. As we shall learn in the next chapter, even though the neural control of emotion involves several regions, including the amygdala and the limbic association areas of the cerebral cortex, they all work through the hypothalamus to control the autonomic nervous system. The hypothalamus coordinates behavioral responses to ensure bodily homeostasis, the constancy of the internal environment. The hypothalamus, in turn, acts on three major systems: the autonomic nervous system, the endocrine system, and an ill-defined neural system concerned with motivation. In this chapter we shall first examine the autonomic nervous system and then go on to consider the hypothalamus. In the next two chapters, we shall examine emotion and motivation, behavioral states that depend greatly on autonomic and hypothalamic mechanisms.

The Autonomic Nervous System Is a Visceral and Largely Involuntary Sensory and Motor System

In contrast to the somatic sensory and motor systems, which we considered in Parts IV and V of this book, the autonomic nervous system is a visceral sensory and motor system. Virtually all visceral reflexes are mediated by local circuits in the brain stem or spinal cord. Although these reflexes are regulated by a network of central autonomic control nuclei in the brain stem, hypothalamus, and forebrain, these visceral reflexes are not under voluntary control, nor do they impinge on consciousness, with few exceptions. The autonomic nervous system is thus also referred to as the involuntary motor system, in contrast to the voluntary (somatic) motor system.

The autonomic nervous system has three major divisions: sympathetic, parasympathetic, and enteric. The sympathetic and parasympathetic divisions innervate cardiac muscle, smooth muscle, and glandular tissues and mediate a variety of visceral reflexes. These two divisions include the sensory neurons associated with spinal and cranial nerves, the preganglionic and postganglionic motor neurons, and the central nervous system circuitry that connects with and modulates the sensory and motor neurons. The enteric division has greater autonomy than the other two divisions and comprises a largely self-contained system, with only minimal connections to the rest of the nervous system. It consists of sensory and motor neurons in the gastrointestinal tract that mediate digestive reflexes.

The American physiologist Walter B. Cannon first proposed that the sympathetic and parasympathetic divisions have distinctly different functions. He argued that the parasympathetic nervous system is responsible for rest and digest, maintaining basal heart rate, respiration, and metabolism under normal conditions. The sympathetic nervous system, on the other hand, governs the emergency reaction, or fight-or-flight reaction. In an emergency the body needs to respond to sudden changes in the external or internal environment, be it emotional stress, combat, athletic competition, severe change in temperature, or blood loss. For a person to respond effectively, the sympathetic nervous system increases output to the heart and other viscera, the peripheral vasculature and sweat glands, and the piloerector and certain ocular muscles.
An animal whose sympathetic nervous system has been experimentally eliminated can survive only if sheltered, kept warm, and not exposed to stress or emotional stimuli. Such an animal cannot, however, carry out strenuous work or fend for itself; it cannot mobilize blood sugar from the liver quickly and does not react to cold with normal vasoconstriction or elevation of body heat.

The relationship between the sympathetic and parasympathetic pathways is not as simple and as independent as suggested by Cannon, however. Both divisions are tonically active and operate in conjunction with each other and with the somatic motor system to regulate most behavior, be it normal or emergency. Although several visceral functions are controlled predominantly by one or the other division, and although both the sympathetic and parasympathetic divisions often exert opposing effects on innervated target tissues, it is the balance of activity between the two that helps maintain an internal stable environment in the face of changing external conditions.

The idea of a stable internal environment in the face of changing external conditions was first proposed in the nineteenth century by the French physiologist Claude Bernard. This idea was developed further by Cannon, who put forward the concept of homeostasis as the complex physiological mechanisms that maintain the internal milieu. In his classic book The Wisdom of the Body, published in 1932, Cannon introduced the concept of negative feedback regulation as a key homeostatic mechanism and outlined much of our current understanding of the functions of the autonomic nervous system. If a state remains steady, it does so because any change is automatically met by increased effectiveness of the factor or factors that resist the change. Consider, for example, thirst when the body lacks water; the discharge of adrenaline, which liberates sugar from the liver when the concentration of sugar in the blood falls below a critical point; and increased breathing, which reduces carbonic acid when the blood tends to shift toward acidity.

Cannon further proposed that the autonomic nervous system, under the control of the hypothalamus, is an important part of this feedback regulation. The hypothalamus regulates many of the neural circuits that mediate the peripheral components of emotional states: changes in heart rate, blood pressure, temperature, and water and food intake. It also controls the pituitary gland and thereby regulates the endocrine system.

The Visceral Motor System

Overview

The visceral (or autonomic) motor system controls involuntary functions mediated by the activity of smooth muscle fibers, cardiac muscle fibers, and glands. The system comprises two major divisions, the sympathetic and parasympathetic subsystems (the specialized innervation of the gut provides a further semi-independent component and is usually referred to as the enteric nervous system). Although these divisions are always active at some level, the sympathetic system mobilizes the body's resources for dealing with challenges of one sort or another. Conversely, parasympathetic system activity predominates during states of relative quiescence, so that energy sources previously expended can be restored. This continuous neural regulation of the expenditure and replenishment of the body's resources contributes importantly to the overall physiological balance of bodily functions called homeostasis.
Whereas the major controlling centers for somatic motor activity are the primary and secondary motor cortices in the frontal lobes and a variety of related brainstem nuclei, the major locus of central control in the visceral motor system is the hypothalamus and the complex (and ill-defined) circuitry that it controls in the brainstem tegmentum and spinal cord. The status of both divisions of the visceral motor system is modulated by descending pathways from these centers to preganglionic neurons in the brainstem and spinal cord, which in turn determine the activity of the primary visceral motor neurons in autonomic ganglia. The autonomic regulation of several organ systems of particular importance in clinical practice (including cardiovascular function, control of the bladder, and the governance of the reproductive organs) is considered in more detail as specific examples of visceral motor control.

Early Studies of the Visceral Motor System

Although humans must always have been aware of involuntary motor reactions to stimuli in the environment (e.g., narrowing of the pupil in response to bright light, constriction of superficial blood vessels in response to cold or fear, increased heart rate in response to exertion), it was not until the late nineteenth century that the neural control of these and other visceral functions came to be understood in modern terms. The researchers who first rationalized the workings of the visceral motor system were Walter Gaskell and John Langley, two British physiologists at Cambridge University. Gaskell, whose work preceded that of Langley, established the overall anatomy of the system and carried out early physiological experiments that demonstrated some of its salient functional characteristics (e.g., that the heartbeat of an experimental animal is accelerated by stimulating the outflow of the upper thoracic spinal cord segments). Based on these and other observations, Gaskell concluded in 1886 that “every tissue is innervated by two sets of nerve fibers of opposite characters,” and he further surmised that these actions showed “the characteristic signs of opposite chemical processes.” Langley went on to establish the function of autonomic ganglia (which harbor the primary visceral motor neurons), defined the terms “preganglionic” and “postganglionic” (see next section), and coined the phrase autonomic nervous system (which is basically a synonym for “visceral motor system”; the terms are used interchangeably). Langley's work on the pharmacology of the autonomic system initiated the classical studies indicating the roles of acetylcholine and the catecholamines in autonomic function, and in neurotransmitter function more generally (see Chapter 6). In short, Langley's ingenious physiological and anatomical experiments established in detail the general proposition put forward by Gaskell on circumstantial grounds.

The third major figure in the pioneering studies of the visceral motor system was Walter Cannon at Harvard Medical School, who during the early to mid-1900s devoted his career to understanding autonomic functions in relation to homeostatic mechanisms generally, and to the emotions and higher brain functions in particular (see Chapter 29).
He also established the effects of denervation in the visceral motor system, laying some of the basis for much further work on what is now referred to as “neuronal plasticity” (see Chapter 25).

Summary

Sympathetic and parasympathetic ganglia, which contain the primary visceral motor neurons that innervate smooth muscles, cardiac muscle, and glands, are controlled by preganglionic neurons in the spinal cord and brainstem. The sympathetic preganglionic neurons that govern ganglion cells in the sympathetic division of the visceral motor system arise from neurons in the thoracic and upper lumbar segments of the spinal cord; parasympathetic preganglionic neurons, in contrast, are located in the brainstem and sacral spinal cord. Sympathetic ganglion cells are distributed in the sympathetic chain (paravertebral) and prevertebral ganglia, whereas the parasympathetic motor neurons are more widely distributed in ganglia that lie within or near the organs they control. Most autonomic targets receive inputs from both the sympathetic and parasympathetic systems, which act in a generally antagonistic fashion. The diversity of autonomic functions is achieved primarily by different types of receptors for the two primary classes of postganglionic autonomic neurotransmitters, norepinephrine in the case of the sympathetic division and acetylcholine in the parasympathetic division. The visceral motor system is regulated by sensory feedback provided by dorsal root and cranial nerve sensory ganglion cells that make local reflex connections in the spinal cord or brainstem and project to the nucleus of the solitary tract in the brainstem, and by descending pathways from the hypothalamus and brainstem tegmentum, the major controlling centers of the visceral motor system (and of homeostasis more generally). The importance of the visceral motor control of organs such as the heart, bladder, and reproductive organs—and the many pharmacological means of modulating autonomic function—have made visceral motor control a central theme in clinical medicine.

Figure 49-1 Anatomical organization of the somatic and autonomic motor pathways. A. In the somatic motor system, effector motor neurons in the central nervous system project directly to skeletal muscles. B. In the autonomic motor system, the effector motor neurons are located in ganglia outside the central nervous system and are controlled by preganglionic central neurons.

The Motor Neurons of the Autonomic Nervous System Lie Outside the Central Nervous System

In the somatic motor system the motor neurons are part of the central nervous system: They are located in the spinal cord and brain stem and project directly to skeletal muscle. In contrast, the motor neurons of the sympathetic and parasympathetic motor systems are located outside the spinal cord in the autonomic ganglia. The autonomic motor neurons (also known as postganglionic neurons) are activated by the axons of central neurons (the preganglionic neurons) whose cell bodies are located in the spinal cord or brain stem, much as are the somatic motor neurons. Thus, in the visceral motor system a synapse (in the autonomic ganglion) is interposed between the efferent neuron in the central nervous system and the peripheral target (Figure 49-1). The sympathetic and parasympathetic nervous systems have clearly defined sensory components that provide input to the central nervous system and play an important role in autonomic reflexes.
In addition, some sensory fibers that project to the spinal cord also send a branch to autonomic ganglia, thus forming reflex circuits that control some visceral autonomic functions. The innervation of target tissues by autonomic nerves also differs markedly from that of skeletal muscle by somatic motor nerves. Unlike skeletal muscle, which has specialized postsynaptic regions (the end-plates; see Chapter 14), target cells of the autonomic nerve fibers have no specialized postsynaptic sites. Nor do the postganglionic nerve endings have presynaptic specializations such as the active zones of somatic motor neurons. Instead, the nerve endings have several swellings (varicosities) where vesicles containing transmitter substances accumulate (see Chapter 15). Synaptic transmission therefore occurs at multiple sites along the highly branched axon terminals of autonomic nerves. The neurotransmitter may diffuse for distances as great as several hundred nanometers to reach its targets. In contrast to the point-to-point contacts made in the somatic motor system, neurons in the autonomic motor system exert a more diffuse control over target tissues, so that a relatively small number of highly branched motor fibers can regulate the function of large masses of smooth muscle or glandular tissue.

[Figure: course of preganglionic and postganglionic sympathetic fibers innervating different organs. (A) Organs in the head. (B) Organs in the chest. (C) Organs in the abdomen. (D) Adrenal gland. Also note that, at each level, the axons of the postganglionic neurons in the paravertebral ganglia re-enter the corresponding spinal nerves through gray rami, travel within or along the spinal nerve, and innervate the blood vessels, sweat glands, and erectile muscle of hair follicles.]

Figure 21.2. Organization of the preganglionic spinal outflow to sympathetic ganglia. (A) General organization of the sympathetic division of the visceral motor system in the spinal cord and the preganglionic outflow to the sympathetic ganglia that contain the primary visceral motor neurons. (B) Cross section of thoracic spinal cord at the level indicated, showing location of the sympathetic preganglionic neurons in the intermediolateral cell column of the lateral horn.

Sympathetic Pathways Convey Thoracolumbar Outputs to Ganglia Alongside the Spinal Cord

Preganglionic sympathetic neurons form a column in the intermediolateral horn of the spinal cord extending from the first thoracic spinal segment to rostral lumbar segments. The axons of these neurons leave the spinal cord in the ventral root and initially run together in the spinal nerve. They then separate from the somatic motor axons and project (in small bundles called white myelinated rami) to the ganglia of the sympathetic chains, which lie along each side of the spinal cord (Figure 49-2). Axons of preganglionic neurons exit the spinal cord at the level at which their cell bodies are located, but they may innervate sympathetic ganglia situated either more rostrally or more caudally by traveling in the sympathetic nerve trunk that connects the ganglia (Figure 49-2). Most of the preganglionic axons are relatively slow-conducting, small-diameter myelinated fibers. Each preganglionic fiber forms synapses with many postganglionic neurons in different ganglia. Overall, the ratio of preganglionic fibers to postganglionic fibers in the sympathetic nervous system is about 1:10. This divergence permits coordinated activity in sympathetic neurons at several different spinal levels.
The axons of postganglionic neurons are largely unmyelinated and exit the ganglia in the gray unmyelinated rami. The postganglionic cells that innervate structures in the head are located in the superior cervical ganglion, which is a rostral extension of the sympathetic chain. The axons of these cells travel along branches of the carotid arteries to their targets in the head. The postganglionic fibers innervating the rest of the body travel in spinal nerves to their targets; in an average spinal nerve about 8% of the fibers are sympathetic postganglionic axons. Some neurons of the cervical and upper thoracic ganglia innervate cranial blood vessels, sweat glands, and hair follicles; others innervate the glands and visceral organs of the head and chest, including the lacrimal and salivary glands, heart, lungs, and blood vessels. Neurons in the lower thoracic and lumbar paravertebral ganglia innervate peripheral blood vessels, sweat glands, and pilomotor smooth muscle (Figure 49-3).

Some preganglionic fibers pass through the sympathetic ganglia and branches of the splanchnic nerves to synapse on neurons of the prevertebral ganglia, which include the coeliac ganglion and the superior and inferior mesenteric ganglia (Figure 49-3). Neurons in these ganglia innervate the gastrointestinal system and the accessory gastrointestinal organs, including the pancreas and liver, and also provide sympathetic innervation of the kidneys, bladder, and genitalia. Another group of preganglionic axons runs in the thoracic splanchnic nerve into the abdomen and innervates the adrenal medulla, an endocrine gland that secretes both epinephrine and norepinephrine into the circulation. The cells of the adrenal medulla are developmentally and functionally related to postganglionic sympathetic neurons.

Figure 21.3. Organization of the preganglionic outflow to parasympathetic ganglia. (A) Dorsal view of brainstem showing the location of the nuclei of the cranial part of the parasympathetic division of the visceral motor system. (B) Cross section of the brainstem at the relevant levels [indicated by blue lines in (A)] showing location of these parasympathetic nuclei. (C) Main features of the parasympathetic preganglionics in the sacral segments of the spinal cord. (D) Cross section of the sacral spinal cord showing location of sacral preganglionic neurons.

Parasympathetic Pathways Convey Outputs From the Brain Stem Nuclei and Sacral Spinal Cord to Widely Dispersed Ganglia

The central, preganglionic cells of the parasympathetic nervous system are located in several brain stem nuclei and in segments S2-S4 of the sacral spinal cord (Figure 49-3). The axons of these cells are quite long because parasympathetic ganglia lie close to or are actually embedded in visceral target organs. In contrast, sympathetic ganglia are located at some distance from their targets. The preganglionic parasympathetic nuclei in the brain stem include the Edinger-Westphal nucleus (associated with cranial nerve III), the superior and inferior salivary nuclei (associated with cranial nerves VII and IX, respectively), and the dorsal vagal nucleus and the nucleus ambiguus (both associated with cranial nerve X). Preganglionic axons exit the brain stem through cranial nerves III, VII, and IX and project to postganglionic neurons in the ciliary, pterygopalatine, submandibular, and otic ganglia (Figure 49-3).
Parasympathetic preganglionic fibers from the dorsal vagal nucleus project via nerve X to postganglionic neurons embedded in thoracic and abdominal targets—the stomach, liver, gall bladder, pancreas, and upper intestinal tract (Figure 49-3). Neurons of the ventrolateral nucleus ambiguus provide the principal parasympathetic innervation of the cardiac ganglia, which innervate the heart, esophagus, and respiratory airways. In the sacral spinal cord the parasympathetic preganglionic neurons occupy the intermediolateral column. Axons of spinal parasympathetic neurons leave the spinal cord through the ventral roots and project in the pelvic nerve to the pelvic ganglion plexus. Pelvic ganglion neurons innervate the descending colon, bladder, and external genitalia (Figure 49-3). The sympathetic nervous system innervates tissues throughout the body, but the parasympathetic distribution is more restricted. There is also less divergence, with an average ratio of preganglionic to postganglionic fibers of about 1:3; in some tissues the numbers may be nearly equal.

Figure 21.5. Organization of sensory input to the visceral motor system. (A) Afferent input from the cranial nerves relevant to visceral sensation (as well as afferent input ascending from the spinal cord, not shown here) converges on the nucleus of the solitary tract. (B) Cross section of the brainstem showing the location of the nucleus of the solitary tract, which is so named because of its association with the tract of the myelinated axons that supply it.

Sensory Components of the Visceral Motor System

The visceral motor system clearly requires sensory feedback to control and modulate its many functions. As in the case of somatic sensory modalities (see Chapters 9 and 10), the cell bodies of the visceral afferent fibers lie in the dorsal root ganglia or the sensory ganglia associated with cranial nerves (in this case, the vagus, glossopharyngeal, and facial nerves) (Figure 21.5A). The neurons in the dorsal root ganglia send an axon peripherally to end in sensory receptor specializations, and an axon centrally to terminate in a part of the dorsal horn of the spinal cord near the lateral horn, where the preganglionic neurons of both sympathetic and parasympathetic divisions are located. In addition to making local reflex connections, branches of these visceral sensory neurons also travel rostrally to innervate nerve cells in the brainstem; in this case, however, the target is the nucleus of the solitary tract in the upper medulla (Figure 21.5B). The afferents from viscera in the head and neck that enter the brainstem via the cranial nerves also terminate in the nucleus of the solitary tract (see Figure 21.5B). This nucleus, as described in the next section, integrates a wide range of visceral sensory information and transmits it to the hypothalamus and to the relevant motor nuclei in the brainstem tegmentum. Sensory fibers related to the viscera convey only limited information to consciousness—primarily pain. Nonetheless, the visceral afferent information of which we are not aware is essential for the functioning of autonomic reflexes.
Specific examples described in more detail later in the chapter include afferent information relevant to cardiovascular control, to the control of the bladder, and to the governance of sexual functions (although sexual reflexes are, exceptionally, not mediated by the nucleus of the solitary tract).

Sensory Inputs Produce a Wide Range of Visceral Reflexes

To maintain homeostasis the autonomic nervous system responds to many different types of sensory inputs. Some of these are somatosensory. For example, a noxious stimulus activates sympathetic neurons that regulate local vasoconstriction (necessary to reduce bleeding when the skin is broken). At the same time, the stimulus activates nociceptive afferents in the spinothalamic tract with axon collaterals to an area in the rostral ventrolateral medulla that coordinates reflexes. These inputs cause widespread sympathetic activation that increases blood pressure and heart rate to protect arterial perfusion pressure and prepares the individual for vigorous defense.

Homeostasis also requires important information about the internal state of the body. Much of this information from the thoracic and abdominal cavities reaches the brain via the vagus nerve. The glossopharyngeal nerve also conveys visceral sensory information from the head and neck. Both of these nerves and the facial nerve relay special visceral sensory information about taste (a visceral chemosensory function) from the oral cavity. All of these visceral sensory afferents synapse in a topographic fashion in the nucleus of the solitary tract. Taste information is represented most anteriorly; gastrointestinal information, in an intermediate position; cardiovascular inputs, caudomedially; and respiratory inputs, in the caudolateral part of the nucleus.

Neurotransmission in the Visceral Motor System

The neurotransmitter functions of the visceral motor system are of enormous importance in clinical practice, and drugs that act on the autonomic system are among the most important in the clinical armamentarium. Moreover, autonomic transmitters have played a major role in the history of efforts to understand synaptic function. Consequently, neurotransmission in the visceral motor system deserves special comment (see also Chapter 6).

Acetylcholine is the primary neurotransmitter of both sympathetic and parasympathetic preganglionic neurons. Nicotinic receptors on autonomic ganglion cells are ligand-gated ion channels that mediate a so-called fast EPSP (much like nicotinic receptors at the neuromuscular junction). In contrast, muscarinic acetylcholine receptors on ganglion cells are members of the 7-transmembrane G protein-linked receptor family, and they mediate slower synaptic responses (see Chapters 7 and 8). The primary action of muscarinic receptors in autonomic ganglion cells is to close K+ channels, making the neurons more excitable and generating a prolonged EPSP. As a result of these two acetylcholine receptor types, ganglionic synapses mediate both rapid excitation and a slower modulation of autonomic ganglion cell activity.

The postganglionic effects of autonomic ganglion cells on their smooth muscle, cardiac muscle, or glandular targets are mediated by two primary neurotransmitters: norepinephrine (NE) and acetylcholine (ACh). For the most part, sympathetic ganglion cells release norepinephrine onto their targets (a notable exception is the cholinergic sympathetic innervation of sweat glands), whereas parasympathetic ganglion cells typically release acetylcholine.
As expected from the foregoing account, these two neurotransmitters usually have opposing effects on their target tissue—contraction versus relaxation of smooth muscle, for example. As described in Chapters 6 to 8, the specific effects of either ACh or NE are determined by the type of receptor expressed in the target tissue, and the downstream signaling pathways to which these receptors are linked. Peripheral sympathetic targets generally have two subclasses of noradrenergic receptors in their cell membranes, referred to as α and β receptors. Like muscarinic ACh receptors, both α and β receptors and their subtypes belong to the 7-transmembrane G-protein-coupled class of cell surface receptors. The different distribution of these receptors in sympathetic targets allows for a variety of postsynaptic effects mediated by norepinephrine released from postganglionic sympathetic nerve endings (Table 21.2).

The effects of acetylcholine released by parasympathetic ganglion cells onto smooth muscles, cardiac muscle, and glandular cells also vary according to the subtypes of muscarinic cholinergic receptors found in the peripheral target (Table 21.3). The two major subtypes are known as M1 and M2 receptors, M1 receptors being found primarily in the gut and M2 receptors in the cardiovascular system (another subclass of muscarinic receptors, M3, occurs in both smooth muscle and glandular tissues). Muscarinic receptors are coupled to a variety of intracellular signal transduction mechanisms that modify K+ and Ca2+ channel conductances. They can also activate nitric oxide synthase, which promotes the local release of NO in some parasympathetic target tissues (see, for example, the section on autonomic control of sexual function).

In contrast to the relatively restricted responses generated by norepinephrine and acetylcholine released by sympathetic and parasympathetic ganglion cells, respectively, neurons of the enteric nervous system achieve an enormous diversity of target effects by virtue of many different neurotransmitters, most of which are neuropeptides associated with specific cell groups in either the myenteric or submucous plexuses mentioned earlier. The details of these agents and their actions are beyond the scope of this introductory account.

Box 49-1 First Isolation of a Chemical Transmitter

The existence of chemical messengers was first postulated by John Langley and Henry Dale and their students on the basis of their pharmacological studies dating from the beginning of the century. However, convincing evidence for a neurotransmitter was not provided until 1920, when Otto Loewi, in a simple but decisive experiment, examined the autonomic innervation of two isolated, beating frog hearts. In his own words:

The night before Easter Sunday of that year I awoke, turned on the light, and jotted down a few notes on a tiny slip of paper. Then I fell asleep again. It occurred to me at six o'clock in the morning that during the night I had written down something most important, but I was unable to decipher the scrawl. The next night, at three o'clock, the idea returned. It was the design of an experiment to determine whether or not the hypothesis of chemical transmission that I had uttered seventeen years ago was correct. I got up immediately, went to the laboratory, and performed a simple experiment on a frog heart according to the nocturnal design. I have to describe briefly this experiment since its results became the foundation of the theory of chemical transmission of the nervous impulse.
The hearts of two frogs were isolated, the first with its nerves, the second without. Both hearts were attached to Straub cannulas filled with a little Ringer solution. The vagus nerve of the first heart was stimulated for a few minutes. Then the Ringer solution that had been in the first heart during the stimulation of the vagus was transferred to the second heart. It slowed and its beat diminished just as if its vagus had been stimulated. Similarly, when the accelerator nerve was stimulated and the Ringer from this period transferred, the second heart speeded up and its beat increased. These results unequivocally proved that the nerves do not influence the heart directly but liberate from their terminals specific chemical substances which, in their turn, cause the well-known modifications of the function of the heart characteristic of the stimulation of its nerves.

Loewi called this substance Vagusstoff (vagus substance). Soon after, Vagusstoff was identified chemically as acetylcholine.

The nucleus of the solitary tract distributes visceral sensory information within the brain along three main pathways. Some neurons in the nucleus of the solitary tract directly innervate preganglionic neurons in the medulla and spinal cord, triggering direct autonomic reflexes. For example, there are direct inputs from the nucleus of the solitary tract to vagal motor neurons controlling esophageal and gastric motility, which are important for ingesting food. Also, projections from the nucleus of the solitary tract to the spinal cord are involved in respiratory reflex responses to lung inflation. Other neurons in the nucleus project to the lateral medullary reticular formation, where they engage populations of premotor neurons that organize more complex, patterned autonomic reflexes. For example, groups of neurons in the rostral ventrolateral medulla control blood pressure by regulating both blood flow to different vascular beds and vagal tone in the heart to modulate heart rate. Other groups of neurons control complex responses such as vomiting and respiratory rhythm (a somatic motor response that has an important autonomic component and that depends critically on visceral sensory information). The third main projection from the nucleus of the solitary tract provides visceral sensory input to a network of cell groups that extend from the pons and midbrain up through the hypothalamus, amygdala, and cerebral cortex. This network coordinates autonomic responses and integrates them into ongoing patterns of behavior. These will be described in more detail after we consider more elementary autonomic reflexes.

Autonomic Neurons Use a Variety of Chemical Transmitters

Autonomic ganglion cells receive and integrate inputs from both the central nervous system (through preganglionic nerve terminals) and the periphery (through branches of sensory nerves that terminate in the ganglia). Most of the sensory fibers are nonmyelinated and may release neuropeptides, such as substance P and calcitonin gene-related peptide (CGRP), onto ganglion cells. Preganglionic fibers primarily use ACh as a transmitter.

Ganglionic Transmission Involves Both Fast and Slow Synaptic Potentials

Preganglionic activity induces both brief and prolonged responses from postganglionic neurons. ACh released from preganglionic terminals evokes fast excitatory postsynaptic potentials (EPSPs) mediated by nicotinic ACh receptors.
The fast EPSP is often large enough to generate an action potential in the postganglionic neuron, and it is thus regarded as the principal synaptic pathway for ganglionic transmission in both the sympathetic and parasympathetic systems. ACh also evokes slow EPSPs and inhibitory postsynaptic potentials (IPSPs) in postganglionic neurons. These slow potentials can modulate the excitability of these cells. They have been most often studied in sympathetic ganglia but are also known to occur in some parasympathetic ganglia. Slow EPSPs or IPSPs are mediated by muscarinic ACh receptors (Figure 49-6). The slow excitatory potential results when Na+ and Ca2+ channels open and M-type K+ channels close. The M-type channels are normally active at the resting membrane potential, so their closure leads to membrane depolarization (Chapter 13). The slow inhibitory potential results from the opening of K+ channels, allowing K+ ions to flow out of the cell, resulting in hyperpolarization. The fast cholinergic EPSP reaches a maximum within 10-20 ms; the slow cholinergic synaptic potentials take up to half a second to reach their maximum and last for a second or more (Figure 49-6). Even slower synaptic potentials, lasting up to a minute, are evoked by neuropeptides, a variety of which are present in the terminals of preganglionic neurons and sensory nerve endings. The actions of one peptide have been studied in detail and reveal important features of peptidergic transmission. In some, but not all, preganglionic nerve terminals in bullfrog sympathetic ganglia, ACh is colocalized with a luteinizing hormone-releasing hormone (LHRH)-like peptide. High-frequency stimulation of the preganglionic nerves causes the peptide to be released, evoking a slow, long-lasting EPSP in all postganglionic neurons (Figure 49-6), even those not directly innervated by the peptidergic fibers. The peptide must diffuse over considerable distances to influence distant receptive neurons. The slow peptidergic EPSP, like the slow cholinergic excitatory potential, also results from the closure of M-type channels and the opening of Na+ and Ca2+ channels. The peptidergic excitatory potential alters the excitability of autonomic ganglion cells for long periods after intense activation of preganglionic inputs. No mammalian equivalent of the actions of the LHRH-like peptide in amphibians has yet been identified, but the neuropeptide substance P released from sensory afferent terminals in mammals evokes a similar slow, long-lasting EPSP.

Norepinephrine and Acetylcholine Are the Predominant Transmitters in the Autonomic Nervous System

Most postganglionic sympathetic neurons release norepinephrine, which acts on a variety of different adrenergic receptors. There are five major types of adrenergic receptors, and these are the targets for several medically important drugs (Table 49-1).

ATP and Adenosine Have Potent Extracellular Actions

Adenosine triphosphate (ATP) is an important cotransmitter with norepinephrine in many postganglionic sympathetic neurons. By acting on ATP-gated ion channels (P2 purinergic receptors), it is responsible for some of the fast responses seen in target tissues (Table 49-1). The proportion of ATP to norepinephrine varies considerably in different sympathetic nerves. The ATP component is relatively minor in nerves to blood vessels in the rat tail and rabbit ear, while the responses of guinea pig submucosal arterioles to sympathetic stimulation appear to be mediated solely by ATP.
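To make the time scales described above concrete, the following sketch plots stylized synaptic potentials as differences of exponentials. It is a toy model, not taken from the source: the amplitudes and time constants are rough assumptions chosen only to reproduce the relative ordering of the fast nicotinic, slow muscarinic, and peptidergic potentials.

```python
# Toy comparison of ganglionic synaptic time courses; amplitudes and time
# constants are assumptions for illustration, not measured values.
import numpy as np

def psp(t, tau_rise, tau_decay, amplitude):
    """Difference-of-exponentials postsynaptic potential (mV)."""
    return amplitude * (np.exp(-t / tau_decay) - np.exp(-t / tau_rise))

t = np.linspace(0.0, 60.0, 60001)      # seconds, 1 ms resolution
fast = psp(t, 0.005, 0.04, 12.0)       # nicotinic: peaks in roughly 10-20 ms
slow = psp(t, 0.15, 1.0, 2.0)          # muscarinic: ~0.3-0.5 s to peak
peptide = psp(t, 2.0, 30.0, 1.5)       # LHRH-like peptide: lasts tens of seconds

for name, v in [("fast EPSP", fast), ("slow EPSP", slow), ("peptidergic EPSP", peptide)]:
    print(f"{name}: peak {v.max():.1f} mV at t = {t[v.argmax()]:.3f} s")
```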
The nucleotide adenosine is formed from the hydrolysis of ATP and is recognized by P1 purinergic receptors (Table 49-1) located both pre- and postjunctionally. It is thought to play a modulatory role in autonomic transmission, particularly in the sympathetic system. Adenosine may dampen sympathetic function after intense sympathetic activation by activating receptors on sympathetic nerve endings that inhibit further norepinephrine and ATP release. Adenosine also has inhibitory actions in cardiac and smooth muscle that tend to oppose the excitatory actions of norepinephrine.

Many Different Neuropeptides Are Present in Autonomic Neurons

Neuropeptides are colocalized with norepinephrine and ACh in autonomic neurons. Cholinergic preganglionic neurons in the spinal cord and brain stem and their terminals in autonomic ganglia may contain enkephalins, neurotensin, somatostatin, or substance P. Noradrenergic postganglionic sympathetic neurons may also express a variety of neuropeptides. Neuropeptide Y is present in as many as 90% of the cells and modulates sympathetic transmission. In tissues in which the nerve endings are distant from their targets (more than 60 nm, as for the rabbit ear artery), neuropeptide Y potentiates both the purinergic and adrenergic components of the tissue response, probably by acting postsynaptically. In contrast, in tissues with dense sympathetic innervation and where the target is closer (20 nm, such as the vas deferens), neuropeptide Y acts presynaptically to inhibit release of ATP and norepinephrine, thus dampening the tissue response. The peptides galanin and dynorphin are often found with neuropeptide Y in sympathetic neurons, which can contain several neuropeptides. Cholinergic postganglionic sympathetic neurons commonly contain CGRP and vasoactive intestinal polypeptide (VIP) (Figure 49-7).

Visceral Motor Reflex Functions

Many examples of specific autonomic functions could be used to illustrate in more detail how the visceral motor system operates. The three outlined here—control of cardiovascular function, control of the bladder, and control of sexual function—have been chosen primarily because of their importance in human physiology and clinical practice.

Autonomic Regulation of Cardiovascular Function

The cardiovascular system is subject to precise reflex regulation so that an appropriate supply of oxygenated blood can be reliably provided to different body tissues under a wide range of circumstances. The sensory monitoring for this critical homeostatic process entails primarily mechanical (barosensory) information about pressure in the arterial system and, secondarily, chemical (chemosensory) information about the level of oxygen and carbon dioxide in the blood. The parasympathetic and sympathetic activity relevant to cardiovascular control is determined by the information supplied by these sensors. The mechanoreceptors (called baroreceptors) are located in the heart and major blood vessels; the chemoreceptors are located primarily in the carotid bodies, which are small, highly specialized organs located at the bifurcation of the common carotid arteries (some chemosensory tissue is also found in the aorta). The nerve endings in baroreceptors are activated by deformation as the elastic elements of the vessel walls expand and contract. The chemoreceptors in the carotid bodies and aorta respond directly to the partial pressure of oxygen and carbon dioxide in the blood.
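The baroreceptor loop traced in the next paragraphs behaves like a negative feedback controller: deviations of arterial pressure from a reference level shift the sympathetic/parasympathetic balance so as to oppose the deviation. Here is a minimal sketch of that logic; the set point, gain, and disturbance are invented numbers, and the one-line correction stands in for all of the cardiac and vascular machinery described in the text.

```python
# Minimal negative-feedback caricature of the baroreflex. The set point,
# gain, and disturbance below are assumptions for illustration only.
SET_POINT = 93.0   # target mean arterial pressure, mmHg (assumed)
GAIN = 0.4         # fraction of the error corrected per step (assumed)

def baroreflex_step(pressure):
    error = pressure - SET_POINT
    # High pressure -> parasympathetic activation (negative correction);
    # low pressure -> sympathetic activation (positive correction).
    autonomic_correction = -GAIN * error
    return pressure + autonomic_correction

pressure = SET_POINT - 25.0        # e.g., pooling of blood on standing
for step in range(10):
    pressure = baroreflex_step(pressure)
    print(f"step {step}: mean arterial pressure = {pressure:.1f} mmHg")
```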
Both afferent systems convey their status via the vagus nerve to the nucleus of the solitary tract ( Figure 21.7 ), which relays this information to the hypothalamus and the relevant brainstem tegmental nuclei (see earlier). The afferent information from changes in arterial pressure and blood gas levels reflexively modulates the activity of the relevant visceral motor pathways and, ultimately, of target smooth and cardiac muscles and other more specialized structures. For example, a rise in blood pressure activates baroreceptors that, via the pathway illustrated in Figure 21.7 , inhibit the tonic activity of sympathetic preganglionic neurons in the spinal cord. In parallel, the pressure increase stimulates the activity of the parasympathetic preganglionic neurons in the dorsal motor nucleus of the vagus and the nucleus ambiguus that influence heart rate. The carotid chemoreceptors also have some influence, but this is a less important drive than that stemming from the baroreceptors. As a result of this shift in the balance of sympathetic and parasympathetic activity, the stimulatory noradrenergic effects of postganglionic sympathetic innervation on the cardiac pacemaker and cardiac musculature are reduced (an effect abetted by the decreased output of catecholamines from the adrenal medulla and the decreased vasoconstrictive effects of sympathetic innervation on the peripheral blood vessels). At the same time, activation of the cholinergic parasympathetic innervation of the heart decreases the discharge rate of the cardiac pacemaker in the sinoatrial node and slows the ventricular conduction system. These parasympathetic influences are mediated by an extensive series of parasympathetic ganglia in and near the heart, which release acetylcholine onto cardiac pacemaker cells and cardiac muscle fibers. As a result of this combination of sympathetic and parasympathetic effects, heart rate and the effectiveness of the atrial and ventricular myocardial contraction are reduced and the peripheral arterioles dilate, thus lowering the blood pressure. In contrast to this sequence of events, a drop in blood pressure, as might occur from blood loss, has the opposite effect, inhibiting parasympathetic activity while increasing sympathetic activity. As a result, norepinephrine is released from sympathetic postganglionic terminals, increasing the rate of cardiac pacemaker activity and enhancing cardiac contractility, at the same time increasing release of catecholamines from the adrenal medulla (which further augments these and many other sympathetic effects that enhance the response to this threatening situation). Norepinephrine released from the terminals of sympathetic ganglion cells also acts on the smooth muscles of the arterioles to increase the tone of the peripheral vessels, particularly those in the skin, subcutaneous tissues, and muscles, thus shunting blood away from these tissues to those organs where oxygen and metabolites are urgently needed to maintain function (e.g., brain, heart, and kidneys in the case of blood loss). If these reflex sympathetic responses fail to raise the blood pressure sufficiently (in which case the patient is said to be in shock), the vital functions of these organs begin to fail, often catastrophically. A more mundane circumstance that requires a reflex autonomic response to a fall in blood pressure is standing up.
Rising quickly from a prone position produces a shift of some 300–800 milliliters of blood from the thorax and abdomen to the legs, resulting in a sharp (approximately 40%) decrease in the output of the heart. The adjustment to this normally occurring drop in blood pressure (called orthostatic hypotension) must be rapid and effective, as evidenced by the dizziness sometimes experienced in this situation. Indeed, normal individuals can briefly lose consciousness as a result of blood pooling in the lower extremities, which is the usual cause of fainting among healthy individuals who must stand still for abnormally long periods (the guards at Buckingham Palace, for example). The sympathetic innervation of the heart arises from the preganglionic neurons in the intermediolateral column of the spinal cord, extending from roughly the first through fifth thoracic segments (see Table 21.1 ). The primary visceral motor neurons are in the adjacent thoracic paravertebral and prevertebral ganglia of the cardiac plexus. The parasympathetic preganglionics, as already mentioned, are in the dorsal motor nucleus of the vagus nerve and the nucleus ambiguus, projecting to parasympathetic ganglia in and around the heart and great vessels.

Figure 21.4. Organization of the enteric component of the visceral motor system. (A) Sympathetic and parasympathetic innervation of the enteric nervous system, and the intrinsic neurons of the gut. (B) Detailed organization of nerve cell plexuses in the gut wall. The neurons of the submucous plexus (Meissner's plexus) are concerned with the secretory aspects of gut function, and the myenteric plexus (Auerbach's plexus) with the motor aspects of gut function (e.g., peristalsis).

The Enteric Nervous System

An enormous number of neurons are specifically associated with the gastrointestinal tract to control its many functions; indeed, more neurons are said to reside in the human gut than in the entire spinal cord. As already noted, the activity of the gut is modulated by both the sympathetic and the parasympathetic divisions of the visceral motor system. However, the gut also has an extensive system of nerve cells in its wall (as do its accessory organs such as the pancreas and gallbladder) that do not fit neatly into the sympathetic or parasympathetic divisions of the visceral motor system ( Figure 21.4A ). To a surprising degree, these neurons and the complex enteric plexuses in which they are found ( plexus means “network”) operate more or less independently according to their own reflex rules; as a result, many gut functions continue perfectly well without sympathetic or parasympathetic supervision (peristalsis, for example, occurs in isolated gut segments in vitro). Thus, most investigators prefer to classify the enteric nervous system as a separate component of the visceral motor system. The neurons in the gut wall include local and centrally projecting sensory neurons that monitor mechanical and chemical conditions in the gut, local circuit neurons that integrate this information, and motor neurons that influence the activity of the smooth muscles in the wall of the gut and glandular secretions (e.g., of digestive enzymes, mucus, stomach acid, and bile).
This complex arrangement of nerve cells intrinsic to the gut is organized into: (1) the myenteric (or Auerbach's) plexus, which is specifically concerned with regulating the musculature of the gut; and (2) the submucous (or Meissner's) plexus, which is located, as the name implies, just beneath the mucous membranes of the gut and is concerned with chemical monitoring and glandular secretion ( Figure 21.4B ). As already mentioned, the preganglionic parasympathetic neurons that influence the gut are primarily in the dorsal motor nucleus of the vagus nerve in the brainstem and the intermediate gray zone in the sacral spinal cord segments. The preganglionic sympathetic innervation that modulates the action of the gut plexuses derives from the thoraco-lumbar cord, primarily by way of the celiac, superior, and inferior mesenteric ganglia.

Autonomic Regulation of the Bladder

The autonomic regulation of the bladder provides a good example of the interplay between the voluntary motor system (obviously, we have voluntary control over urination) and the sympathetic and parasympathetic divisions of the visceral motor system, which operate involuntarily. The arrangement of afferent and efferent innervation of the bladder is shown in Figure 21.8 . The parasympathetic control of the bladder musculature, the contraction of which causes bladder emptying, originates with neurons in the sacral spinal cord segments (S2–S4) that innervate visceral motor neurons in parasympathetic ganglia in or near the bladder wall. Mechanoreceptors in the bladder wall supply visceral afferent information to the spinal cord and to higher autonomic centers in the brainstem (primarily the nucleus of the solitary tract), which in turn project to the various central coordinating centers for bladder function in the brainstem tegmentum and elsewhere. The sympathetic innervation of the bladder originates in the lower thoracic and upper lumbar spinal cord segments (T10-L2), the preganglionic axons running to sympathetic neurons in the inferior mesenteric ganglion and the ganglia of the pelvic plexus. The postganglionic fibers from these ganglia travel in the hypogastric and pelvic nerves to the bladder, where sympathetic activity causes the internal urethral sphincter to close (postganglionic sympathetic fibers also innervate the blood vessels of the bladder, and in males the smooth muscle fibers of the prostate gland). Stimulation of this pathway in response to a modest increase in bladder pressure from the accumulation of urine thus closes the internal sphincter and inhibits the contraction of the bladder wall musculature, allowing the bladder to fill. At the same time, moderate distension of the bladder inhibits parasympathetic activity (which would otherwise contract the bladder and allow the internal sphincter to open). When the bladder is full, afferent activity conveying this information centrally increases parasympathetic tone and decreases sympathetic activity, causing the internal sphincter muscle to relax and the bladder to contract. In this circumstance, the urine is held in check by the voluntary (somatic) motor innervation of the external urethral sphincter muscle (see Figure 21.8 ). The voluntary control of the external sphincter is mediated by α-motor neurons of the ventral horn in the sacral spinal cord segments (S2–S4), which cause the striated muscle fibers of the sphincter to contract.
During bladder filling (and subsequently, until circumstances permit urination) these neurons are active, keeping the external sphincter closed and preventing bladder emptying. During urination (or “voiding,” as clinicians often call this process), this tonic activity is temporarily inhibited, leading to relaxation of the external sphincter muscle. Thus, urination results from the coordinated activity of sacral parasympathetic neurons and temporary inactivity of the α-motor neurons of the voluntary motor system. The central governance of these events stems from the rostral pons, the relevant pontine circuitry being referred to as the micturition center ( micturition is also “medicalese” for urination). This phrase implies more knowledge about the central control of bladder function than is actually available. As many as five other central regions have been implicated in the coordination of urinary functions, including the locus coeruleus, the hypothalamus, the septal nuclei, and several cortical regions. The cortical regions primarily concerned with the voluntary control of bladder function include the paracentral lobule, the cingulate gyrus, and the frontal lobes. This functional distribution accords with the motor representation of the perineal musculature in the medial part of the primary motor cortex (see Chapter 17 ) and with the planning functions of the frontal lobes (see Chapter 26 ), which are equally pertinent to bodily functions (remembering to stop by the bathroom before going on a long trip, for instance). Importantly, paraplegic patients, or patients who have otherwise lost descending control of the sacral spinal cord, continue to exhibit autonomic regulation of bladder function, since urination is eventually stimulated reflexively at the level of the sacral cord by sufficient bladder distension. Unfortunately, this reflex is not efficient in the absence of descending motor control, resulting in a variety of problems in paraplegics and others with diminished or absent central control of bladder function. The major difficulty is incomplete bladder emptying, which often leads to chronic urinary tract infections from the culture medium provided by retained urine, and thus the need for an indwelling catheter to ensure adequate drainage.

Urogenital Reflexes

The control of bladder emptying is unusual because it involves both involuntary autonomic reflexes and some voluntary control. The excitatory input to the bladder wall that causes contraction and promotes emptying is parasympathetic. Activation of parasympathetic postganglionic neurons in the pelvic ganglion plexus near to and within the bladder wall contracts the bladder's smooth muscle. These neurons are quiet when the bladder begins to fill but are activated reflexly by visceral afferents when the bladder is distended. The sympathetic nervous system relaxes the bladder smooth muscle. Axons of preganglionic sympathetic neurons project from the thoracic and upper lumbar spinal cord to the inferior mesenteric ganglion. From there, postganglionic fibers travel to the bladder in the hypogastric nerve. When the sympathetic system is activated by low-frequency firing in sensory afferents that respond to tension in the bladder wall, the parasympathetic neurons in the pelvic ganglion are inhibited, relaxing bladder smooth muscle and exciting the internal sphincter muscle. Thus, during bladder filling the sympathetic system promotes relaxation of the bladder wall directly while maintaining closure of the internal sphincter.
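The reciprocal sympathetic, parasympathetic, and somatic pattern described above can be summarized as a two-state switch. The sketch below is a deliberately crude state machine with a hypothetical fullness threshold; the real circuit is continuous, reflexive, and modulated by the pontine micturition center.

```python
# Toy two-state model of bladder control (hypothetical threshold; the real
# circuit is continuous and centrally modulated).
FULLNESS_THRESHOLD = 0.9   # assumed fraction of capacity permitting voiding

def bladder_commands(fullness, voluntary_go):
    """Return the dominant autonomic/somatic commands for the current state."""
    if fullness >= FULLNESS_THRESHOLD and voluntary_go:
        # Voiding: parasympathetic outflow contracts the bladder wall, while
        # sympathetic tone and external-sphincter motor neurons are inhibited.
        return {"mode": "voiding", "sympathetic": False,
                "parasympathetic": True, "external_sphincter_closed": False}
    # Filling: sympathetic outflow relaxes the wall and closes the internal
    # sphincter; somatic motor neurons keep the external sphincter closed.
    return {"mode": "filling", "sympathetic": True,
            "parasympathetic": False, "external_sphincter_closed": True}

print(bladder_commands(0.5, voluntary_go=False))   # filling state
print(bladder_commands(0.95, voluntary_go=True))   # voiding state
```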
Somatic motor neurons in the ventral horn of the sacral spinal cord innervate striated muscle fibers in the external urethral sphincter, causing it to contract. These motor neurons are stimulated by visceral afferents that are activated when the bladder is partially full. As the bladder fills, spinal sensory afferents relay this information to a region in the pons that coordinates micturition. This pontine area, sometimes called Barrington's nucleus after the British neurophysiologist who first described it, also receives important descending inputs from the forebrain concerning behavioral cues for emptying the bladder. Descending pathways from Barrington's nucleus cause coordinated inhibition of sympathetic and somatic systems, relaxing both sphincters. The onset of urinary flow through the urethra causes reflex contraction of the bladder that is under parasympathetic control. In patients with spinal cord injuries at the cervical or thoracic levels, the spinal reflex control of micturition remains intact, but the connections with the pons are severed. As a result, micturition cannot be voluntarily controlled. When it does occur as a spinal reflex resulting from bladder overfilling, urination is incomplete. As a result, urinary tract infections are common, and it may be necessary to empty the bladder mechanically by catheterization.

Sexual reflexes are organized in a pattern that is analogous to those controlling bladder function. Erectile tissue is controlled largely by the parasympathetic nervous system, involving neurons that produce nitric oxide as their main mediator. Glandular secretion is also parasympathetically mediated. Emission in males is caused by sympathetic control of the seminal vesicles and vas deferens, and ejaculation involves control of striated muscles in the pelvic floor as well. Supraspinal inputs play an important role in producing the coordinated pattern of sexual response, although some simple sexual reflexes can be activated even after spinal transection (e.g., penile erection can be elicited by local sensory stimuli).

Autonomic Regulation of Sexual Function

Much like control of the bladder, sexual responses are mediated by the coordinated activity of sympathetic, parasympathetic, and somatic innervation. Although these reflexes differ in detail in males and females, basic similarities allow the two sexes to be considered together, not only in humans but in mammals generally (see Chapter 30 ). The relevant autonomic effects include: (1) the mediation of vascular dilation, which causes penile or clitoral erection; (2) stimulation of prostatic or vaginal secretions; (3) smooth muscle contraction of the vas deferens during ejaculation or rhythmic vaginal contractions during orgasm in females; and (4) contractions of the somatic pelvic muscles that accompany orgasm in both sexes. Like the urinary tract, the reproductive organs receive preganglionic parasympathetic innervation from the sacral spinal cord, preganglionic sympathetic innervation from the outflow of the lower thoracic and upper lumbar spinal cord segments, and somatic motor innervation from α-motor neurons in the ventral horn of the lower spinal cord segments ( Figure 21.9 ). The sacral parasympathetic pathway controlling the sexual organs in both males and females originates in the sacral segments S2–S4 and reaches the target organs via the pelvic nerves.
Activity of the postganglionic neurons in the relevant parasympathetic ganglia causes dilation of penile or clitoral arteries, and a corresponding relaxation of the smooth muscles of the venous (cavernous) sinusoids, which leads to expansion of the sinusoidal spaces. As a result, the amount of blood in the tissue is increased, leading to a sharp rise in the pressure and an expansion of the cavernous spaces (i.e., erection). The mediator of the smooth muscle relaxation leading to erection is not acetylcholine (as in most postganglionic parasympathetic actions), but nitric oxide (see Chapter 8 ). NO stimulates guanylate cyclase, which increases the conversion of GTP to cyclic GMP, and the resulting rise in cGMP relaxes the sinusoidal smooth muscle. The drug sildenafil (Viagra®), for instance, acts by inhibiting the phosphodiesterase that degrades cyclic GMP, thus prolonging the action of NO on the cGMP pathway, enhancing the relaxation of the venous sinusoids, and promoting erection in males with erectile dysfunction. Parasympathetic activity also provides excitatory input to the vas deferens, seminal vesicles, and prostate in males, or vaginal glands in females. In contrast, sympathetic activity causes vasoconstriction and loss of erection. The lumbar sympathetic pathway to the sexual organs originates in the thoraco-lumbar segments (T11-L2) and reaches the target organs via the corresponding sympathetic chain ganglia and the inferior mesenteric and pelvic ganglia, as in the case of the autonomic bladder control. The afferent effects of genital stimulation are conveyed centrally from somatic sensory endings via the dorsal roots of S2–S4, eventually reaching the somatic sensory cortex (reflex sexual excitation may also occur by local stimulation, as is evident in paraplegics). The reflex effects of such stimulation are increased parasympathetic activity, which, as noted, causes relaxation of the smooth muscles in the wall of the sinusoids and subsequent erection. Finally, the somatic component of reflex sexual function arises from α-motor neurons in the lumbar and sacral spinal cord segments. These neurons provide excitatory innervation to the bulbocavernosus and ischiocavernosus muscles, which are active during ejaculation in males and mediate the contractions of the perineal (pelvic floor) muscles that accompany orgasm in both males and females. Sexual functions are governed centrally by the anterior-medial and medial-tuberal zones of the hypothalamus, which contain a variety of nuclei pertinent to visceral motor control and reproductive behavior (see Box A ). Although they remain poorly understood, these nuclei act as integrative centers for sexual responses and are also thought to be involved in more complex aspects of sexuality, such as sexual preference and gender identity (see Chapter 30 ). The relevant hypothalamic nuclei receive inputs from several areas of the brain, including—as one might imagine—the cortical and subcortical structures concerned with emotion and memory (see Chapters 29 and 31 ).

Female Reproductive System

Sympathetic Innervation

The sympathetic preganglionic neurons innervating the smooth muscle of the uterine wall are located in the IML at the T12-L2 level. Their preganglionic fibers pass through the sympathetic chain, exit in the lumbar splanchnic nerves, and synapse on postganglionic neurons in the inferior mesenteric ganglion. The postganglionic fibers from these neurons pass through the hypogastric plexus and innervate the female sexual organ (vagina) and the uterus (Fig. 22-10).
Some preganglionic fibers from L1-L2 spinal segments descend in the sympathetic chain and synapse on postganglionic neurons in the hypogastric plexus. The postganglionic fibers from these neurons then innervate the female erectile tissue (clitoris) (Fig. 22-1). Activation of the sympathetic nervous system results in contraction of the uterus.

Parasympathetic Innervation

The location of the parasympathetic preganglionic neurons and the pathways they follow to innervate the uterus and female sexual organ are similar to those described for the male sexual organ (Fig. 22-10). The mechanism of vasodilation in the female erectile tissue (clitoris) is similar to that described for the male sexual organ. Parasympathetic stimulation causes stimulation of the female erectile tissue and relaxation of the uterine smooth muscle. The relaxation of the uterine smooth muscle may be variable due to hormonal influences on this muscle. The pain-sensing neurons innervating the uterus are located in the dorsal root ganglia at T12-L2 and S2-S4. Their peripheral axons pass through the hypogastric plexus and terminate in the uterus, while their central terminals synapse in the substantia gelatinosa at the T12-L2 and S2-S4 levels. The secondary pain-sensing neurons then project to the cerebral cortex via the thalamus (see Chapter 15).

Medullary, Pontine, and Mesencephalic Control of the Autonomic Nervous System

Many neuronal areas in the brain stem reticular substance and along the course of the tractus solitarius of the medulla, pons, and mesencephalon, as well as in many special nuclei (Figure 60–5), control different autonomic functions such as arterial pressure, heart rate, glandular secretion in the gastrointestinal tract, gastrointestinal peristalsis, and degree of contraction of the urinary bladder. Control of each of these is discussed at appropriate points in this text. Suffice it to point out here that the most important factors controlled in the brain stem are arterial pressure, heart rate, and respiratory rate. Indeed, transection of the brain stem above the midpontine level allows basal control of arterial pressure to continue as before but prevents its modulation by higher nervous centers such as the hypothalamus. Conversely, transection immediately below the medulla causes the arterial pressure to fall to less than one-half normal. Closely associated with the cardiovascular regulatory centers in the brain stem are the medullary and pontine centers for regulation of respiration, which are discussed in Chapter 41. Although this is not considered to be an autonomic function, it is one of the involuntary functions of the body.

Control of Brain Stem Autonomic Centers by Higher Areas

Signals from the hypothalamus and even from the cerebrum can affect the activities of almost all the brain stem autonomic control centers. For instance, stimulation in appropriate areas mainly of the posterior hypothalamus can activate the medullary cardiovascular control centers strongly enough to increase arterial pressure to more than twice normal. Likewise, other hypothalamic centers control body temperature, increase or decrease salivation and gastrointestinal activity, and cause bladder emptying. To some extent, therefore, the autonomic centers in the brain stem act as relay stations for control activities initiated at higher levels of the brain, especially in the hypothalamus.
In Chapters 58 and 59, it is pointed out also that many of our behavioral responses are mediated through (1) the hypothalamus, (2) the reticular areas of the brain stem, and (3) the autonomic nervous system. Indeed, some higher areas of the brain can alter function of the whole autonomic nervous system or of portions of it strongly enough to cause severe autonomic-induced disease such as peptic ulcer of the stomach or duodenum, constipation, heart palpitation, or even heart attack.

The Hypothalamus

The hypothalamus is located at the base of the forebrain, bounded by the optic chiasm rostrally and the midbrain tegmentum caudally. It forms the floor and ventral walls of the third ventricle and is continuous through the infundibular stalk with the posterior pituitary gland, as illustrated in figure A. Because of its central position in the brain and its proximity to the pituitary, it is not surprising that the hypothalamus integrates information from the forebrain, brainstem, spinal cord, and various endocrine systems, being particularly important in the central control of visceral motor functions. The hypothalamus comprises a large number of distinct nuclei, each with its own complex pattern of connections and functions. The nuclei, which are intricately interconnected, can be grouped in three longitudinal regions referred to as periventricular, medial, and lateral. They can also be grouped along the anterior-posterior dimension into what are referred to as the anterior (or preoptic), tuberal, and posterior regions (figure B). The anterior periventricular group contains the suprachiasmatic nucleus, which receives direct retinal input and drives circadian rhythms (see Chapter 28 ). More scattered neurons in the periventricular region (located along the wall of the third ventricle) manufacture peptides known as releasing or inhibiting factors that control the secretion of a variety of hormones by the anterior pituitary. The axons of these neurons project to the median eminence, a region at the junction of the hypothalamus and pituitary stalk, where the peptides are secreted into the portal circulation that supplies the anterior pituitary. Nuclei in the anterior-medial region include the paraventricular and supraoptic nuclei, which contain the neurosecretory neurons whose axons extend into the posterior pituitary. With appropriate stimulation, these neurons secrete oxytocin or vasopressin (antidiuretic hormone) directly into the bloodstream. Other neurons in the paraventricular nucleus project to the preganglionic neurons of the sympathetic and parasympathetic divisions in the brainstem and spinal cord. It is these cells that are thought to exert hypothalamic control over the visceral motor system and to modulate the activity of the poorly defined nuclei in the brainstem tegmentum that organize specific autonomic reflexes such as respiration and vomiting. The paraventricular nucleus, like other hypothalamic nuclei, receives inputs from the other hypothalamic zones, which are in turn related to the cortex, hippocampus, amygdala, and other central structures that, as noted in the text, are all capable of influencing visceral motor function. The medial-tuberal region nuclei ( tuberal refers to the tuber cinereum, the anatomical name given to the middle portion of the inferior surface of the hypothalamus) include the dorsomedial and ventromedial nuclei, which are involved in feeding, reproductive and parenting behavior, thermoregulation, and water balance.
These nuclei receive inputs from structures of the limbic system, as well as from visceral sensory nuclei in the brainstem (e.g., the nucleus of the solitary tract). Finally, the lateral region of the hypothalamus is really a rostral continuation of the midbrain reticular formation. Thus, the neurons of the lateral region are not grouped into nuclei, but are scattered among the fibers of the medial forebrain bundle, which runs through the lateral hypothalamus. These cells control behavioral arousal and shifts of attention, especially as related to reproductive activities. In summary, the hypothalamus regulates an enormous range of physiological and behavioral activities, including control of body temperature, sexual activity, reproductive endocrinology, and attack-and-defense (aggressive) behavior. It is not surprising, then, that this intricate structure is the key controlling center for visceral motor activity and for homeostatic functions generally.

Figure 49-11 The structure of the hypothalamus. A. Frontal view of the hypothalamus (section along the plane shown in part B). B. A medial view shows most of the main nuclei. The hypothalamus is often divided analytically into three areas in a rostrocaudal direction: the preoptic area, the tuberal level, and the posterior level.

The Hypothalamus Integrates Autonomic and Endocrine Functions With Behavior

The hypothalamus plays a particularly important role in regulating the autonomic nervous system and was once referred to as the “head ganglion” of the autonomic nervous system. But recent studies of hypothalamic function have led to a somewhat different view. Whereas early studies found that electrical stimulation or lesions in the hypothalamus can profoundly affect autonomic function, more recent investigations have demonstrated that many of these effects are due to involvement of descending and ascending pathways of the cerebral cortex or the basal forebrain passing through the hypothalamus. Modern studies indicate that the hypothalamus functions to integrate autonomic response and endocrine function with behavior, especially behavior concerned with the basic homeostatic requirements of everyday life. The hypothalamus serves this integrative function by regulating five basic physiological needs: (1) it controls blood pressure and electrolyte composition by a set of regulatory mechanisms that range from control of drinking and salt appetite to the maintenance of blood osmolality and vasomotor tone; (2) it regulates body temperature by means of activities ranging from control of metabolic thermogenesis to behaviors such as seeking a warmer or cooler environment; (3) it controls energy metabolism by regulating feeding, digestion, and metabolic rate; (4) it regulates reproduction through hormonal control of mating, pregnancy, and lactation; and (5) it controls emergency responses to stress, including physical and immunological responses to stress, by regulating blood flow to muscle and other tissues and the secretion of adrenal stress hormones. The hypothalamus regulates these basic life processes by recourse to three main mechanisms. First, the hypothalamus has access to sensory information from virtually the entire body. It receives direct inputs from the visceral sensory system and the olfactory system, as well as the retina. The visual inputs are used by the suprachiasmatic nucleus to synchronize the internal clock mechanism to the day-night cycle in the external world (Chapter 3).
Visceral somatosensory inputs carrying information about pain are relayed to the hypothalamus from the spinal and trigeminal dorsal horn (Chapters 23 and 24). In addition, the hypothalamus has internal sensory neurons that are responsive to changes in local temperature, osmolality, glucose, and sodium, to name a few examples. Finally, circulating hormones such as angiotensin II and leptin enter the hypothalamus at specialized zones along the margins of the third ventricle called circumventricular organs, where they interact directly with hypothalamic neurons. Second, the hypothalamus compares sensory information with biological set points. It compares, for example, local temperature in the preoptic area to the set point of 37°C and, if the hypothalamus is warm, activates mechanisms for heat dissipation. There are set points for a wide variety of physiological processes, including blood sugar, sodium, osmolality, and hormone levels. Finally, when the hypothalamus detects a deviation from a set point, it adjusts an array of autonomic, endocrine, and behavioral responses to restore homeostasis. If the body is too warm, the hypothalamus shifts blood flow from deep to cutaneous vascular beds and increases sweating, to increase heat loss through the skin. It increases vasopressin secretion, to conserve water for sweating. Meanwhile, the hypothalamus activates coordinated behaviors, such as seeking to change the local ambient temperature or seeking a cooler environment. All of these processes must be precisely coordinated. For example, adjustments in blood flow in different vascular beds are important for such diverse activities as thermoregulation, digestion, response to emergency, and sexual intercourse. In order to do this, the hypothalamus contains an array of specialized cell groups with different functional roles.

The Hypothalamus Contains Specialized Groups of Neurons Clustered in Nuclei

Although the hypothalamus is very small, occupying only about 4 grams of the total 1400 grams of adult human brain weight, it is packed with a complex array of cell groups and fiber pathways (Figure 49-11). The hypothalamus can be divided into three regions: anterior, middle, and posterior. The most anterior part of the hypothalamus, overlying the optic chiasm, is the preoptic area. The preoptic nuclei, which include the circadian pacemaker (suprachiasmatic nucleus), are mainly concerned with integration of different kinds of sensory information needed to judge deviation from physiological set points. The preoptic area controls blood pressure and composition; cycles of activity, body temperature, and many hormones; and reproductive activity. The middle third of the hypothalamus, overlying the pituitary stalk, contains the dorsomedial, ventromedial, paraventricular, supraoptic, and arcuate nuclei. The paraventricular nucleus includes both magnocellular and parvocellular neuroendocrine components controlling the posterior and anterior pituitary gland. In addition, it contains neurons that innervate both the parasympathetic and sympathetic preganglionic neurons in the medulla and the spinal cord, thus playing a major role also in regulating autonomic responses. The arcuate and periventricular nuclei, along the wall of the third ventricle, like the paraventricular nucleus contain parvocellular neuroendocrine neurons, whereas the supraoptic nucleus contains additional magnocellular neuroendocrine cells.
The ventromedial and dorsomedial nuclei project mainly locally within the hypothalamus and to the periaqueductal gray matter, to regulate complex integrative functions such as control of growth, feeding, maturation, and reproduction. Finally, the posterior third of the hypothalamus includes the mammillary body and the overlying posterior hypothalamic area. In addition to the mammillary nuclei, whose function remains enigmatic, this region includes the tuberomammillary nucleus, a histaminergic cell group that is important in regulating wakefulness and arousal. The major nuclei of the hypothalamus are located for the most part in the medial part of the hypothalamus, sandwiched between two major fiber systems. A massive longitudinal fiber pathway, the medial forebrain bundle, runs through the lateral hypothalamus. The medial forebrain bundle connects the hypothalamus with the brain stem below, and with the basal forebrain, amygdala, and cerebral cortex above. Large neurons scattered among the fibers of the medial forebrain bundle provide long-ranging hypothalamic outputs that reach from the cerebral cortex to the sacral spinal cord. They are involved in organizing behaviors as well as autonomic responses. A second, smaller fiber system is located medial to the major hypothalamic nuclei, in the wall of the third ventricle. This periventricular fiber system contains longitudinal fibers that link the hypothalamus to the periaqueductal gray matter in the midbrain. This pathway is thought to be important in activating simple, stereotyped behavioral patterns, such as posturing during sexual behavior. The periventricular system also conveys the axons of the parvocellular neuroendocrine neurons of the periventricular region, including the paraventricular and arcuate nuclei, to the median eminence, for control of the anterior pituitary gland. They are met in the median eminence by the axons from the magnocellular neurons, which continue down the pituitary stalk to the posterior pituitary gland.

Inputs from limbic structures. Projections from hippocampal formation through the fornix (shown in red), amygdala through the stria terminalis (shown in green), and septal area through the medial forebrain bundle (shown in blue) to the hypothalamus. Other inputs, such as those from the brainstem, are omitted from this diagram.

Inputs from cerebral cortex. Diagrams illustrate the pathways by which the prefrontal cortex (green) and anterior cingulate gyrus (red) supply the hypothalamus by virtue of relays in the mediodorsal and midline thalamic nuclear groups (blue).

Efferent projections of the mammillary bodies to the anterior thalamic nucleus via the mammillothalamic tract and to the midbrain tegmentum via the mammillotegmental tract.

Major efferent projections of the hypothalamus. Not shown in this illustration are connections from the hypothalamus to the pituitary gland.

Figure 49-12 The hypothalamus controls the pituitary gland both directly and indirectly through hormone-releasing neurons. Peptidergic neurons (5) release oxytocin or vasopressin into the general circulation through the posterior pituitary. Two general types of neurons are involved in regulation of the anterior pituitary. Peptidergic neurons (3, 4) synthesize and release hormones into the hypophyseal-portal circulation. The second type of neuron is the link between the peptidergic neurons and the rest of the brain.
These neurons, some of which are monoaminergic, are believed to form synapses with peptidergic neurons either on the cell body (1) or on the axon terminal (2).

The Hypothalamus Controls the Endocrine System

The hypothalamus controls the endocrine system directly, by secreting neuroendocrine products into the general circulation from the posterior pituitary gland, and indirectly, by secreting regulatory hormones into the local portal circulation, which drains into the blood vessels of the anterior pituitary (Figure 49-12). These regulatory hormones control the synthesis and release of anterior pituitary hormones into the general circulation. The highly fenestrated (perforated) capillaries of the posterior pituitary and median eminence of the hypothalamus facilitate the entry of hormones into the general circulation or the portal plexus. Direct and indirect control form the basis of our modern understanding of hypothalamic control of endocrine activity.

Magnocellular Neurons Secrete Oxytocin and Vasopressin Directly From the Posterior Pituitary

Large neurons in the paraventricular and supraoptic nuclei, constituting the magnocellular region of the hypothalamus, project to the posterior pituitary gland (neurohypophysis). Some of the magnocellular neuroendocrine neurons in the paraventricular and supraoptic nuclei release the neurohypophyseal hormone oxytocin, while others release vasopressin into the general circulation by way of the posterior pituitary (Figure 49-13). These peptides circulate to target organs of the body that control water balance and milk release. Oxytocin and vasopressin are peptides that contain nine amino acid residues (Table 49-2). Like other peptide hormones, they are cleaved from larger prohormones (see Chapter 15). The prohormones are synthesized in the cell body and cleaved within transport vesicles as they travel down the axons. The peptide neurophysin is a cleavage product of the processing of vasopressin and oxytocin and is released along with the hormone in the posterior pituitary. The neurophysin formed in neurons that release vasopressin differs somewhat from that produced in neurons that release oxytocin.

Parvocellular Neurons Secrete Peptides That Regulate Release of Anterior Pituitary Hormones

Geoffrey Harris proposed in the 1950s that the anterior pituitary gland is regulated indirectly by the hypothalamus. He demonstrated that the hypophysial portal veins, which carry blood from the hypothalamus to the anterior pituitary gland, convey important signals that control anterior pituitary secretion. In the 1970s the structure of a series of peptide hormones that carry these signals was elucidated. These hormones fall into two classes: releasing hormones and release-inhibiting hormones (Table 49-3). Of all the anterior pituitary hormones, only prolactin is under predominantly inhibitory control. Hence transection of the pituitary stalk causes insufficiency of adrenal cortex, thyroid, gonadal, and growth hormones, but increased prolactin secretion. Systematic electrical recordings have not been made from neurons that secrete releasing hormones. However, they are believed to fire in bursts because of the pulsatile nature of secretion of the anterior pituitary hormones, which show periodic surges throughout the day. Episodic firing may be particularly effective for causing hormone release and may limit receptor inactivation. The neurons that make releasing hormones are found mainly along the wall of the third ventricle.
The gonadotropin-releasing hormone (GnRH) neurons tend to be located most anteriorly, along the basal part of the third ventricle. Neurons that make somatostatin, corticotropin-releasing hormone (CRH), and dopamine are located more dorsally and are found in the medial part of the paraventricular nucleus. Neurons that make growth hormone-releasing hormone (GRH), thyrotropin-releasing hormone (TRH), GnRH, and dopamine are found in the arcuate nucleus, an expansion of the periventricular gray matter that overlies the median eminence, in the floor of the third ventricle (see Figure 49-10). The median eminence contains a plexus of fine capillary loops. These are fenestrated capillaries, and the terminals of the neurons that contain releasing hormones end on these loops. The blood then flows from the median eminence into a secondary (portal) venous system, which carries it to the anterior pituitary gland (see Figure 49-11).

An Overall View

The three divisions of the autonomic nervous system comprise an integrated motor system that acts in parallel with the somatic motor system and is responsible for homeostasis. Essential to the functioning of the motor outflow are the visceral sensory afferents that are relayed from the nucleus of the solitary tract through a network of central autonomic control nuclei. The hypothalamus integrates somatic, visceral, and behavioral information from all of these sources, thus coordinating autonomic and endocrine outflow with behavioral state. Several features of the autonomic nervous system permit rapid integrated responses to changes in the environment. The activity of effector organs is finely controlled by coordinated and balanced excitatory and inhibitory inputs from tonically active postganglionic neurons. Moreover, the sympathetic system is greatly divergent, permitting the entire body to respond to extreme conditions.

In addition to the small molecule neurotransmitters—ACh and norepinephrine—a wide variety of peptides are thought to be released by autonomic neurons either onto postganglionic cells or their targets. Many of these peptides act to alter the efficacy of cholinergic or adrenergic transmission. The autonomic nervous system uses a rich variety of chemical mediators, several of which may commonly coexist in single autonomic neurons. The release of different combinations of chemical mediators from autonomic neurons may represent a means of “chemical coding” of information transfer in the different branches of the autonomic nervous system, although we are still only beginning to learn how to read the code.

As we shall also see in the following two chapters, the autonomic nervous system is a remarkably adaptable system of homeostatic control. It can function locally through branches of primary sensory fibers that terminate in autonomic ganglia, or intrinsically through the enteric nervous system on the functions of the digestive tract. Control centers in the brain stem are involved in several autonomic reflexes, while the hypothalamus integrates behavioral and emotional responses arising from the forebrain with ongoing metabolic needs to produce highly coordinated autonomic control and behavior.

Summary of the central control of the visceral motor system. The major organizing center for visceral motor functions is the hypothalamus (see Box A ).
Central Control of the Visceral Motor Functions

The visceral motor system is regulated in part by circuitry in the cerebral cortex: Involuntary visceral reactions such as blushing in response to consciously embarrassing stimuli, vasoconstriction and pallor in response to fear, and autonomic responses to sexual situations make this plain. Indeed, autonomic function is intimately related to emotional experience and expression, as described in Chapter 29 . In addition, the hippocampus, thalamus, basal ganglia, cerebellum, and reticular formation all influence the visceral motor system. The major center in the control of the visceral motor system, however, is the hypothalamus ( Box A ). The hypothalamic nuclei relevant to visceral motor function project to the nuclei in the brainstem that organize many visceral reflexes (e.g., respiration, vomiting, urination), to the cranial nerve nuclei that contain parasympathetic preganglionic neurons, and to the sympathetic and parasympathetic preganglionic neurons in the spinal cord. The general organization of this central autonomic control is summarized in Figure 21.6 , and some important clinical manifestations of damage to this descending system are illustrated in Box B . Although the hypothalamus is the key structure in the overall organization of visceral function, and in homeostasis generally, the visceral motor system continues to function independently if disease or injury impedes the influence of this controlling center. The major subcortical centers for the ongoing regulation of the autonomic function in the absence of hypothalamic control are a series of poorly understood nuclei in the brainstem tegmentum that organize specific visceral functions such as cardiac reflexes, reflexes that control the bladder, and reflexes related to sexual function, as well as other critical autonomic reflexes such as respiration and vomiting. The afferent information from the viscera that drives these brainstem centers is, as noted already, received by neurons in the nucleus of the solitary tract, which relays these signals to the hypothalamus and to the various autonomic centers in the brainstem tegmentum.

Figure 51-1 Homeostatic processes can be analyzed in terms of control systems. A. A control system regulates a controlled variable. When a feedback signal indicates the controlled variable is below or above the set point an error signal is generated. This signal turns on (or facilitates) appropriate behaviors and physiological responses, and turns off (or suppresses) incompatible responses. An error signal also can be generated by external (incentive) stimuli. B. A negative feedback system without a set point controls fat stores. (Based on data of DiGirolamo and Rudman 1968.)

So far in this book our discussion of the neural control of behavior has focused on how the brain translates external sensory information about events in the environment into coherent perceptions and motor action. In the final two parts of the book we examine how development and learning profoundly shape the brain's ability to do this. These parts of the book are to a large degree concerned with the cognitive aspects of behavior—what a person knows about the outside world. However, behavior also has noncognitive aspects that reflect not what the individual knows but what he or she needs or wants. Here we are concerned with how individuals respond to internal rather than external stimuli. This is the domain of motivation.
Motivation is a catch-all term that refers to a variety of neuronal and physiological factors that initiate, sustain, and direct behavior. These internal factors are thought to explain, in part, variation in the behavior of an individual over time. As discussed earlier in this book, the behaviorists who dominated the study of behavior in the first half of this century largely ignored internal factors in their attempts to explain behavior. With the rise of cognitive psychology a few decades ago the behaviorist paradigm has receded and motivation, with all of its complexity, has become the subject of serious scientific study once again. The biological study of motivation has until quite recently been confined to studies of simple physiological or homeostatic instances of motivation called drive states. For this reason our discussion here focuses primarily on drive states, which are the outcome of homeostatic processes related to hunger, thirst, and temperature regulation. Drive states are characterized by tension and discomfort due to a physiological need followed by relief when the need is satisfied. It is important to recognize, however, that drive states are merely one subtype, perhaps the simplest examples, of the motivational states that direct behavior. In general, motivational states may be broadly classified into two types: (1) elementary drive states and more complex physiological regulatory forces brought into play by alterations in internal physical conditions such as hunger, thirst, and temperature, and (2) personal or social aspirations acquired by experience. Freud and contemporary cognitive psychologists have suggested that both forms, but especially personal and social aspirations, represent a complex interplay between physiological and social forces, and between conscious and unconscious mental processes. The neurobiological study of the second type of motivational states is in its infancy. The issues that surround drive states relate to survival. Activities that enhance immediate survival, such as eating or drinking, or those that ensure long-term survival, such as sexual behavior or caring for offspring, are pleasurable and there is a great natural urge to repeat these behaviors. Drive states steer behavior toward specific positive goals and away from negative ones. In addition, drive states require organization of individual behaviors into a goal-oriented sequence. Attainment of the goal decreases the intensity of the drive state and thus the motivated behavior ceases. A hungry cat is ever alert for the occasional mouse, ready to pounce when it comes into sight. Once satiated, the cat will not pounce again for some time. Finally, drive states have general effects; they increase our general level of arousal and thereby enhance our ability to act. Drive states therefore serve three functions: they direct behavior toward or away from a specific goal; they organize individual behaviors into a coherent, goal-oriented sequence; and they increase general alertness, energizing the individual to act. The drive states that neurobiologists have studied most effectively are those related to temperature regulation, hunger, and thirst. Until recently, these drive states were inferred from behavior alone. But as we learn more about the physiological correlates of drive states, we rely less on traditional psychological concepts of motivation and more on concepts derived from servo-control models applied to living organisms. 
Admittedly, such an approach reduces drive states to a complex homeostatic reflex that is responsive to multiple stimuli. Some of these stimuli are internal in response to tissue deficits; others are external (eg, the sight or smell of food) and are regulated by excitatory and inhibitory systems. Since regulation of internal states involves the autonomic nervous system and the endocrine system, we shall consider the relationship of motivational states to autonomic and neuroendocrine responses. We first examine how servo-control models have made the study of drive states amenable to biological experimentation. We then examine the regulation of these simple motivational states by factors other than tissue deficits, such as circadian rhythms, ecological constraints, and pleasure. Finally, we discuss the neural systems of the brain concerned with reward or reinforcement, an important component of motivation. These neural systems have been well delineated. Most addictive drugs, such as nicotine, alcohol, opiates, and cocaine, produce their actions by acting on or co-opting the same neural pathways that mediate positively motivated behaviors essential for survival. Drive States Are Simple Cases of Motivational States That Can Be Modeled as Servo-Control Systems Drive states can be understood by analogy with control systems, or servomechanisms, that regulate machines. While specific physiological servomechanisms have not yet been demonstrated directly, the servomechanism model permits us to organize our thinking about the complex operation of homeostasis, and makes it possible to define experimentally the physiological control of homeostasis. This approach has been most successfully applied to temperature regulation. Because body temperature can be readily measured, the mechanism regulating temperature has been studied by examining the relationship between the internal stimulus (temperature) and various external stimuli. This control system approach has been less successful when applied to more complex regulatory behaviors, such as feeding, drinking, and sex, in which the relevant internal stimuli are difficult to identify and measure. Nevertheless, at present, the control systems model is probably the best approach to analyzing even these more complex internal states. Servomechanisms maintain a controlled variable within a certain range. One way of regulating the controlled variable is to measure it by means of a feedback detector and compare the measured value with a desired value, or set point. The comparison is accomplished by an error detector, or integrator, that generates an error signal when the value of the controlled variable does not match the set point. The error signal then drives controlling elements that adjust the controlled system in the desired direction. The error signal is controlled not only by internal feedback stimuli but also by external stimuli. All examples of physiological control seem to involve both inhibitory and excitatory effects, which function together to adjust the control system (Figure 51-1). The control system used to heat a home illustrates these principles. The furnace system is the controlling element. The room temperature is the controlled variable. The home thermostat is the error detector. The setting on the thermostat is the set point. Finally, the output of the thermostat is the error signal that turns the control element on or off. 
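The thermostat analogy maps directly onto a few lines of code. The sketch below is a minimal, hypothetical simulation of the servomechanism just described; the function name, gain, and temperature values are illustrative assumptions, not anything stated in the text.

```python
# Minimal sketch of the servo-control loop described above (all values illustrative).
def servo_step(controlled_variable, set_point, gain=0.2):
    """One cycle: feedback detector -> error detector -> controlling element."""
    feedback = controlled_variable           # the feedback detector measures the variable
    error = set_point - feedback             # the error detector compares it with the set point
    correction = gain * error                # the error signal drives the controlling element
    return controlled_variable + correction  # the controlled system moves toward the set point

room_temp = 15.0   # controlled variable (degrees Celsius)
thermostat = 22.0  # set point
for _ in range(10):
    room_temp = servo_step(room_temp, thermostat)
print(round(room_temp, 2))  # approaches 22.0 as the error signal shrinks
```

Note that, as in the text, the error signal both turns responses on and scales them; external (incentive) stimuli could be modeled as an extra term added to the error.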
Figure 51-2 This sagittal section of the human brain illustrates the hypothalamic regions concerned with heat conservation and heat dissipation. Temperature Regulation Involves Integration of Autonomic, Endocrine, and Skeletomotor Responses Temperature regulation nicely fits the model of a control system. Normal body temperature is the set point in the system of temperature regulation. The integrator and many controlling elements for temperature regulation appear to be located in the hypothalamus. Because temperature regulation requires integrated autonomic, endocrine, and skeletomotor responses, the anatomical connections of the hypothalamus make this structure well suited for this task. The feedback detectors collect information about body temperature from two main sources: peripheral temperature receptors located throughout the body (in the skin, spinal cord, and viscera) and central receptors located mainly in the hypothalamus. The central detectors of temperature, both low and high, are located in the anterior hypothalamus. The hypothalamic receptors are probably neurons whose firing rate is highly dependent on local temperature, which in turn is importantly affected by the temperature of the blood. Although the anterior hypothalamic area is involved in temperature sensing, control of body temperature appears to be regulated by separate regions of the hypothalamus. The anterior hypothalamus (preoptic area) mediates decreases and the posterior hypothalamus mediates increases in body temperature. Thus, electrical stimulation of the anterior hypothalamus causes dilation of blood vessels in the skin, panting, and a suppression of shivering, responses that decrease body temperature. In contrast, electrical stimulation of the posterior hypothalamus produces an opposing set of responses that generate or conserve heat (Figure 51-2). As with fear responses, which are evoked by electrical stimulation of the hypothalamus (Chapter 50), temperature regulatory responses evoked by electrical stimulation also include appropriate nonvoluntary responses involving the skeletomotor system. For example, stimulation of the anterior hypothalamus (preoptic area) produces panting, while stimulation of the posterior hypothalamus produces shivering. Ablation experiments corroborate the critical role of the hypothalamus in regulating temperature. Lesions of the anterior hypothalamus cause chronic hyperthermia and eliminate the major responses that normally dissipate excess heat. Lesions in the posterior hypothalamus have relatively little effect if the animal is kept at room temperature (approximately 22°C). If the animal is exposed to cold, however, it quickly becomes hypothermic because the homeostatic mechanisms fail to generate and conserve heat. Figure 51-3 Peripheral and central information on temperature is summated in the hypothalamus. Changes in either room temperature or local hypothalamic temperature alter the response rate of rats trained to press a button to receive a brief burst of cool air. When the room temperature is increased, thus presumably increasing skin temperature, the response rate increases roughly in proportion to the temperature increase (points a and b). If the temperature of the hypothalamus is also increased (by perfusing warm water through a hollow probe), the response rate reflects a summation of information on skin temperature and hypothalamic temperature (points c and d). 
If the skin temperature remains high enough but the hypothalamus is cooled, the response rate decreases or is suppressed altogether (point e). (From data of Corbit 1973 and Satinoff 1964.) The hypothalamus also controls endocrine responses to temperature challenges. Thus, long-term exposure to cold can enhance the release of thyroxine, which increases body heat by increasing tissue metabolism. In addition to driving appropriate autonomic, endocrine, and nonvoluntary skeletal responses, the error signal of the temperature control system can also drive voluntary behaviors that minimize the error signal. For example, a rat can be taught to press a button to receive puffs of cool air in a hot environment. After training, when the chamber is at normal room temperature, the rat will not press the button for cool air. If the anterior hypothalamus is locally warmed by perfusing it with warm water through a hollow probe, the rat will run to the cool-air button and press it repeatedly. Hypothalamic integration of peripheral and central inputs can be demonstrated by heating the environment (and thereby the skin of the animal) and concurrently cooling or heating the hypothalamus. When both the environment and hypothalamus are heated, the rat presses the cool-air button faster than when either one is heated alone. However, even in a hot environment the pressing of the button for cool air can be suppressed completely by cooling the hypothalamus (Figure 51-3). Recordings from neurons in the preoptic area and the anterior hypothalamus support the idea that the hypothalamus integrates peripheral and central information relevant to temperature regulation. Neurons in this region, called warm-sensitive neurons, increase their firing when the local hypothalamic tissue is warmed. Other neurons, called cold-sensitive neurons, respond to local cooling. The warm-sensitive neurons, in addition to responding to local warming of the brain, are usually excited by warming the skin or spinal cord and are inhibited by cooling the skin or spinal cord. The cold-sensitive neurons exhibit the opposite behavior. Thus, these neurons could integrate the thermal information from peripheral receptors with that from neurons within the brain. Furthermore, many temperature-sensitive neurons in the hypothalamus also respond to nonthermal stimuli, such as osmolarity, glucose, sex steroids, and blood pressure. In humans the set point of the temperature control system is approximately 98.6°F (37°C), although it normally varies somewhat diurnally, decreasing to a minimum during sleep. The set point can be altered by pathological states, for example by the action of pyrogens, which induce fever. Systemic pyrogens, such as the macrophage product interleukin-1, enter the brain at regions in which the blood-brain barrier is incomplete, such as the preoptic area, and act there to increase the set point. The body temperature then rises until the new set point is reached. When this occurs, a part of the brain known as the antipyretic area is activated and limits the magnitude of the fever. The antipyretic area includes the septal nuclei, which are located anterior to the hypothalamic preoptic areas, near the anterior commissure. The antipyretic area is innervated by neurons that use the peptide vasopressin as transmitter. 
Injection of vasopressin into the septal area counteracts fever in a manner similar to that of antipyretic drugs, such as aspirin and indomethacin, suggesting that some of the effects of these drugs are mediated by the central release of vasopressin. The antipyretic action of aspirin and indomethacin is blocked by injection into the septal nuclei of a vasopressin antagonist. In fact, convulsions brought on by high fevers may in part be evoked by vasopressin released in the brain as part of the antipyretic response. The control of body temperature is a clear example of the integrative action of the hypothalamus in regulating autonomic, endocrine, and drive-state control. It illustrates how the hypothalamus operates both directly on the internal environment and indirectly, by providing information about the internal environment to higher neural systems. Figure 51-4 Animals tend to adjust their food intake to achieve a normal body weight. The plots show a schematized growth curve for a group of rats. At arrow 1 one-third of the animals were maintained on their normal diet (curve b), one-third were force-fed (curve a), and one-third were placed on a restricted diet (curve c). At arrow 2 all rats were placed on a normal (ad libitum) diet. The force-fed animals lost weight and the starved animals gained weight until the mean weight of the two groups approached that of the normal growth curve (b). (Adapted from Keesey et al. 1976.) Feeding Behavior Is Regulated by a Variety of Mechanisms Like temperature regulation, feeding behavior may also be analyzed as a control system, although at every level of analysis the understanding of feeding is less complete. One reason for thinking that feeding behavior is subject to a control system is that body weight seems to be regulated by a set point. Humans often maintain the same body weight for many years. Since even a small increase or decrease of daily caloric intake could eventually result in a substantial weight change, the body must be governed by feedback signals that control nutrient intake and metabolism. Control of nutrient intake is seen most clearly in animals in which body weight is altered from the set point by food deprivation or force-feeding. In both instances the animal will adjust its subsequent food intake (either up or down) until it regains a weight appropriate for its age (Figure 51-4). Animals are thus said to defend their body weight against perturbations. Whereas body temperature is remarkably similar from one individual to another, body weight varies greatly. Furthermore, the apparent set point of an individual can vary with stress, palatability of the food, exercise, and many other environmental and genetic factors. One possible explanation for this difference between regulation of temperature and body weight is that the set point for body weight can itself be changed by a variety of factors. Another possibility is that body weight is regulated by a control system that has no formal set-point mechanism but which nevertheless functions as if there were a set point (Figure 51-1B). Figure 51-5 The set point for body weight appears to be altered by lesions of the lateral hypothalamus. Three groups of rats were used in this experiment. The control group was maintained on a normal diet. On day 0 the animals of the other two groups received small lesions in the lateral hypothalamus. 
One of these groups had been maintained on a normal diet; the other group had been starved before the lesion and consequently had lost body weight. After the lesion all animals were given free access to food. The lesioned animals that had not been prestarved initially decreased their food intake and lost body weight, while those that were prestarved rapidly gained weight until they reached the level of the other lesioned animals. (Adapted from Keesey et al. 1976.) Dual Controlling Elements in the Hypothalamus Contribute to the Control of Food Intake Food intake is thought to be under the control of two regions in the hypothalamus: a ventromedial region and a lateral region. In 1942 Albert W. Hetherington and Stephen Walter Ranson discovered that destruction of the ventromedial nuclei of the hypothalamus produces overeating (hyperphagia) and severe obesity. In contrast, bilateral lesions of the lateral hypothalamus produce severe neglect of eating (aphagia) so that the animal dies unless force-fed. Electrical stimulation produces the opposite effects of lesions. Whereas stimulation of the ventromedial region suppresses feeding, stimulation of the lateral hypothalamus elicits feeding. These observations were originally interpreted to mean that the lateral hypothalamus contains a feeding center and the medial hypothalamus a satiety center. This conclusion was reinforced by studies showing that chemical stimulation of these parts of the hypothalamus can also alter feeding behavior. This conceptually attractive conclusion proved faulty, however, as it became clear that the brain is not organized into discrete centers that by themselves control specific functions. Rather, as with perception and action, the neural circuits mediating homeostatic functions such as feeding are distributed among several structures in the brain. The effects of lateral or medial hypothalamic lesions on feeding are thought to be due in part to dysfunctions that result from damage to other structures. Three factors are particularly important: (1) alteration of sensory information, (2) alteration of set point, and (3) interference with behavioral arousal because of damage to dopaminergic fibers of passage. First, lateral hypothalamic lesions sometimes result in sensory and motor deficits as a result of the destruction of fibers of the trigeminal system and the dopaminergic fibers of the medial forebrain bundle. The sensory loss can contribute to the loss of feeding as well as to the so-called sensory neglect seen after lateral hypothalamic lesions. Thus, a unilateral lesion of the lateral hypothalamus results in loss of orienting responses to visual, olfactory, and somatic sensory stimuli presented contralateral to the lesion. Similarly, feeding responses to food presented contralaterally are also diminished. It is not clear whether this sensory neglect is due to disruption of sensory systems or to interference with motor systems directing responses contralateral to the lesion. Altered sensory responses are also seen in animals with lesions in the region of the ventromedial nucleus. These animals have heightened responses to the aversive or attractive properties of food and other stimuli. On a normal diet they eat more than do animals without lesions. 
Since the reduction in eating is similar to that seen in normal animals that are made obese by force-feeding, the enhanced sensory responsiveness to food of animals with ventromedial hypothalamic lesions is, at least in part, a consequence rather than a cause of the obesity. This interpretation is supported by Stanley Schachter's finding that some obese humans with no evidence of damage to the region of the ventromedial hypothalamus are also unusually responsive to the taste of food. Second, hypothalamic lesions may alter the set point for regulating body weight. Rats that were starved to reduce their weight before a small lesion was made in the lateral hypothalamus ate more than normal amounts and gained weight when they resumed eating, whereas the controls (nonstarved) lost weight (Figure 51-5). The starvation apparently brings the weight of these animals below the set point determined by the lateral lesion. Conversely, animals that were force-fed before ventromedial hypothalamic lesions did not overeat, which they would have done if they had not been previously force-fed. Third, lesions of the lateral hypothalamus can damage dopaminergic fibers that course from the substantia nigra to the striatum via the medial forebrain bundle as well as those that emanate from the ventral tegmental area (the mesolimbic projections) and innervate structures associated with the limbic system (the prefrontal cortex, amygdala, and nucleus accumbens; see Chapter 45). When nigrostriatal dopaminergic fibers are experimentally sectioned bilaterally below or above the level of the hypothalamus or are destroyed by the toxin 6-hydroxydopamine, animals exhibit a hypoarousal state and life-threatening aphagia similar to that observed after lateral hypothalamic lesions. The loss of dopamine does not account entirely for the lateral hypothalamic syndrome. The physiological profile and recovery of eating patterns are different after lesions of the lateral hypothalamus and depletion of dopamine, demonstrating that both the dopamine system and hypothalamic substrates contribute to the control of feeding. Lesioning of dopaminergic neurons alone or loss of the neurons of the lateral hypothalamus alone (using the excitotoxins kainic or ibotenic acid) produces less severe behavioral deficits than those seen after the classical lateral hypothalamic lesions. The combined loss of lateral hypothalamic neurons and dopaminergic fibers results in the classical syndrome by impairing both the substrate for monitoring physiological feedback and the neural systems that generate appropriate behavior. In fact, the dopamine agonist apomorphine restores eating and drinking responses to physiological challenges in rats after depletion of dopamine, but not in rats with lateral hypothalamic lesions. Below we shall examine the role of dopamine in food reward and reinforcement more generally when we consider studies of intracranial self-stimulation, the effect of dopamine-blocking drugs on learned behavior to obtain food, and the reinforcing effects of drugs of addiction. Some of the strongest evidence implicating the hypothalamus in the control of feeding comes from studies showing that a wide spectrum of transmitters produces profound alterations of feeding behavior when injected into the lateral hypothalamus and the area of the paraventricular nuclei. These studies also illustrate that different chemical systems are involved in the control of different classes of nutrients. 
Application of norepinephrine to the paraventricular nucleus greatly stimulates feeding; but, if given a choice, animals will eat more carbohydrate than protein or fat. In contrast, application of the peptide galanin selectively increases ingestion of fat, whereas opiates enhance consumption of protein. Figure 51-6 Hypothetical model of the mechanisms that regulate energy balance in mammals. (Adapted from Hervey 1969.) Food Intake Is Controlled by Short-Term and Long-Term Cues What cues does an organism use to regulate feeding? Two main cues for hunger have been identified: short-term cues that regulate the size of individual meals and long-term cues that regulate overall body weight (Figure 51-6). Short-term cues consist primarily of chemical properties of the food that act in the mouth to stimulate feeding behavior and in the gastrointestinal system and liver to inhibit feeding. The short-term satiety signals impinge on the hypothalamus through visceral afferent pathways, communicating primarily with the lateral hypothalamic regions. The effectiveness of short-term cues is modulated by long-term signals that reflect body weight. As we shall discuss in greater detail below, one such important signal is the peptide leptin, which is secreted from fat storage cells (adipocytes). By means of this signal, body weight is kept reasonably constant over a broad range of activity and diet. Daily energy expenditure is remarkably consistent when expressed as a function of body size (Figure 51-7A). Body weight is also maintained at a set level by self-regulating feedback mechanisms that adjust metabolic rate when the organism drifts away from its characteristic set point (Figure 51-7B). An animal maintained on a reduced-calorie diet eventually needs less food to maintain its weight because its metabolic rate decreases. Several humoral signals are thought to be important for short-term regulation of feeding behavior. The hypothalamus has glucoreceptors that respond to blood glucose levels. This system probably stimulates feeding behavior (in contrast to autonomic responses to blood glucose) primarily during emergency states in which blood glucose falls drastically. In addition, gut hormones released during a meal may contribute to satiety. Considerable evidence for such a humoral short-term signal comes from studies of the peptide cholecystokinin. Cholecystokinin is released from the duodenum and upper intestine when amino acids and fatty acids are present in the tract. Cholecystokinin released in the gut acts on visceral afferents that affect brain stem and hypothalamic areas, which are themselves sensitive to cholecystokinin. Injection into the ventricles or specifically into the paraventricular nucleus of small quantities of cholecystokinin and several other peptides (including neurotensin, calcitonin, and glucagon) also inhibits feeding. Therefore, cholecystokinin released as a neuropeptide in the brain may also inhibit feeding, independently of its release from the gut. Cholecystokinin is an example of a hormone or neuromodulator that has independent central and peripheral actions that are functionally related. Other examples include luteinizing hormone-releasing hormone (sexual behavior), adrenocorticotropin (stress and avoidance behavior), and angiotensin (responses to hemorrhage). The use of the same chemical signal for related central and peripheral functions is widespread in both vertebrates and invertebrates. 
Certain invertebrates, such as the sea slug Aplysia, have specific serotonergic neurons that both enhance feeding responses (by acting directly on muscles involved in consuming food) and promote arousal (by enhancing the excitability of central motor neurons that innervate these same muscles). The brain integrates multiple peripheral and neural signals to regulate energy homeostasis, maintaining a balance between food intake and energy expenditure. Peripheral factors indicative of long-term energy status are produced by adipose tissue (leptin, adiponectin) and the pancreas (insulin), whereas the acute hunger signal ghrelin (produced in the stomach) and satiety signals such as the gut hormones peptide YY(3–36) (PYY(3–36)), pancreatic polypeptide (PP), amylin and oxyntomodulin (OXM) indicate near-term energy status. The incretin hormones glucagon-like peptide-1 (GLP-1), glucose-dependent insulinotropic peptide (GIP), and potentially OXM improve the response of the endocrine pancreas to absorbed nutrients. Further feedback is provided by nutrient receptors in the upper small bowel and by neural signals from the stomach's stretch receptors indicating distention, which are primarily conveyed by the vagal afferent and sympathetic nerves to the nucleus of the solitary tract (NTS) in the brain stem. The arcuate nucleus (ARC) of the hypothalamus, which is located between the third ventricle and the median eminence, integrates these energy homeostatic feedback mechanisms. It accesses the short- and long-term hormonal and nutrient signals from the periphery via semi-permeable capillaries in the underlying median eminence, and receives neuronal feedback from the NTS. These collated signals act on two distinct subsets of neurons that control food intake in the ARC, which act as an accelerator and a brake, respectively. The first subset co-expresses the orexigenic (appetite-stimulating) agouti-related peptide (AgRP) and neuropeptide Y (NPY) neurotransmitters, acting as an accelerator in the brain to stimulate feeding. The other neuronal population releases the anorexigenic cocaine- and amphetamine-regulated transcript (CART) and pro-opiomelanocortin (POMC) neurotransmitters, both of which inhibit feeding. Both neuronal populations innervate the paraventricular nucleus (PVN), which, in turn, sends signals to other areas of the brain. These include hypothalamic areas such as the ventromedial nucleus, dorsomedial nucleus and the lateral hypothalamic area, which modulate this control system. Neural brain circuits integrate information from the NTS and multiple hypothalamic nuclei to regulate overall body homeostasis. Drinking Is Regulated by Tissue Osmolality and Vascular Volume The hypothalamus regulates water balance by its control of hormones, such as antidiuretic hormone. The hypothalamus also regulates aspects of drinking behavior. Unlike feeding, where intake is critical, the amount of water taken in is relatively unimportant as long as the minimum requirement is met. Within broad limits, excess intake is readily eliminated by the kidney. Nevertheless, a set point, or ideal amount of water intake, appears to exist, since too much or too little drinking represents inefficient behavior. If an animal takes in too little liquid at one time, it must soon interrupt other activities and resume its liquid intake to avoid underhydration. Likewise, drinking a large amount at one time results in unneeded time spent drinking as well as urinating to eliminate the excess fluid. 
Drinking is controlled by two main physiological variables: tissue osmolality and vascular (fluid) volume. This has led Alan Epstein to propose that the principal inputs controlling thirst arise when both physiological variables are depleted (the double-depletion hypothesis). Signals related to the variables reach mechanisms in the brain that control drinking either through afferent fibers from peripheral receptors or by humoral actions on receptors in the brain itself. These inputs control the physiological mechanisms of water conservation in such a way that fluid intake is coordinated with the control of fluid loss so as to maintain water balance. Thus, the hypothalamus integrates hormonal and osmotic cues sensing cell volume and the state of the extracellular space. The volume of water in the intracellular compartment is normally approximately double that of the extracellular space. This delicate balance is determined by the osmotic equilibrium between the compartments, which in turn is determined by extracellular sodium. The control of sodium is therefore a key element in the homeostatic mechanism regulating thirst. The two drives, thirst and salt appetite, appear to be handled by separate but interrelated mechanisms. Drinking also can be controlled by dryness of the tongue. Hyperthermia, detected at least in part by thermosensitive neurons in the anterior hypothalamus, may also contribute to thirst. The feedback signals for water regulation derive from many sources. Osmotic stimuli can act directly on osmoreceptor cells (or receptors that sense the level of Na+), probably neurons, in the hypothalamus. The feedback signals for vascular volume are located in the low-pressure side of the circulation—the right atrium and adjacent walls of the great veins—and large volume changes may also affect arterial baroreceptors in the aortic arch and carotid sinus. Signals from these sources can initiate drinking. Low blood volume, as well as other conditions that decrease body sodium, also results in increased renin secretion from the kidney. Renin, a proteolytic enzyme, cleaves plasma angiotensinogen into angiotensin I, which is then hydrolyzed to the highly active octapeptide angiotensin II. Angiotensin II elicits drinking as well as three other physiological actions that compensate for water loss: vasoconstriction, increased release of aldosterone, and increased release of vasopressin. For blood-borne angiotensin to affect behavior it must pass through the blood-brain barrier at specialized regions of the brain. The subfornical organ is a small neuronal structure that extends into the third ventricle and has fenestrated capillaries that readily permit the passage of blood-borne molecules (see Appendix B on the blood-brain barrier). The subfornical organ is sensitive to low concentrations of angiotensin II in the blood, and this information is conveyed to the hypothalamus by a neural pathway between the subfornical organ and the preoptic area. Neurons in this pathway in turn use an angiotensin-like molecule as a transmitter. Thus the same molecule regulates drinking by functioning as a hormone and a neurotransmitter. The preoptic area also receives information from baroreceptors throughout the body. This information is conveyed to various brain structures that initiate a search for water and drinking. Information from baroreceptors is also sent to the paraventricular nucleus, which mediates the release of vasopressin, which in turn regulates water retention. 
The signals that terminate drinking are less well understood than those that initiate drinking. It is clear, however, that the termination signal is not always merely the absence of the initiating signal. This principle holds for many examples of physiological and behavioral regulation, including feeding. Thus, for example, drinking initiated by low vascular fluid volume (eg, after severe hemorrhage) terminates well before the deficit is rectified. This is highly adaptive since it prevents water intoxication from excessive dilution of extracellular fluids and seems to prevent overhydration that could result from absorption of fluid in the alimentary system long after the cessation of drinking.
http://www.slideshare.net/drpsdeb/11b-autonaumic-ns
4.0625
Tidal friction, in astronomy, strain produced in a celestial body (such as the Earth or Moon) that undergoes cyclic variations in gravitational attraction as it orbits, or is orbited by, a second body. Friction occurs between water tides and sea bottoms, particularly where the sea is relatively shallow, or between parts of the solid crust of a planet or satellite that move against each other. Tidal friction on the Earth prevents the tidal bulge, which is raised in Earth's seas and crust by the Moon's pull, from staying directly under the Moon. Instead, the bulge is carried out from directly under the Moon by the rotation of the Earth, which spins almost 30 times for every time the Moon revolves in its orbit. The mutual attraction between the Moon and the material in the bulge tends to accelerate the Moon in its orbit, thereby moving the Moon farther from Earth by about three centimetres (1.2 inches) per year, and to slow Earth's daily rotation by a small fraction of a second per year. Millions of years from now these effects may cause the Earth to keep the same face always turned toward a distant Moon and to rotate once in a day about 50 times longer than the present one, a day equal in length to the month of that time. This condition probably will not be stable, due to the tidal effects of the Sun on the Earth–Moon system. That the Moon keeps the same part of its surface always turned toward Earth is attributed to the past effects of tidal friction in the Moon. The theory of tidal friction was first developed mathematically after 1879 by the English astronomer George Darwin (1845–1912), son of the naturalist Charles Darwin.
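To put the quoted figures in perspective, here is a rough back-of-the-envelope calculation. It assumes, purely for illustration, that the present recession rate of about 3 cm per year stays constant (in reality the rate changes over time), and it uses the standard mean Earth–Moon distance of roughly 384,400 km, which the article itself does not state.

```python
# Rough illustration using the ~3 cm/year lunar recession rate quoted above.
# Assumes a constant rate, which is a simplification; the true rate varies over time.
recession_m_per_year = 0.03
mean_earth_moon_distance_m = 3.844e8  # approximate present mean distance (meters)

for years in (1_000_000, 100_000_000, 1_000_000_000):
    gain_km = recession_m_per_year * years / 1000
    fraction = recession_m_per_year * years / mean_earth_moon_distance_m
    print(f"After {years:,} years: +{gain_km:,.0f} km ({fraction:.2%} of today's distance)")
```

Even at a constant rate the distance accumulates very slowly (about 30 km per million years), which is why the end state the article describes lies so far in the future.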
http://www.britannica.com/topic/tidal-friction
4.125
Worksheets and activities for teaching and reinforcing the use of nouns for English, Spanish, German and ESL students. This resource was reviewed using the Curriki Review rubric and received an overall Curriki Review System rating of 3, as of 2013-01-20. The collection contains 9 grammar resources, including:
- a worksheet identifying parts of speech
- a booklet of handouts about nouns for ESL students
- worksheets in PPT format for whole-class instruction
- a beginner-level German language-learning handout about the gender and number of nouns
- worksheets for ESL students about English plural spelling changes and nouns
- lessons for first graders learning Spanish words about family and rules for masculine and feminine endings
- pull-down-menu multiple-choice noun worksheets
- an external link to a variety of grammar worksheets
This rich collection will help students reinforce their understanding of nouns.
http://www.curriki.org/oer/Noun-Worksheets/
4.25
EARTH is starting to crumble under the strain of climate change. Over the last decade, rock avalanches and landslides have become more common in high mountain ranges, apparently coinciding with the increase in exceptionally warm periods (see “Early signs”). The collapses are triggered by melting glaciers and permafrost, which remove the glue that holds steep mountain slopes together. Worse may be to come. Thinning glaciers on volcanoes could destabilise vast chunks of their summit cones, triggering mega-landslides capable of flattening cities such as Seattle and devastating local infrastructure. For Earth this phenomenon is nothing new, but the last time it happened, few humans were around to witness it. Several studies have shown that around 10,000 years ago, as the planet came out of the last ice age, vast portions of volcanic summit cones collapsed, leading to enormous landslides. To assess the risk of this happening again, Daniel Tormey of ENTRIX, an environmental consultancy based in Los Angeles, studied a huge landslide that occurred 11,000 years ago on Planchón-Peteroa. He focused on this glaciated volcano in Chile because its altitude and latitude make it likely to feel the effects of climate change before others. “Around one-third of the volcanic cone collapsed,” Tormey says. Ten billion cubic metres of rock crashed down the mountain and smothered 370 square kilometres of land, travelling 95 kilometres in total (Global and Planetary Change, DOI: 10.1016/j.gloplacha.2010.08.003). Studies have suggested that intense rain cannot provide the lubrication needed for this to happen, so Tormey concludes that glacier melt must have been to blame. With global temperatures on a steady rise, Tormey is concerned that history will repeat itself on volcanoes all over the world. He thinks that many volcanoes in temperate zones could be at risk, including in the Ring of Fire – the horseshoe of volcanoes that surrounds the Pacific Ocean (see map). “There are far more human settlements and activities near the slopes of glaciated active volcanoes today than there were 10,000 years ago, so the effects could be catastrophic,” he says. The first volcanoes to go will most likely be in the Andes, where temperatures are rising fastest as a result of global warming. Any movement here could be an early sign of trouble to come elsewhere. David Pyle, a volcanologist at the University of Oxford, agrees. “This is a real risk and a particularly serious hazard along the Andes,” he says. Meanwhile, ongoing studies by Bill McGuire of University College London and Rachel Lowe at the University of Exeter, UK, are showing that non-glaciated volcanoes could also be at greater risk of catastrophic collapse if climate change increases rainfall. “We have found that 39 cities with populations greater than 100,000 are situated within 100 kilometres of a volcano that has collapsed in the past and which may, therefore, be capable of collapsing in the future,” says McGuire. Mount Cook (Aoraki), New Zealand Just after midnight on 14 December 1991, 12 million cubic metres of rock and ice peeled away from the summit of New Zealand’s highest mountain. The landslide travelled 7.5 kilometres and narrowly missed slumbering hikers in an alpine hut. It occurred after an exceptionally warm week, when temperatures were 8.5 °C above average, and reduced the height of the mountain by around 10 metres. 
Mount Dzhimarai-Khokh, Russia More than 100 people were killed on 20 September 2002 when their villages were swept away after part of the peak, in the north Caucasus mountains, collapsed. Over 100 million cubic metres of debris travelled 20 kilometres. Warming permafrost is thought to have been partly to blame. Mount Rosa, Italy Following an unusual spring heatwave across Europe in 2007, the Alpine mountain suffered a spectacular rock avalanche, in which 300,000 cubic metres of rock fell, landing in a dry seasonal lake. Had the lake contained water, the avalanche would have generated a massive outpouring, with catastrophic consequences for the village of Macugnaga downstream.
https://www.newscientist.com/article/mg20827825.100-a-warming-world-could-leave-cities-flattened?full=true
4.03125
Beacon Lesson Plan Library Meet the Five Food Groups Bay District Schools This lesson is designed to invite first grade students to identify the five food groups and the foods within each group as shown on the food pyramid. The student describes a wide variety of classification schemes and patterns related to physical characteristics and sensory attributes, such as rhythm, sound, shapes, colors, numbers, similar objects, similar events. The student classifies food and food combinations according to the Food Guide Pyramid. -Books such as Leedy, Loreen. [The Edible Pyramid: Good Eating Every Day]. Holiday House, 1994, and Sharmat, Mitchell. [Gregory the Terrible Eater]. Scholastic, 1980. -Manipulatives of foods from the food pyramid (See Preparations) -Paper and pencil -Food pictures or empty food packages representing each of the five food groups -Chart paper with the Food Guide Pyramid drawn on it -Student copies of a blank Food Guide Pyramid 1. Research background information on the five food groups and why there are five instead of four. Research key nutrients for each group. Note that nutrient-rich foods are now classified into five food groups: milk, meat, vegetable, fruit, and grain. 2. Gather manipulatives of foods for students to handle. Lakeshore Learning Materials (www.lakeshorelearning.com) has a Food and Nutrition Theme Box (item # LA452) which contains various manipulative food items. 3. Cut out pictures of food from magazines. 4. Acquire magazines from which students can also cut out pictures of food. 5. Prepare a chart with the Food Guide Pyramid for the lesson. 6. Prepare individual copies of the Food Guide Pyramid to distribute to students. 1. Introduce the lesson by brainstorming with students what they know about the five food groups and the Food Guide Pyramid using the K-W-L procedure. Ask students what they (K)now about the five food groups and the food pyramid. Ask students what they (W)ant to learn about the five food groups and the Food Guide Pyramid. Read [The Edible Pyramid: Good Eating Every Day] to the entire class. 2. Explain: “You may have learned that there are four food groups, but there has been a change. Now, there are five food groups.” 3. Introduce pictures of the five food groups, what foods are included in them, and the Food Guide Pyramid. Let the students handle the manipulative food items. 4. Students take turns placing food items in the Food Guide Pyramid. With markers, draw a Food Guide Pyramid on chart paper. Students then choose pictures to place (glue) on the Food Guide Pyramid. 5. Give students paper with a blank Food Guide Pyramid and encourage them to cut out pictures from magazines to place on their own pyramid. 6. Discuss: “What do foods from the milk group have in common? Why don't eggs belong to the milk group? What characteristics do foods from each group have in common?” 7. Conclude the activity by reading one of the suggested books such as [Gregory the Terrible Eater]. Ask students, “What have you (L)earned about the five food groups and the Food Guide Pyramid?” Students record responses in their journals. 1. Students cut 10 pictures of food from magazines and place 8 out of 10 in the correct places within their Food Guide Pyramid. 2. Students categorize and sort foods into the five food groups. 3. Students recite the acronym to the teacher. (See Extensions) 4. Students write in their journals explaining their own Food Guide Pyramid and how they classified foods on it. 
After the lesson, students learn an acronym to help them remember the five food groups.
http://www.beaconlearningcenter.com/lessons/lesson.asp?ID=169
4.125
Technology Helps Autistic Children with Social Skills A new research project suggests virtual worlds can help autistic children develop social skills beyond their anticipated levels. In the study, called the Echoes Project, scientists developed an interactive environment that uses multi-touch screen technology to project scenarios to children. The technology allows researchers to study a child’s reactions to new situations in real time. During sessions in the virtual environment, primary school children experiment with different social scenarios, allowing the researchers to compare their reactions with those they display in real-world situations. “Discussions of the data with teachers suggest a fascinating possibility,” said project leader Kaska Porayska-Pomsta, Ph.D. “Learning environments such as Echoes may allow some children to exceed their potential, behaving and achieving in ways that even teachers who knew them well could not have anticipated.” “A teacher observing a child interacting in such a virtual environment may gain access to a range of behaviors from individual children that would otherwise be difficult or impossible to observe in a classroom,” she added. Early findings from this research show that practice with various scenarios has improved the quality of the interaction for some of the children. Researchers believe the virtual environment and an increased ability to manage their own behavior enable a child to concentrate on following a virtual character’s gaze or to focus on a pointing gesture, thus developing the skills vital for good communication and effective learning. The findings could prove particularly useful in helping children with autism to develop skills they normally find difficult. Porayska-Pomsta said: “Since autistic children have a particular affinity with computers, our research shows it may be possible to use digital technology to help develop their social skills. “The beauty of it is that there are no real-world consequences, so children can afford to experiment with different social scenarios without real-world risks,” she added. “In the longer term, virtual platforms such as the ones developed in the Echoes project could help young children to realize their potential in new and unexpected ways,” Porayska-Pomsta said.
http://psychcentral.com/news/2011/10/24/technology-helps-autistic-children-with-social-skills/30648.html
4.21875
This resource is a set of instructional materials developed to help beginning physics students build a solid understanding of vector algebra. It contains two lecture presentations in PDF format and a companion assessment. It gives an overview of terminology, vector notation, and a variety of methods for solving problems relating to vectors. One of the authors' goals is to help students differentiate between the uses of vectors in mathematics vs. physics. This resource is part of a collection developed by the NSF-funded Mathematics Across the Community College Curriculum (MAC 3). This resource is part of a Physics Front Topical Unit. Topic: Kinematics: The Physics of Motion Unit Title: Vectors This is an exemplary set of PowerPoint materials for teachers to introduce vector basics, including vector addition/subtraction and how to calculate vector components. See Assessments below for a companion unit test. All may be freely downloaded. To read about the underlying pedagogy employed by the authors, go to Reference Material below and click on Bridging the Vector Calculus Gap.
http://www.thephysicsfront.org/items/detail.cfm?ID=8391
4.25
In plate tectonics, a divergent boundary or divergent plate boundary (also known as a constructive boundary or an extensional boundary) is a linear feature that exists between two tectonic plates that are moving away from each other. Divergent boundaries within continents initially produce rifts which eventually become rift valleys. Most active divergent plate boundaries occur between oceanic plates and exist as mid-oceanic ridges. Divergent boundaries also form volcanic islands, which occur when the plates move apart to produce gaps that molten lava rises to fill. Current research indicates that complex convection within the Earth's mantle allows material to rise to the base of the lithosphere beneath each divergent plate boundary. This supplies the area with vast amounts of heat and a reduction in pressure that melts rock from the asthenosphere (or upper mantle) beneath the rift area, forming large flood basalt or lava flows. Each eruption occurs in only a part of the plate boundary at any one time, but when it does occur, it fills in the opening gap as the two opposing plates move away from each other. Over millions of years, tectonic plates may move many hundreds of kilometers away from both sides of a divergent plate boundary. Because of this, rocks closest to a boundary are younger than rocks further away on the same plate. At divergent boundaries, two plates move apart from each other and the space that this creates is filled with new crustal material sourced from molten magma that forms below. The origin of new divergent boundaries at triple junctions is sometimes thought to be associated with the phenomenon known as hotspots. Here, exceedingly large convective cells bring very large quantities of hot asthenospheric material near the surface and the kinetic energy is thought to be sufficient to break apart the lithosphere. The hot spot which may have initiated the Mid-Atlantic Ridge system currently underlies Iceland, which is widening at a rate of a few centimeters per year. Divergent boundaries are typified in the oceanic lithosphere by the rifts of the oceanic ridge system, including the Mid-Atlantic Ridge and the East Pacific Rise, and in the continental lithosphere by rift valleys such as the famous East African Great Rift Valley. Divergent boundaries can create massive fault zones in the oceanic ridge system. Spreading is generally not uniform, so where spreading rates of adjacent ridge blocks are different, massive transform faults occur. These are the fracture zones, many bearing names, that are a major source of submarine earthquakes. A sea floor map will show a rather strange pattern of blocky structures that are separated by linear features perpendicular to the ridge axis. If one views the sea floor between the fracture zones as conveyor belts carrying the ridge on each side of the rift away from the spreading center, the action becomes clear. The crests of old ridges, parallel to the current spreading center, will be older and deeper (from thermal contraction and subsidence). It is at mid-ocean ridges that one of the key pieces of evidence forcing acceptance of the seafloor spreading hypothesis was found. Airborne geomagnetic surveys showed a strange pattern of symmetrical magnetic reversals on opposite sides of ridge centers. The pattern was far too regular to be coincidental as the widths of the opposing bands were too closely matched. 
Scientists had been studying polar reversals and the link was made by Lawrence W. Morley, Frederick John Vine and Drummond Hoyle Matthews in the Morley–Vine–Matthews hypothesis. The magnetic banding directly corresponds with the Earth's polar reversals. This was confirmed by measuring the ages of the rocks within each band. The banding furnishes a map in time and space of both spreading rate and polar reversals. - Mid-Atlantic Ridge - Red Sea Rift - Baikal Rift Zone - East African Rift - East Pacific Rise - Gakkel Ridge - Galapagos Rise - Explorer Ridge - Juan de Fuca Ridge - Pacific-Antarctic Ridge - West Antarctic Rift - Great Rift Valley
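The "map in time and space" that the banding furnishes can be made concrete with a quick calculation: dividing a stripe's width by the duration of the polarity interval that produced it gives the spreading rate on that side of the ridge. The sketch below is a hypothetical illustration; the stripe width and reversal ages are invented, not measurements from the article.

```python
# Hedged sketch: inferring a half-spreading rate from a magnetic stripe.
# The stripe width and reversal ages below are invented for illustration only.
def half_spreading_rate_km_per_myr(stripe_width_km, younger_age_myr, older_age_myr):
    """Rate at which one plate moved away from the ridge while the stripe formed."""
    return stripe_width_km / (older_age_myr - younger_age_myr)

# Suppose a single-polarity stripe 25 km wide formed between 1.8 and 0.8 Myr ago:
rate = half_spreading_rate_km_per_myr(25.0, 0.8, 1.8)
print(f"{rate:.0f} km/Myr = {rate / 10:.1f} cm/yr")  # 25 km/Myr = 2.5 cm/yr
```

Matching the symmetric stripes on the two sides of the ridge, as the surveys described above did, doubles this figure to the full spreading rate.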
https://en.wikipedia.org/wiki/Divergent_boundary
4.15625
The Little Red Hen is a classic story for nearly all adults, and many children. Here it is retold and enhanced in order to provide a framework for illustrating and reviewing the concepts of productive resources and incentives. After reading the story, students will categorize resources into land, labor, capital and entrepreneurship and be able to identify what future incentives the dog, the cat and the mouse will have to help the little hen in her work. Students will have the opportunity to explore bread making. In this lesson you will be taking on the role of an investigative reporter to solve the Amazing Farmer Mystery. The goal will be to use seven clues provided throughout the lesson in order to figure out how so few farmers can produce enough food and fiber for the nation. In World War II pennies were made of steel and zinc instead of copper and women were working at jobs that men had always been hired to do. Why? Because during wartime, scarcity forces many things to change! The following lessons come from the Council for Economic Education's library of publications. Clicking the publication title or image will take you to the Council for Economic Education Store for more detailed information. Designed primarily for elementary and middle school students, each of the 15 lessons in this guide introduces an economics concept through activities with modeling clay. 10 out of 17 lessons from this publication relate to this EconEdLink lesson. This publication contains complete instructions for teaching the lessons in Choices and Changes, Grades 5-6. The Choices and Changes series is designed to help students understand how the U.S. economy works and their roles in the economy as consumers, savers and workers. 9 out of 15 lessons from this publication relate to this EconEdLink lesson. This publication contains fourteen lessons that use a unique blend of games, simulations, and role playing to illustrate economics in a way elementary students will enjoy. 5 out of 16 lessons from this publication relate to this EconEdLink lesson.
http://www.econedlink.org/economic-standards/EconEdLink-related-publications.php?lid=229
4.0625
Big Picture TV. Video length: 5:04 min.
Notes From Our Reviewers: The CLEAN collection is hand-picked and rigorously reviewed for scientific accuracy and classroom effectiveness.
Teaching Tips
- Use as a resource after teaching about the carbon cycle and greenhouse effect.
- Run the video through once, and then restart it to eliminate the commercial for classroom use.
- The educator may want to use accompanying visual examples of ocean acidification, ice albedo, and water vapor as a greenhouse gas.
About the Science
- A British scientist, who has been involved with IPCC AR4, explains amplifying feedback. The primary focus is on how water vapor functions as a greenhouse gas, but he also cites other examples of climate feedbacks: ice albedo and ocean acidification.
- Passed initial science review; expert science review pending.
About the Pedagogy
- Good and accessible explanation of what a feedback mechanism is/does and the difference between positive and negative feedback.
- This is strictly an interview with a scientist; there are no visuals to illustrate.
Next Generation Science Standards (High School: 6 Disciplinary Core Ideas, 2 Cross Cutting Concepts)
Disciplinary Core Ideas:
HS-ESS1.B2: Cyclical changes in the shape of Earth’s orbit around the sun, together with changes in the tilt of the planet’s axis of rotation, both occurring over hundreds of thousands of years, have altered the intensity and distribution of sunlight falling on the earth. These phenomena cause a cycle of ice ages and other gradual climate changes.
HS-ESS2.A1: Earth’s systems, being dynamic and interacting, cause feedback effects that can increase or decrease the original changes.
HS-ESS2.D1: The foundation for Earth’s global climate systems is the electromagnetic radiation from the sun, as well as its reflection, absorption, storage, and redistribution among the atmosphere, ocean, and land systems, and this energy’s re-radiation into space.
HS-ESS2.D2: Gradual atmospheric changes were due to plants and other organisms that captured carbon dioxide and released oxygen.
HS-ESS2.D3: Changes in the atmosphere due to human activity have increased carbon dioxide concentrations and thus affect climate.
HS-ESS2.E1: The many dynamic and delicate feedbacks between the biosphere and other Earth systems cause a continual co-evolution of Earth’s surface and the life that exists on it.
Cross Cutting Concepts:
HS-C4.2: When investigating or describing a system, the boundaries and initial conditions of the system need to be defined and their inputs and outputs analyzed and described using models.
HS-C4.3: Models (e.g., physical, mathematical, computer models) can be used to simulate systems and interactions—including energy, matter, and information flows—within and between systems at different scales.
http://cleanet.org/resources/43159.html
4.03125
Artificial Neural Networks/Feed-Forward Networks Feed-forward neural networks are the simplest form of ANN: a feed-forward neural net contains only forward paths. A Multilayer Perceptron (MLP) is an example of a feed-forward neural network. (The original page illustrates this with a figure of a feed-forward network with four hidden layers.) In a feed-forward system, processing elements (PEs) are arranged into distinct layers, with each layer receiving input from the previous layer and outputting to the next layer. There is no feedback. This means that signals from one layer are not transmitted to a previous layer. This can be stated mathematically as $w_{ii} = 0$ for every neuron $i$, and $w_{ij} = 0$ whenever the connection from neuron $j$ to neuron $i$ would run backward or sideways in the layer ordering. Weights of direct feedback paths, from a neuron to itself, are zero. Weights from a neuron to a neuron in a previous layer are also zero. Notice that weights for the forward paths may also be zero depending on the specific network architecture, but they do not need to be. A network without all possible forward paths is known as a sparsely connected network, or a non-fully connected network. The percentage of available connections that are utilized is known as the connectivity of the network.
https://en.wikibooks.org/wiki/Artificial_Neural_Networks/Feed-Forward_Networks
4
The sun flings out solar wind particles in much the same manner as a garden sprinkler throws out water droplets. The artist's drawing of the solar wind flow was provided courtesy of NASA.

The Spiral of the IMF

The solar wind is formed as the Sun's top layer blows off into space, carrying magnetic fields still attached to the Sun. Gusts form in the solar wind associated with violent events on the Sun. Particles appear to flow into space as if they are spiraling out from the Sun, as shown in this figure. The figure shows what is referred to as the "spiral angle" of the IMF (interplanetary magnetic field). For a planet to be affected by a blob of material being ejected by the Sun, the planet must be in the path of the blob.
https://www.windows2universe.org/glossary/IMF_spiral.html
4
|Read below for Marceau's amazing story!| When I ask students to name someone famous and the first reply I hear is "Kim Kardashian," I die just a little bit inside. Students don't seem to have an understanding of, or appreciation for, the lives of great men and women who changed the course of history. But biography picture books can help to remedy that. I finally asked, "And did you include why that quote was so important, considering the person who said it?" Her reply: "Well, I had heard of him, but I didn't really know who he was." Regardless of what some might have us believe (the PARCC assessment comes to mind), historical context does, in fact, matter when examining any piece of text, and history is the product of those who made it. Students therefore need knowledge of heroes of history. The Tween Tribune article "It's Even Too Cold for Polar Bears!", for example, was summed up as follows: After some independent practice with longer articles (requiring even greater ability to discern important facts), we were ready to move on to trade books. You may want to follow along on the assignment guidesheet which you're welcome to download in pdf (or Word) and be sure to grab the blank sheet as well (also available as a Word doc). You'll notice that the instructional steps below differ somewhat from those given to students for their own work. In their notebooks, students jotted down a list of the 5Ws and 1H (Who, What, Where, When, Why, How) and were asked to listen for those facts as I read the book aloud. I read the majority of the book, stopping to monitor understanding and also to ask if any of our facts had been discovered. By story's end we had What: painted pictures that weren't beautiful Where: New York City When: early 1900s Why: to show emotions and power How: showing scenes of everyday city life Students knew that this was coming. What textual evidence backed up what we just stated? We found several sentences which might work, and finally settled on just a snippet of one quote, which we placed into a sentence that included both the author and book: But then I asked, "So what? Why did that matter?" And here's where students begin to see the light. Those people from history who changed the way others think, believe, or act tend to be those worth remembering. In the case of George Bellows, he and other students of Robert Henri went against the traditional belief that the artist's role was to paint what was beautiful. This led us to construct an opposing viewpoint statement to precede the summary sentence we had already drafted: Armed with this model, students jotted down the sentence order in their notebooks as a quick reference: II. 5Ws and 1H III. Textual Evidence I was surprised by students' success with the process. While some, as expected, followed the Bellows model precisely, simply swapping out details as needed, others departed from the model. A couple of students tried switching sentence orders when writing summaries of their second books, while others tried different grammatical structures while maintaining the sentence order we had established. One student, not thrilled when handed Marcel Marceau: Master of Mime, was amazed to learn that this entertainer played a major role in the French Resistance, and led many Jewish children to safety. 
His paragraph, which he knew fell far short of paying homage to this unsung hero, reads: Most surprising to many students was how much they enjoyed reading about people they had never even heard of (many students had already made plans for the next book they wanted to read). The skepticism I witnessed on the first day when distributing books was replaced with enthusiasm by day two of the assignment. And since then, students have been asking to do the assignment again, and many have naturally been begging to read biographies of their own choosing. In my next post I'll share some possible extension activities, as well as some of the more popular titles which students enjoyed.
http://teachwithpicturebooks.blogspot.com/2014/02/heroes-of-history.html
4.25
Voice or voicing is a term used in phonetics and phonology to characterize speech sounds, with sounds described as either voiceless (unvoiced) or voiced. In phonetics, a voiced consonant is a consonant which is pronounced with the vibration of the vocal cords; for example, the sound [z] is a voiced consonant. All consonants may be classified as either voiced or voiceless: in articulating a voiced consonant, the vocal cords are vibrating. One problem that many students face in pronunciation is deciding whether a consonant is voiced or voiceless. Many consonant sounds come in pairs. For example, P and B are produced in the same place in the mouth with the tongue in the same position; the two differ only in voicing. A discovery activity can be used to help learners notice the difference between voiced and unvoiced consonants: begin by asking learners what noise a bee makes. Voiced definition: having a voice of a specified kind (usually used in combination); in English, (b) is a voiced consonant (compare voiceless).
https://www.search.com/reference/Voiced_consonant
4.1875
By: Bob Preville

Every electrical technician knows the difference between DC (Direct Current) and AC (Alternating Current). Every electrical technician also realizes the importance of taking accurate current measurements to protect conductors from exceeding their insulators' ability to withstand heat, or to assure that devices under power work properly. However, does every electrical technician realize that electrical current measurements aren't always what they appear to be?

Direct Current (DC) is straightforward. When we use a multimeter to measure direct current, it is what it is. However, the plot thickens when we are dealing with Alternating Current (AC). AC current travels back and forth down a conductor and can best be described in graphical format. The most common graphical description of AC current is a sine wave. Because the amplitude of the sine wave continuously changes over the wave period (one complete cycle), a current measurement taken at one point in time would not match one taken at another. Therefore, how do we accurately measure AC current?

One method to measure AC current would be to take current measurements at increments across one complete cycle and average them together. This would give us an average value of the current. If the current is a perfect sine wave, mathematically, the average value is always 0.636 times the value of the peak amplitude.

Another method to measure current is based on the current's ability to perform work when applied to a resistive load. The laws of physics tell us that when current passes through a resistive load, it dissipates energy in the form of heat, mechanical motion, radiation or other forms of energy. If the resistive load is a heating element and its resistance stays constant, then the heat produced is determined by the current passing through the load. Therefore, if we measure the heat, we will know the current. Mathematically, the heat produced is proportional to the square of the current applied to a resistance:

(Power or Heat) = (Current)^2 * (Resistance)

If the current is continuously changing, as in AC current, the heat produced is proportional to the average (or mean) of the square of the current applied to a resistance:

(Power or Heat) = Average [ (Current)^2 * (Resistance) ]

Using algebra, the above formula can be rewritten to read:

Current = Square Root [ (Power or Heat) / (Resistance) ]

and this is called the Root Mean Square Current, or RMS Current. For AC currents that are graphically represented by a sine wave, the RMS current will always be 0.707 times the peak current.

With that said, we can calculate current by multiplying peak measurements by 0.707 if the current is a perfect sine wave. However, perfect sine waves are rare in most commercial and industrial applications. This is because resistive loads in commercial applications are not linear, which results in unpredictable or variable current waveforms. In order to get a True RMS measurement, we can measure the heat dissipated by a constant resistive load and perform the above calculation; the result is a True RMS measurement.

Now that we have the technical discussion out of the way, which is the best method to calculate current? Should we 1) measure a current average, 2) multiply current peaks by 0.707 to get an RMS current, or 3) measure the heat from a resistor and calculate a True RMS current value?
Although Global Test Supply sells multimeters that can calculate current using any of the above methods, in my opinion the most accurate way to calculate current is the True RMS method. Average current values often read as much as 40% less than True RMS values, and that could mean the difference between blown circuit breakers, malfunctioning motors, or, worst case, potential fire hazards. True RMS multimeters only cost about 20-30% more than the alternative. How much is an accurate current reading worth to you?
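A quick way to see why the measurement method matters is to compute the readings described above for a sampled waveform. The sketch below is not from the article; the test waveforms and the 1.11 sine-calibrated scale factor that average-responding meters apply (0.707 / 0.636) are standard textbook assumptions.

```python
import numpy as np

t = np.linspace(0.0, 1.0, 10_000, endpoint=False)  # one full cycle

def readings(i):
    """Return (rectified average, average-responding 'RMS', true RMS)."""
    avg = np.mean(np.abs(i))             # what an averaging meter senses
    avg_rms = 1.11 * avg                 # sine-calibrated scale factor
    true_rms = np.sqrt(np.mean(i ** 2))  # square root of the mean of the square
    return avg, avg_rms, true_rms

sine = np.sin(2 * np.pi * t)             # perfect sine wave, peak = 1.0
distorted = np.sign(sine) * sine ** 2    # a non-sinusoidal load current

print(readings(sine))       # ~ (0.636, 0.707, 0.707): the methods agree
print(readings(distorted))  # the average-responding estimate now misreads
```

For the pure sine the 0.636 average and 0.707 RMS figures quoted in the article drop straight out; for the distorted wave the average-responding estimate and the true RMS differ by roughly 10%, which is the article's point.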
http://www.streetdirectory.com/travel_guide/15059/gadgets/how_accurate_is_your_multimeter_and_what_is_true_rms.html
4
Nordic Bronze Age

The Nordic Bronze Age (also Northern Bronze Age) is a period of Scandinavian prehistory from c. 1700–500 BC. The Bronze Age culture of this era succeeded the Late Neolithic Stone Age culture and was followed by the Pre-Roman Iron Age. The archaeological legacy of the Nordic Bronze Age culture is rich, but the ethnic and linguistic affinities of it are unknown, in the absence of written sources. Some scholars also include sites in what is now northern Germany, Pomerania and Estonia in the Baltic region as part of its cultural sphere.

Even though Scandinavians joined the European Bronze Age cultures fairly late through trade, Scandinavian sites present a rich and well-preserved legacy of bronze and gold objects. These valuable metals were all imported, primarily from Central Europe, but they were often crafted locally, and the craftsmanship and metallurgy of the Nordic Bronze Age was of a high standard. The archaeological legacy also comprises locally crafted wool and wooden objects, and there are many tumuli and rock carving sites from this period, but no written language existed in the Nordic countries during the Bronze Age. The rock carvings have been dated through comparison with depicted artifacts, for example bronze axes and swords. There are also numerous Nordic Stone Age rock carvings; those of northern Scandinavia mostly portray elk. Thousands of rock carvings from this period depict ships, and the large stone burial monuments known as stone ships suggest that ships and seafaring played an important role in the culture at large. The depicted ships most likely represent sewn plank-built canoes used for warfare, fishing and trade. These ship types may have their origin as far back as the Neolithic period, and they continue into the Pre-Roman Iron Age, as exemplified by the Hjortspring boat.

Oscar Montelius, who coined the term used for the period, divided it into six distinct sub-periods in his piece Om tidsbestämning inom bronsåldern med särskilt avseende på Skandinavien ("On Bronze Age dating with particular focus on Scandinavia"), published in 1885, a division that is still in wide use. His absolute chronology has held up well against radiocarbon dating, with the exception that the period's start is closer to 1700 BC than to the 1800 BC that Montelius suggested. For Central Europe a different system developed by Paul Reinecke is commonly used, as each area has its own artifact types and archaeological periods. A broader subdivision is the Early Bronze Age, between 1700 BC and 1100 BC, and the Late Bronze Age, 1100 BC to 550 BC. These divisions and periods are followed by the Pre-Roman Iron Age.

The Nordic Bronze Age was characterized first by a warm climate that began with a climate change around 2700 BC (comparable to that of present-day central Germany and northern France). The warm climate permitted a relatively dense population and good farming; for example, grapes were grown in Scandinavia at this time. A wetter, colder climate prevailed after a minor change in climate between 850 BC and 760 BC, and a more radical one around 650 BC.

Religion and cult

There is no coherent knowledge about the Nordic Bronze Age religion: its pantheon, its world view, or how it was practised. Written sources are lacking, but archaeological finds draw a vague and fragmented picture of the religious practices and the nature of the religion of this period. Only some possible sects and only certain possible tribes are known.
Some of the best clues come from tumuli, elaborate artifacts, votive offerings and rock carvings scattered across Northern Europe. Many finds indicate a strong sun-worshipping cult in the Nordic Bronze Age, and various animals have been associated with the sun's movement across the sky, including horses, birds, snakes and marine creatures (see also Sól). A female or mother goddess is also believed to have been widely worshipped (see Nerthus). Hieros gamos rites may have been common, and there have been several finds of fertility symbols. A pair of twin gods are believed to have been worshipped, and this is reflected in a duality in all things sacred: where sacrificial artifacts have been buried, they are often found in pairs. Sacrifices (animals, weapons, jewellery and humans) often had a strong connection to bodies of water. Boglands, ponds, streams or lakes were often used as ceremonial and holy places for sacrifices, and many artifacts have been found in such locations. Ritual instruments such as bronze lurs have been uncovered, especially in the region of Denmark and western Sweden. Lur horns are also depicted in several rock carvings and are believed to have been used in ceremonies.

Bronze Age rock carvings may contain some of the earliest depictions of well-known gods from the later Norse mythology. A common figure in these rock carvings is that of a male figure carrying what appears to be an axe or hammer; this may have been an early representation of Thor. Other male figures are shown holding a spear; whether this is a representation of Odin or Týr is not known. It is possible the figure may have been a representation of Týr, as one example of a Bronze Age rock carving appears to show a figure missing a hand. A figure holding a bow may be an early representation of Ullr. Or it is possible that these figures were not gods at all, but men brandishing the weapons of their culture. Remnants of the Bronze Age religion and mythology are believed to exist in Germanic mythology and Norse mythology (e.g., Skinfaxi and Hrímfaxi, and Nerthus), and the religion is believed to be itself descended from an older Indo-European prototype.

(Image caption: Composite image of Nordic Bronze Age rock carvings. The carvings have been painted in recent times; it is unknown whether they were painted originally.)

See also:
- Bronze Age Europe
- Bronze Age sword
- Egtved Girl
- The King's Grave
- Stone ships
- Pomeranian culture

References:
- Ling, J. (2008). Elevated Rock Art. GOTARC Serie B, Gothenburg Archaeological Thesis 49. Department of Archaeology and Ancient History, University of Gothenburg, Göteborg. ISBN 978-91-85245-34-5.
- Dabrowski, J. (1989). Nordische Kreis und Kulturen Polnischer Gebiete. In Die Bronzezeit im Ostseegebiet. Ein Rapport der Kgl. Schwedischen Akademie der Literatur Geschichte und Altertumsforschung über das Julita-Symposium 1986. Ed. Ambrosiani, B. Kungl. Vitterhets Historie och Antikvitets Akademien, Konferenser 22. Stockholm.
- Davidson, H. R. Ellis, and Gelling, Peter. The Chariot of the Sun and Other Rites and Symbols of the Northern European Bronze Age.
- Demakopoulou, K. (ed.) (1999). Gods and Heroes of the European Bronze Age. Published on the occasion of the exhibition "Gods and Heroes of the Bronze Age. Europe at the Time of Ulysses", December 19, 1998 – April 5, 1999, National Museum of Denmark, Copenhagen. London. ISBN 0-500-01915-0.
- Demougeot, E. La formation de l'Europe et les invasions barbares. Paris: Éditions Montaigne, 1969–1974.
- Kaliff, Anders (2001). Gothic Connections. Contacts between eastern Scandinavia and the southern Baltic coast 1000 BC – 500 AD.
- Montelius, Oscar (1885). Om tidsbestämning inom bronsåldern med särskilt avseende på Skandinavien.
- Musset, L. Les invasions: les vagues germaniques. Paris: Presses universitaires de France, 1965.
https://en.wikipedia.org/wiki/Nordic_Bronze_Age
4.3125
Trig without Tears Part 4: Trig without Tears Part 4: Summary: The six trig functions were originally defined for acute angles in triangles, but now we define them for any angle (or any number). If you want any of the six function values for an angle that’s not between 0 and 90° (π/2), you just find the function value for the reference angle that is within that interval, and then possibly apply a minus sign. So far we have defined the six trig functions as ratios of sides of a right triangle. In a right triangle, the other two angles must be less than 90°, as suggested by the picture at left. Suppose you draw the triangle in a circle this way, with angle A at the origin and the circle’s radius equal to the hypotenuse of the triangle. The hypotenuse ends at the point on the circle with coordinates (x,y), where x and y are the lengths of the two legs of the triangle. Then using the standard definitions of the trig functions, you have sin A = opposite/hypotenuse = y/r cos A = adjacent/hypotenuse = x/r This is the key to extending the trig functions to any angle. The trig functions had their roots in measuring sides of triangles, and chords of a circle (which is practically the same thing). If we think about an angle in a circle, we can extend the trig functions to work for any angle. In the diagram, the general angle A is drawn in standard position, just as we did above for an acute angle. Just as before, its vertex is at the origin and its initial side lies along the positive x axis. The point where the terminal side of the angle cuts the circle is labeled (x,y). (This particular angle happens to be between 90° and 180° (π/2 and π), and we say it lies in Quadrant II. But you could draw a similar diagram for any angle, even a negative angle or one >360°.) Now let’s define sine and cosine of angle A, in terms of the coordinates (x,y) and the radius r of the circle: (21) sin A = y/r, cos A = x/r This is nothing new. As you saw above when A was in Quadrant I, this is exactly the definition you already know from equation 1: sin A = opposite/hypotenuse, cos A = adjacent/hypotenuse. We’re just extending it to work for any angle. The other function definitions don’t change at all. From equation 3 we still have tan A = sin A / cos A which means that tan A = y/x and the other three functions are still defined as reciprocals (equation 5). Once again, there’s nothing new here: we’ve just extended the original definitions to a larger domain. So why go through this? Well, for openers, not every triangle is an acute triangle. Some have an angle greater than 90°. Even in down-to-earth physical triangles, you’ll have to be concerned with functions of angles greater than 90°. Beyond that, it turns out that all kinds of physical processes vary in terms of sines and cosines as functions of time: height of the tide; length of the day over the course of a year; vibrations of a spring, or of atoms, or of electrons in atoms; voltage and current in an AC circuit; pressure of sound waves, Nearly every periodic process can be described in terms of sines and cosines. And that leads to a subtle shift of emphasis. You started out thinking of trig functions of angles, but really the domain of trig functions is all real numbers, just like most other functions. How can this be? Well, when you think of an “angle” of so-and-so many radians, actually that’s just a pure number. For instance, 30°=π/6. We customarily say “radians” just to distinguish from degrees, but really π/6 is a pure number. 
When you take sin(π/6), you're actually evaluating the function sin(x) at x = π/6 (about 0.52), even though traditionally you're taught to think of π/6 as an angle. We won't get too far into that in these pages, but here's an example. If the average water depth is 8 ft in a certain harbor, and the tide varies by ±3 ft, then the height at time t is given by a function that resembles y = 8 + 3 cos(0.52t). (It's actually more complicated, because high tides don't come at the same time every day, but that's the idea.)

Coming back from philosophy to the nitty-gritty of computation, how do we find the value of a function when the angle (or number) is outside the range 0 to 90° (0 to π/2)? The key is to define a reference angle. Here's the same picture of angle A again, but with its reference angle added. With angle A in standard position, the reference angle is the acute angle between the terminal side of A and the positive or negative x axis. In this case, angle A is in Q II, so the reference angle is 180°−A (π−A). Why? Because the two angles together equal 180° (π).

What good does the reference angle do you? Simply this: the six function values for any angle equal the function values for its reference angle, give or take a minus sign. That's an incredibly powerful statement, if you think about it. In the drawing, A is about 150° and the reference angle is therefore about 30°. Let's say they're exactly 150° and 30°, just for discussion. Then sine, cosine, tangent, cotangent, secant, and cosecant of 150° are equal to those same functions of 30°, give or take a minus sign.

What's this "give or take" business? That's what the next section is about.

Remember the extended definitions from equation 21:

sin A = y/r, cos A = x/r

The radius r is always taken as positive, and therefore the signs of sine and cosine are the same as the signs of y and x. But you know which quadrants have positive or negative y and x, so you know for which angles (or numbers) the sine and cosine are positive or negative. And since the other functions are defined in terms of the sine and cosine, you also know where they are positive or negative.

Spend a few minutes thinking about it, and draw some sketches. For instance, is cos 300° positive or negative? Answer: 300° is in Q IV, which is in the right-hand half of the circle. Therefore x is positive, and the cosine must be positive as well. The reference angle is 60° (draw it!), so cos 300° equals cos 60° and not −cos 60°. You can check your thinking against the chart that follows. Whatever you do, don't memorize the chart! Its purpose is to show you how to reason out the signs of the function values whenever you need them, not to make you waste storage space in your brain.

|Signs of Function Values|
Q I (0 to 90°; 0 to π/2): x and y are both positive, so all six function values are positive.
Q II (90 to 180°; π/2 to π): y is positive and x is negative, so sine and cosecant are positive and the other four functions are negative.
Q III (180 to 270°; π to 3π/2): x and y are both negative, so tangent and cotangent are positive and the other four functions are negative.
Q IV (270 to 360°; 3π/2 to 2π): x is positive and y is negative, so cosine and secant are positive and the other four functions are negative.

What about other angles? Well, 420° = 360°+60°, and therefore 420° ends in the same position in the circle as 60°—it's just going once around the circle and then an additional 60°. So 420° is in Q I, just like 60°. You can analyze negative angles the same way. Take −45°. That occupies the same place on the circle as +315° (360°−45°). −45° is in Q IV. As you've seen, for any function you get the numeric value by considering the reference angle and the positive or negative sign by looking at where the angle is.

Example: What's cos 240°? Solution: Draw the angle and see that the reference angle is 60°; remember that the reference angle always goes to the x axis, even if the y axis is closer.
cos 60° = ½, and therefore cos 240° will be ½, give or take a minus sign. The angle is in Q III, where x is negative, and therefore cos 240° is negative. Answer: cos 240° = −½. Example: What’s tan(−225°)? Solution: Draw the angle and find the reference angle of 45°. tan 45° = 1. But −225° is in Q II, where x is negative and y is positive; therefore y/x is negative. Answer: tan(−225°) = −1. The techniques we worked out above can be generalized into a set of identities. For instance, if two angles are supplements then you can write one as A and the other as 180°−A. You know that one will be in Q I and the other in Q II, and you also know that one will be the reference angle of the other. Therefore you know at once that the sines of the two angles will be equal, and the cosines of the two will be numerically equal but have opposite signs. This diagram may help: Here you see a unit circle (r = 1) with four identical triangles. Their angles A are at the origin, arranged so that they’re mirror images of each other, and their hypotenuses form radii of the unit circle. Look at the triangle in Quadrant I. Since its hypotenuse is 1, its other two sides are cos A and sin A. The other three triangles are the same size as the first so their sides must be the same length as the sides of the first triangle. But you can also look at the other three radii as belonging to angles 180°−A in Quadrant II, 180°+A in Quadrant III, and −A or 360°−A in Quadrant IV. All the others have a reference angle equal to A. From the symmetry, you can immediately see things like sin(180°+A) = −sin A and cos(−A) = cos A. The relations are summarized below. Don’t memorize them! Just draw a diagram whenever you need them—it’s easiest if you use a hypotenuse of 1. Soon you’ll find that you can quickly visualize the triangles in your mind and you won’t even need to draw a diagram. The identities for tangent are easy to derive: just divide sine by cosine as usual. |sin(180°−A) = sin A sin(π−A) = sin A |cos(180°−A) = −cos A cos(π−A) = −cos A |tan(180°−A) = −tan A tan(π−A) = −tan A |sin(180°+A) = −sin A sin(π+A) = −sin A |cos(180°+A) = −cos A cos(π+A) = −cos A |tan(180°+A) = tan A tan(π+A) = tan A |sin(−A) = −sin A||cos(−A) = cos A||tan(−A) = −tan A| The formulas for negative angles of the other functions drop right out of the definitions equation 3 and equation 5, since you already know the formulas equation 22 for sine and cosine of negative angles. For instance, csc(−A) = 1 / sin(−A) = 1 / −sin A = −1 / sin A = −csc A. (23) cot(−A) = −cot A sec(−A) = sec A csc(−A) = −csc A You can reason out things like whether sec(180°−A) equals sec A or −sec A: just apply the definition and use what you already know about cos(180°−A). You should be able to see that 360° brings you all the way around the circle. That means that an angle of 360°+A or 2π+A is the same as angle A. Therefore the function values are unchanged when you add 360° or a multiple of 360° (or 2π or a multiple) to the angle. Also, if you move in the opposite direction for angle A, that’s the same angle as 360°−A or 2π−A, so the function values of −A and 360°−A (or 2π−A) are the same. For this reason we say that sine and cosine are periodic functions with a period of 360° or 2π. Their values repeat over and over again. Of course secant and cosecant, being reciprocals of cosine and sine, must have the same period. What about tangent and cotangent? They are periodic too, but their period is 180° or π: they repeat twice as fast as the others. 
You can see this from equation 22: tan(180°+A) = tan A says that the function values repeat every 180°.
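The reference-angle procedure in this section is mechanical enough to express in a few lines of code. The sketch below is mine, not the article's; the function names are arbitrary, and it handles only cosine, using the quadrant reasoning described above.

```python
import math

def reference_angle(deg):
    # Reduce to one trip around the circle, then fold into 0..90 degrees.
    deg = deg % 360.0
    if deg <= 90:
        return deg           # Q I: the angle is its own reference angle
    if deg <= 180:
        return 180 - deg     # Q II: reference angle is 180 - A
    if deg <= 270:
        return deg - 180     # Q III
    return 360 - deg         # Q IV

def cos_via_reference(deg):
    ref = reference_angle(deg)
    a = deg % 360.0
    sign = 1 if (a <= 90 or a >= 270) else -1   # x > 0 only in Q I and Q IV
    return sign * math.cos(math.radians(ref))

# Check against the worked examples: cos 240 = -1/2, cos 300 = +1/2,
# and the periodicity examples -45 and 420.
for a in (240, 300, -45, 420):
    assert math.isclose(cos_via_reference(a), math.cos(math.radians(a)), abs_tol=1e-12)
    print(a, round(cos_via_reference(a), 4))
```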
http://brownmath.com/twt/refangle.htm
4.21875
Arrays Teacher Resources

Find Arrays educational ideas and activities.

Make Multiplication and Division Facts From Arrays #1
How can an array be represented by a number sentence? Young mathematicians work to answer this question, writing two division and two multiplication equations for each given array. With answers provided, this is a quick practice... (3rd - 5th, Math)

Find Area Using Multiplication in Real World Problems
Learning math provides people with the tools to solve life's everyday problems. Show young learners how to apply their understanding of area in real-world contexts with the second video in this series. Multiple story problems are... (3 mins, 2nd - 4th, Math, CCSS: Designed)
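As a concrete illustration of the first resource's task (turning an array into number sentences), here is a small sketch; it is not part of the listed lesson, and the output format is my own choice.

```python
def number_sentences(rows, cols):
    """Print a rows-by-cols array and its two multiplication and two division facts."""
    total = rows * cols
    for _ in range(rows):          # draw the array itself
        print("* " * cols)
    print(f"{rows} x {cols} = {total}")
    print(f"{cols} x {rows} = {total}")
    print(f"{total} / {rows} = {cols}")
    print(f"{total} / {cols} = {rows}")

number_sentences(3, 4)   # a 3-by-4 array: 3 x 4 = 12, 12 / 3 = 4, and so on
```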
http://www.lessonplanet.com/lesson-plans/arrays/4
4.03125
Definition of Iron

Iron: An essential mineral. Iron is necessary for the transport of oxygen (via hemoglobin in red blood cells) and for oxidation by cells (via cytochrome). Deficiency of iron is a common cause of anemia. Food sources of iron include meat, poultry, eggs, vegetables and cereals (especially those fortified with iron). According to the National Academy of Sciences, the Recommended Dietary Allowance of iron for women ages 19 to 50 is 18 milligrams per day, and for men ages 19 and up, 8 milligrams per day. Iron overload can damage the heart, liver, gonads and other organs. Iron overload is a particular risk for people who have certain genetic conditions (hemochromatosis), sometimes without knowing it, and also for people receiving recurrent blood transfusions. Iron supplements meant for adults (such as pregnant women) are a major cause of poisoning in children.

Source: MedTerms™ Medical Dictionary. Last Editorial Review: 10/30/2013
http://www.emedicinehealth.com/script/main/art.asp?articlekey=4046
4.09375
The upper class in modern societies is the social class composed of the wealthiest members of society, who also wield the greatest political power. According to this view, the upper class is generally contained within the wealthiest 1-2% of the population, and is distinguished by immense wealth (in the form of estates) which is passed on from generation to generation. In the American population, the wealthiest one percent holds thirty-four percent of the cumulative national wealth.

Because the upper classes of a society may no longer rule the society in which they are living, they are often referred to as the old upper classes, and they are often culturally distinct from the newly rich middle classes that tend to dominate public life in modern social democracies. According to the view held by the traditional upper classes, no amount of individual wealth or fame would make a person from an undistinguished background into a member of the upper class: one must be born into a family of that class and raised in a particular manner so as to understand and share upper-class values, traditions, and cultural norms. The term is often used in conjunction with terms like "upper-middle class," "middle class," and "working class" as part of a model of social stratification.

Historically in some cultures, members of an upper class often did not have to work for a living, as they were supported by earned or inherited investments (often real estate), although members of the upper class may have had less actual money than merchants. Upper-class status commonly derived from the social position of one's family and not from one's own achievements or wealth. Much of the population that composed the upper class consisted of aristocrats, ruling families, titled people, and religious hierarchs. These people were usually born into their status, and historically there was not much movement across class boundaries. This is to say that it was much harder for an individual to move up in class simply because of the structure of society.

In many countries the term "upper class" was intimately associated with hereditary land ownership. Political power was often in the hands of the landowners in many pre-industrial societies, despite there being no legal barriers to land ownership for other social classes. Upper-class landowners in Europe were often also members of the titled nobility, though not necessarily: the prevalence of titles of nobility varied widely from country to country. Some upper classes were almost entirely untitled; for example, the Szlachta of the Polish-Lithuanian Commonwealth.

In England, Wales, Scotland, and Ireland, the "upper class" traditionally comprised the landed gentry and the aristocracy of noble families with hereditary titles. The vast majority of post-medieval aristocratic families originated in the merchant class and were ennobled between the 14th and 19th centuries while intermarrying with the old nobility and gentry. Since the Second World War, the term has come to encompass rich and powerful members of the managerial and professional classes as well. Members of the English gentry organized the colonization of Virginia and New England and ruled these colonies for generations, forming the foundation of the American upper class or East Coast elite.

See main article: American upper class.
In the United States the upper class, as distinguished from the merely rich, is often considered to consist of those families that have for many generations enjoyed top social status based on their leadership in society and their distinctive culture, derived from their upper-class ancestors in the colonial gentry. In this respect the US differs little from countries such as the UK, where membership of the 'upper class' is also dependent on other factors. In the United Kingdom it has been said that class is relative to where you have come from, similar to the United States, where class is defined more by who one is than by how much one has; that is, in the UK and the US people are born into the upper class. The American upper class is estimated to constitute less than 1% of the population. By self-identification, according to 2001-2012 Gallup Poll data, 98% of Americans identify with the five other class terms used, with 48-50% identifying as "middle class."

The main distinguishing feature of the upper class is its ability to derive enormous incomes from wealth through techniques such as money management and investing, rather than engaging in wage-labor or salaried employment. Successful entrepreneurs, CEOs, politicians, investment bankers, venture capitalists, stockbrokers, heirs to fortunes, some lawyers, top-flight physicians, and celebrities are considered members of this class by contemporary sociologists, such as James Henslin or Dennis Gilbert. There may be prestige differences between different upper-class households. An A-list actor, for example, might not be accorded as much prestige as a former U.S. President, yet all members of this class are so influential and wealthy as to be considered members of the upper class.

At the pinnacle of U.S. wealth, 2004 saw a dramatic increase in the number of billionaires. According to Forbes Magazine, there are now 374 U.S. billionaires. The growth in billionaires took a dramatic leap since the early 1980s, when the average net worth of the individuals on the Forbes 400 list was $400 million. Today, the average net worth is $2.8 billion. The Walton family of Wal-Mart now has 771,287 times more wealth than the median U.S. household (Collins and Yeskel 322). Since the 1970s income inequality in the United States has been increasing, with the top 1% experiencing significantly larger gains in income than the rest of society. Alan Greenspan, former chair of the Federal Reserve, sees it as a problem for society, calling it a "very disturbing trend."

According to the book Who Rules America?, by William Domhoff, the distribution of wealth in America is the primary evidence of the influence of the upper class. The top 1% of Americans own around 34% of the wealth in the U.S., while the bottom 80% own only approximately 16% of the wealth. This large disparity displays the unequal distribution of wealth in America in absolute terms.
http://everything.explained.today/Upper_class/
4.15625
The seasons arise because the Earth (white, green and blue striped sphere) is tilted on its axis (yellow pole through Earth) and this tilt is maintained throughout the Earth's orbit (shown in purple) around the Sun (yellow sphere in the centre). Consequently, the northern and southern hemispheres receive different amounts of sunlight throughout the year.

At the start of the animation (which is viewed from slightly north of the plane of the Earth's orbit around the Sun) the Earth is tilted so that the northern hemisphere receives most light (you can see the letter "N" above the North Pole inclined towards the sun). This position corresponds to the northern mid-summer or summer solstice and the southern mid-winter. At this point, the northern hemisphere experiences its longest day and the southern hemisphere its shortest day. As the animation progresses, the Earth moves in an anti-clockwise direction (as viewed from this vantage) to the equinox at the middle front of the orbit. The equinox is the point of equal day and night (from the Latin for "equal night"). At this point, the tilt of the Earth is directed at right angles to the sun. The Earth continues around to the right of the picture, where the tilt is again maximal with respect to the sun. This time, however, the southern hemisphere is maximally pointed towards the sun (you can see the letter "S" below the South Pole is now inclined towards the sun). This corresponds to the southern mid-summer and northern mid-winter (solstice). At this point the southern hemisphere experiences its longest day and the northern hemisphere its shortest day. As the Earth progresses in its orbit around the sun, it passes through another equinox before completing the circuit at the northern mid-summer. This oscillating level of sunlight is heavily imprinted on nature as temperature and daylength vary.

In this movie, the Earth is divided into coloured bands:
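The geometry described above can be turned into numbers. The sketch below estimates daylight hours from latitude and day of year using the standard sunrise equation; it is an illustration I am adding, not part of the original animation page, and the simple sine fit for solar declination is an assumption (accurate to within a degree or so).

```python
import math

def day_length_hours(latitude_deg, day_of_year):
    """Approximate hours of daylight for a given latitude and day of year."""
    # Solar declination: roughly +23.44 deg at the June solstice and
    # zero at the equinoxes (day ~80 is the March equinox).
    decl = math.radians(23.44) * math.sin(2 * math.pi * (day_of_year - 80) / 365.0)
    lat = math.radians(latitude_deg)
    # Sunrise equation: cos(H) = -tan(lat) * tan(decl), H = half-day arc.
    cos_h = -math.tan(lat) * math.tan(decl)
    cos_h = max(-1.0, min(1.0, cos_h))   # clamp for midnight sun / polar night
    return 2 * math.degrees(math.acos(cos_h)) / 15.0  # 15 deg of arc per hour

for day, label in [(172, "June solstice"), (355, "December solstice"), (80, "March equinox")]:
    print(label + ":", round(day_length_hours(55.0, day), 1), "h at 55 N")
```

At 55° N this prints roughly 17 hours at the June solstice, 7 hours at the December solstice, and 12 hours at the equinox, which is exactly the longest-day/shortest-day/equal-day pattern the animation illustrates.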
http://www.rkm.com.au/ANIMATIONS/animation-seasons.html
4.1875
A program is like a recipe. It contains a list of ingredients (called variables) and a list of directions (called statements) that tell the computer what to do with the variables. The variables can represent numeric data, text, or graphical images.

There are many programming languages -- C, C++, Pascal, BASIC, FORTRAN, COBOL, and LISP are just a few. These are all high-level languages. One can also write programs in low-level languages called assembly languages, although this is more difficult. Low-level languages are closer to the language used by a computer, while high-level languages are closer to human languages.

When you buy software, you normally buy an executable version of a program. This means that the program is already in machine language -- it has already been compiled and assembled and is ready to execute.

(v) To write programs.
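To make the recipe analogy concrete, here is a sketch of a trivial program: the first lines are the "ingredients" (variables) and the rest are the "directions" (statements). The example is mine rather than part of the definition, and Python is just one convenient high-level language for it.

```python
# Ingredients: variables holding numeric data and text.
servings = 4
cost_per_serving = 2.50
dish = "pancakes"

# Directions: statements telling the computer what to do with the variables.
total_cost = servings * cost_per_serving
print(f"Making {dish} for {servings} people costs ${total_cost:.2f}")
```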
http://www.webopedia.com/TERM/P/program.html
4.1875
Prions, abnormally folded proteins associated with several bizarre human diseases, may hold the key to a major mystery in evolution: how survival skills that require multiple genetic changes arise all at once when each genetic change by itself would be unsuccessful and even harmful. In a study in the September 28, 2000, issue of Nature researchers at the Howard Hughes Institute at the University of Chicago describe a prion-dependent mechanism that seems perfectly suited to solving this dilemma, at least for yeast. It allows yeast to stockpile an arsenal of genetic variation and then release it to express a host of novel characteristics, including the ability to grow well in altered environments. "We found that a heritable genetic element based on protein folding, not encoded in DNA or RNA, allows yeast to acquire many silent changes in their genome and suddenly reveal them," said Susan Lindquist, Ph.D., professor of molecular genetics and cell biology at the University of Chicago, Howard Hughes Investigator and principal author of the study. There are thousands of proteins in every cell and each one has to fold into just the right shape in order to function. In prion diseases, which include mad cow disease and Creutzfeldt-Jakob disease, a normal cell protein, PrP, assumes an abnormal shape. Mis-folded proteins are usually just degraded, but the prion protein causes other PrP proteins to mis-fold, too, creating a protein-folding chain reaction. Thus, they act as infectious agents. As more and more of the proteins fold into the prion shape, they form inactive aggregates which lead to dysfunction and disease. A few years ago geneticists made the startling discovery that yeast, the organism found in bread and beer, has prions, too. Yeast prions are unrelated to the mammalian prions, and don’t harm humans or yeast. They do, however, have the unusual property of mis-folding in the same peculiar way and spreading their change in shape from one protein to another. Mother cells pass these proteins to their daughters, so the change, once it occurs, is inherited from generation to generation. Because yeast prions act much like mammalian prions and are easier to study, scientists hope they will offer clues about how these mis-folding chain reactions get started and how they might be stopped. But the real puzzle is why these things exist in yeast cells in the first place. University of Chicago researchers appear to have found the answer, and it has broad and unexpected implications: the yeast prion seems to play an adaptive role and may greatly influence evolutionary processes. The prion protein they studied is called Sup35. It normally ensures that yeast faithfully translate the genetic code. Specifically, Sup35 recognizes special signals that tell the entire protein production machinery to stop when it is supposed to stop. Sup35 doesn't function in its prion state. As a result, the protein production machinery runs right through the "stop signs." This means that usually silent regions of the genetic code are suddenly expressed. Because these regions are normally not expressed, they don't face selective pressures that prevent mutations from accumulating. The prion therefore uncovers, all at once, a wealth of previously hidden genetic mutations and creates a completely new set of growth properties. Suddenly cells change the kind of food they eat, change their resistance to antibiotics and even grow colonies with completely different shapes. 
In some cases the prion may simply cause the protein production machinery to read through the "stop sign" at the end of a normal gene. This would create a protein whose function is altered by the addition of a new tail. In other cases the cell machinery may produce a completely new protein from a mutated gene that is not ordinarily translated because it contains a stop signal. The key to its effect is the stable inheritance of the prion state and the normal state. A spontaneous switch between the two states occurs approximately once in a million generations. Because a yeast colony produces a new generation every two hours, in a short time a colony will produce some members that have switched their state. "It’s an ‘all or nothing’ switch, with the changes immediately inherited by all the progeny," said Lindquist. "But because the cell maintains the ability to switch back, the prion switch allows cells to occupy a new niche without losing their capacity to occupy the old." The researchers exposed seven distinct genetic strains of yeast in their prion and non-prion states to 150 different growth conditions. The prion-positive state had a substantial effect on the growth of the yeast in nearly half of the conditions tested. In more than 25 percent of these cases its effects were positive. The incredible diversity of the advantages conveyed by the prions indicated that each strain had different novel genes turned on in its prion-positive state. This prion switch is conserved in yeast across very distantly related genetic strains. Though the switch may have evolved as an accidental consequence of a shape change in an unimportant functioning part of the Sup35, its conservation suggests an evolutionary advantage. "It may be that the prion switch offers yeast a way to respond to commonly fluctuating environments," said Lindquist. "During its evolution S. cerevisiae (brewers’ yeast) must have met with such erratic environments that it needed to maintain a global mechanism for exploiting genome-wide variation." By providing yeast with a way to respond to fluctuating environments, the prion switch may offer a significant evolutionary advantage. "Though we haven’t shown it yet, selective pressure should operate to ‘fix’ the advantageous genes, which could then be read and translated at all times," said Lindquist. Prion mechanisms could be more common than previously suspected and exert an important influence on the rates and mechanisms of evolutionary change. "We need to expand our understanding of inheritance," said Lindquist. "It involves much more than a certain nucleic acid sequence of DNA." Susan L. Lindquist is the Albert D. Lasker Professor of Medical Sciences, Department of Molecular Genetics & Cell Biology at the University of Chicago and a Howard Hughes Medical Institute Investigator. Her co-author is Heather L. True, a Fellow in the Department of Molecular Genetics & Cell Biology at the University of Chicago. The above post is reprinted from materials provided by University Of Chicago Medical Center. Note: Materials may be edited for content and length. Cite This Page:
https://www.sciencedaily.com/releases/2000/09/000928070638.htm
4.5625
Graphing linear inequalities

When you are graphing inequalities, you graph the ordinary linear function just as we have done before. The difference is that the solution to the inequality is not the drawn line itself but the area of the coordinate plane that satisfies the inequality.

The linear inequality divides the coordinate plane into two halves by a boundary line (the line that corresponds to the function). One side of the boundary line contains all solutions to the inequality. The boundary line is dashed for > and < and solid for ≥ and ≤.

Example: y ≤ 2x − 4

Here you can see that one side is colored grey and the other side is colored white. To determine which side represents y ≤ 2x − 4, test a point. We test the point (3, 0), which is on the grey side:

$$0\leq 2\cdot 3-4$$

The statement is true, so the grey side is the side that represents the inequality y ≤ 2x − 4.

Exercise: Graph the inequality y > 2 − 2x
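The test-point method extends directly to the exercise above. The short sketch below checks which side of the boundary line y = 2 − 2x a few points fall on; it is an added illustration, not part of the original lesson, and the sample points are arbitrary.

```python
def satisfies(x, y):
    # The inequality from the exercise: y > 2 - 2x.
    # It is strict (>), so the boundary line itself would be drawn dashed,
    # and points exactly on the line do not count as solutions.
    return y > 2 - 2 * x

for point in [(0, 0), (3, 0), (0, 5), (1, 0)]:
    side = "solution side" if satisfies(*point) else "non-solution side"
    print(point, "->", side)
```

Running this shows (0, 0) fails the test while (3, 0) and (0, 5) pass, so the region to shade is the side of the line containing those passing points; (1, 0) lies exactly on the dashed boundary and is excluded.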
http://www.mathplanet.com/education/pre-algebra/graphing-and-functions/graphing-linear-inequalities
4.0625
You Are Here Activity 4: Story - We Are All One Activity time: 10 minutes Materials for Activity Preparation for Activity - Read through the story a few times. If at all possible, consider telling the story rather than reading it. Practice telling it aloud. Try changing your voice when you are speaking as the ant or the centipede. Description of Activity Before you begin telling the story, "We Are All One," look around the room and make eye contact with each person. Including All Participants There are children for whom it is very difficult to sit still, even when they are paying attention to what is happening around them. This can be frustrating for teachers, as well as for the children who find themselves in situations where they are expected to maintain stillness for prolonged periods of time. If there are children in the group for whom this is the case, consider adopting the use of "fidget objects" as described in the Leader Resources section. Fidget objects can provide a non-disruptive outlet for a child's need to move.
http://www.uua.org/re/tapestry/children/tales/session1/123100.shtml
4
Leveled Literacy Intervention: Overview

The Fountas & Pinnell Leveled Literacy Intervention System (LLI) is a small-group, supplementary literacy intervention designed to help teachers provide powerful, daily, small-group instruction for the lowest achieving students at their grade level. Through systematically designed lessons and original, engaging leveled books, LLI supports learning in both reading and writing, and helps students expand their knowledge of language and words and how they work. The goal of LLI is to bring students to grade-level achievement in reading. Lessons across the seven systems progress from level A (beginning reading in kindergarten) through level Z (representing competencies at the middle and secondary school level) on the F&P Text Level Gradient™. LLI is designed to be used with small groups of students who need intensive support to achieve grade-level competency.

Each level of LLI provides:
- Combination of reading, writing, and phonics/word study.
- Emphasis on teaching for comprehending strategies.
- Explicit attention to genre and to the features of nonfiction and fiction texts.
- Special attention to disciplinary reading, literature inquiry, and writing about reading.
- Specific work on sounds, letters, and words in activities designed to help students notice the details of written language and learn how words "work."
- Close reading to deepen and expand comprehension.
- Explicit teaching of effective and efficient strategies for expanding vocabulary.
- Explicit teaching for fluent and phrased reading.
- Use of writing about reading for the purpose of communicating and learning how to express ideas for a particular purpose and audience using a variety of writing strategies.
- Built-in level-by-level descriptions and competencies from The Continuum of Literacy Learning, PreK-8 (2011) to monitor student progress and guide teaching.
- Communication tools for informing parents about what children are learning and how they can support them at home.
- Technology support for assessment, record keeping, lesson instruction, and home and classroom connections.
- Detailed analysis of the characteristics of text difficulty for each book.

The seven systems:
– Orange System, Levels A-C (Kindergarten), with Orange Booster Packs: Levels D and E
– Green System, Levels A-J (Grade 1), with Green Booster Packs: Level K
– Blue System, Levels C-N (Grade 2)
– Red System, Levels L-Q (Grade 3)
– Gold System, Levels O-T (Grade 4)
– Purple System, Levels R-W (Grade 5)
– Teal System, Levels U-Z (Grades 6-12)

Irene C. Fountas & Gay Su Pinnell
http://www.heinemann.com/fountasandpinnell/lli_overview.aspx
4
Uh oh. There's a chemistry test coming up and your teacher wants you to memorize the entire periodic table of the elements. Great. But luckily, with a bit of time and dedication, you can make recalling the table like recalling the alphabet. It'll be as easy as A, B, C!

1. Print out a copy of the periodic table. This will be your Bible for the next couple of weeks. Wherever you go, it will go with you. It's advisable to print out more than one copy. You can highlight and code one however you want and use the next to start over or check if your devices have worked.
- Print out a copy. Then, especially if you're a visual or kinesthetic learner, copy it down yourself. It's easier to know the ins and outs of something you've done yourself; the chart will seem less foreign if it's made by you.

2. Break down the table into smaller sections to learn it. Most charts are already divided by color and type of element, but if that's not working for you, find your own way. You could go by row, column, atomic weight, or simply easiest to hardest. Find patterns that stick out to you.

3. Tap into your free time. Try learning the periodic table when not much else can be done, e.g. traveling by public transport or just waiting in line for something. If you don't have the chart handy (which you should), go over it in your head, concentrating on the ones that are eluding your memory.
- Stick with it! Learn a few every day and always review the old ones! If you don't review and quiz yourself, you will forget.

1. Create associations. For each element, memorize a short slogan, story or fact that is related to the metal you need to memorize the symbol for. For example, Argentina was named after the metal silver (Argentum -- Ag) because when the Spanish landed there, they thought that the country had lots of silver.
- Sometimes, you might make something funny to remember the element -- for example, " 'EY! YOU! Give me back my GOLD!" could help as well, since the symbol for gold is Au.

2. Go for mnemonic devices. That means you'll be using words to associate with each element. They often come in strings or rhymes. "Lilly's NAna Kills RuBbish CreatureS FRanticly" is an example of a mnemonic device to help remember the alkali metals.
- Ignore the easy ones. You're probably pretty confident that hydrogen is "H." Concentrate on the ones that are giving you grief. Here's an example: Darmstadtium is "Ds," right? If you want a mnemonic for that one, try "DARN! STATS for my game were all lost on my Nintendo 'DS' because the power went out!"

3. Use pictures. Many people with ridiculously good memories use pictures to associate. Why does everybody know that A is for Apple? Our brains associate words with pictures automatically. Assign each element a picture -- anything that makes sense to you.
- Give the items in your house an element. Label them. Let's say your chair is hydrogen. Label it with a hydrogen bomb, picturing it blowing up. Give your TV a mouth -- it's oxygen and it's breathing. When you go to take your test, close your eyes and walk through your house, recalling all your associations.

4. Memorize in song. If Daniel Radcliffe can do it, so can you. You can either create your own or go on the internet and watch the gems that others have created. If you thought one version was a lot, you'll be pleasantly surprised.
- And just for people like you, there are karaoke versions, too, to help you check your progress. Isn't the internet amazing?

Origins and Patterns, etc.

1. Know the Latin names.
All symbols can be regarded as English abbreviations, except for ten that have Latin names and abbreviations and one (Wolfram) whose name can be considered of German origin. Excluding Antimony and Tungsten, these are all important and frequently used elements.
- Knowing the Latin names as well enables you to decipher most Latin names of inorganic chemicals. In most Romance languages (French, Italian, Spanish, etc.), the present-day word is derived from the Latin one.
2. Zero in on the differences. Element symbols tend to have two letters. The full list of element symbols that have only one letter is: H, B, C, N, O, F, P, S, K, V, Y, I, W and U.
- Except for maybe V, W and Y, these are all important elements on this table. The symbols D and T (not in this list) are sometimes used for the heavier isotopes of Hydrogen (H); D2O is heavy water.
3. Know which ones come in threes. Elements may have three letters, though. You are probably not required to learn these. These are all highly radioactive, newly discovered (created) elements that are likely to get new names when the discoveries are confirmed. Professional chemists often don't use these names either; they say "Element 113", for instance. Just for the heck of it, here is the full list: Uut (113), Uup (115), Uus (117) and Uuo (118).
4. Spot the unique ones. The last elements that got their names are Flerovium and Livermorium, 114 and 116, whose names were changed from Ununquadium and Ununhexium respectively.

- Some websites offer quizzes on the periodic table. If you don't have a friend nearby to help, it's a good alternative.
- The noble gases in their correct downward order are important because of their electron configuration.
- Test yourself on which elements are metals and which are non-metals, on which group each element is in, and on what each set of elements is known as, e.g. the noble gases and the alkali metals.
- Repeat the elements in your head, over and over, wherever you are.
- You probably won't be asked about the newer, man-made elements. These are newly discovered, man-made, radioactive and possibly dangerous elements. Elements beyond 112, except 114 and 116, have not even been named, and only exist briefly after their creation.
- Actinoids = Three Planets: Uranus, Neptune, and Pluto. Amy Cured Berkeley, California. Einstein and Fermi Made Noble Laws.
- Lanthanoids = Ladies Can't Put Nickels Properly in Slot-machines. Every Girl Tries Daily, However, Every Time You Look.
- Make your own periodic table song. Most of the periodic table songs end at 10. You can find your very own catchy beat and make your own periodic table song that surpasses 10.
- Be careful not to mix up the elements with the wrong symbols! You have to know them together.
- Remember that the first letter of a symbol is a capital letter and the letter/letters after the capital letter are lowercase.
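One of the tips above is to quiz yourself. As a purely illustrative sketch (this code and its element list are mine, not part of the original article), a few lines of C can drive a symbol-to-name self-quiz over whichever section of the table you are currently learning:

    #include <stdio.h>
    #include <string.h>

    /* Symbol-to-name self-quiz over one section of the table; extend the
       array with whatever section you are currently memorizing. */
    struct element { const char *symbol; const char *name; };

    static const struct element alkali[] = {
        {"Li", "Lithium"},  {"Na", "Sodium"}, {"K",  "Potassium"},
        {"Rb", "Rubidium"}, {"Cs", "Cesium"}, {"Fr", "Francium"},
    };

    int main(void) {
        char answer[64];
        int correct = 0;
        int total = (int)(sizeof alkali / sizeof alkali[0]);
        for (int i = 0; i < total; i++) {
            printf("Which element has the symbol %s? ", alkali[i].symbol);
            if (scanf("%63s", answer) != 1) break;  /* stop on end of input */
            if (strcmp(answer, alkali[i].name) == 0) {
                puts("Correct!");
                correct++;
            } else {
                printf("No -- %s is %s.\n", alkali[i].symbol, alkali[i].name);
            }
        }
        printf("Score: %d/%d\n", correct, total);
        return 0;
    }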
http://www.wikihow.com/Memorise-the-Periodic-Table
4
Electricity is a general term that refers to the presence and flow of electric charge. For example, electricity is present in lightning, electrical outlets, and static electricity. Recognized since ancient Greece, electricity wasn't harnessed by engineers until the late 19th century, when it began to be used to provide power to homes and businesses. Electricity is used today to power anything from light bulbs to computers to cars. Electricity is most commonly generated by electromagnetic induction, by moving a loop of wire or a disc of copper between the poles of a magnet. Electricity can be generated from burning fuels such as natural gas, oil, and coal, from nuclear fission reactions, from windmills or hydroelectric plants, or from solar panels.
http://www.chegg.com/homework-help/definitions/electricity-2?cp=CHEGGFREESHIP
4.0625
A map is oriented when it is made to correspond to the ground it represents. Remember, on a topographic map, north is the top of the map. There are four ways to orient a map:

I. BY COMPASS
With a protractor, draw a magnetic north line anywhere on your map. The declination diagram in the margin of the map will give you the direction and size of the angle between grid north and magnetic north. Do not use the margin diagram itself, as the angles are often exaggerated by the cartographer so that the numerical values for the angle can be inserted. Place the compass on the magnetic north line and turn the map and compass together slowly until the needle points to magnetic north on the map. The adjacent Diagram I shows a compass, oriented to north, placed on top of a line drawn on the map pointing to magnetic north.

II. BY DISTANT OBJECTS
If you know your position on the map and can identify the position of some distant object, turn the map so that it corresponds with the ground. As shown in Diagram II, the map reader uses a church, identified by eye and on the map, to orient the map to the ground.

III. BY WATCH AND SUN (in the Northern Hemisphere)
If Daylight Saving Time is in effect (in summer), first set your watch back to Standard Time. Place the watch flat with the hour hand pointing toward the sun. True south is midway between the hour hand and XII. True north is directly opposite. Note: this method is only approximate. Diagram III shows a watch with the hour hand pointing to "3" and pointing to the sun. South is therefore determined to be midway between "12" and "3".

IV. BY THE STARS
In latitudes below 60° North, the bearing of Polaris is never more than 2° from true north. Diagram IV shows how to locate Polaris using the two stars that form the front of the "dipper" of the Big Dipper. Polaris lies at the end of a line joining these two stars, extended beyond the open end of the dipper to a distance 5 times the distance between the two stars.
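Method III is mechanical enough to write down as a formula. The sketch below is my own illustration (the function name and example values are not from the source): point the hour hand at the sun, then bisect the angle back toward XII.

    #include <stdio.h>
    #include <math.h>

    /* Watch-and-sun method, Northern Hemisphere (illustrative sketch).
       hour: local Standard Time as a decimal, e.g. 15.0 for 3:00 pm.
       When the hour hand points at the sun, true south lies midway
       between the hour hand and XII; the return value is that position
       in degrees clockwise from the XII mark. */
    double south_on_dial(double hour) {
        double hour_angle = fmod(hour, 12.0) * 30.0; /* hour hand moves 30 deg/hour */
        return hour_angle / 2.0;                     /* bisect back toward XII */
    }

    int main(void) {
        /* At 3:00 pm the hour hand is at 90 deg, so south is at 45 deg,
           i.e. midway between "12" and "3", matching Diagram III. */
        printf("South at 3 pm: %.0f deg from XII\n", south_on_dial(15.0));
        return 0;
    }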
http://www.nrcan.gc.ca/earth-sciences/geography/topographic-information/maps/9797
4.125
(adj.) Refers to the transmission of data in just one direction at a time. For example, a walkie-talkie is a half-duplex device because only one party can talk at a time. In contrast, a telephone is a full-duplex device because both parties can talk simultaneously. Duplex modes often are used in reference to network data transmissions. Some modems contain a switch that lets you select between half-duplex and full-duplex modes. The correct choice depends on which program you are using to transmit data through the modem. In half-duplex mode, each character transmitted is immediately displayed on your screen. (For this reason, it is sometimes called local echo -- characters are echoed by the local device.) In full-duplex mode, transmitted data is not displayed on your monitor until it has been received and returned (remotely echoed) by the other device. If you are running a communications program and every character appears twice, it probably means that your modem is in half-duplex mode when it should be in full-duplex mode, and every character is being both locally and remotely echoed.
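The doubled-character symptom can be modeled in a few lines. This is a toy sketch of my own (not from the source): when both the local device and the remote device echo each character, every keystroke appears twice.

    #include <stdio.h>

    /* Toy model of terminal echo (illustrative only). local_echo models a
       modem set to half-duplex mode; remote_echo models the far end
       returning each character it receives. */
    void display(const char *msg, int local_echo, int remote_echo) {
        for (const char *p = msg; *p; p++) {
            if (local_echo)  putchar(*p); /* echoed by the local device */
            if (remote_echo) putchar(*p); /* echoed back by the far end */
        }
        putchar('\n');
    }

    int main(void) {
        display("hi", 0, 1); /* full-duplex, remote echo only: "hi"  */
        display("hi", 1, 0); /* half-duplex, local echo only: "hi"   */
        display("hi", 1, 1); /* mismatched -- both echoes:    "hhii" */
        return 0;
    }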
http://www.webopedia.com/TERM/H/half_duplex.html
4
In this 1984 historical photo from the U.S. space agency, the X-29 Flight Research Aircraft features one of the most unusual designs in aviation history. Demonstrating forward-swept-wing technology, this aircraft investigated numerous advanced aviation concepts and technologies. The fighter-size X-29 explored the use of advanced composites in aircraft construction, variable-camber wing surfaces, a unique forward-swept wing with its thin supercritical airfoil, and strake flaps. The X-29 also demonstrated three specific aerodynamic effects: canard effects, active controls, and aeroelastic tailoring. Canard effects use canards (small wings) as another control surface to manipulate air flow. Active controls enable an airplane to pull air across the plane in specific directions rather than passively letting the air flow over it. Aeroelastic tailoring allows parts of an aircraft to flex slightly when air hits them in a certain way, allowing for maximum flexibility of air flow. Although the X-29 was one of the most unstable of the X-series in maneuvering capability, it was controlled by a computerized fly-by-wire flight control system that overcame the instability, going further than any other aircraft in testing the limits of computer controls. The first flight was December 14, 1984.
http://www.space.com/21793-x29-in-flight.html?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+spaceheadlines+%28SPACE.com+Headline+Feed%29
4.40625
Calculating variance allows you to measure how far a set of numbers is spread out. Variance is one of the descriptors of a probability distribution, and it describes how far numbers lie from the mean. Variance is often used in conjunction with standard deviation, which is the square root of the variance. If you want to know how to calculate the variance of a set of data points, just follow these steps.

1. Write down the formula for calculating variance. The formula for an unbiased estimate of the population variance from a sample of n observations is: s² = Σ [(xi - x̅)²] / (n - 1). The formula for the variance of an entire population is the same except that the denominator is n, not n - 1, but that version should not be used when you are working with a finite sample of observations. Here's what the parts of the formula mean:
- s² = the variance
- Σ = summation, meaning the sum of every term that follows the summation sign
- xi = a sample observation; this represents every term in the set
- x̅ = the mean; this represents the average of all the numbers in the set
- n = the sample size; you can think of this as the number of terms in the set
2. Calculate the sum of the terms. First, create a chart that has a column for the observations (terms), the mean (x̅), the mean subtracted from each term (xi - x̅), and the squares of these differences ((xi - x̅)²). After you've made the chart and placed all of the terms in the first column, simply add up all of the numbers in the set. Let's say you're working with the following numbers: 17, 15, 23, 7, 9, 13. Just add them up: 17 + 15 + 23 + 7 + 9 + 13 = 84.
3. Calculate the mean of the terms. To find the mean of any set of terms, add up the terms and divide the result by the number of terms. In this case, you already know that the sum of the terms is 84. Since there are 6 terms, divide 84 by 6 to find the mean: 84/6 = 14. Write "14" all the way down the column for the mean.
4. Subtract the mean from each term. To fill the third column, take each sample observation and subtract 14, the sample mean. You can check your work by adding up all of the results and confirming that they add up to zero. Here's how to subtract the mean from each sample observation:
- 17 - 14 = 3
- 15 - 14 = 1
- 23 - 14 = 9
- 7 - 14 = -7
- 9 - 14 = -5
- 13 - 14 = -1
5. Square each result. Now that you've subtracted the mean from each sample observation, square each result and write the answer in the fourth column. Remember that all of your results will be positive. Here's how to do it:
- 3² = 9
- 1² = 1
- 9² = 81
- (-7)² = 49
- (-5)² = 25
- (-1)² = 1
6. Calculate the sum of the squared terms. Now add up all of the new terms: 9 + 1 + 81 + 49 + 25 + 1 = 166.
7. Substitute the values into the original equation. Just plug the values into the original equation, remembering that "n" represents the number of data points:
- s² = 166/(6 - 1)
8. Solve. Simply divide 166 by 5. The result is 33.2. If you'd like to find the standard deviation, simply take the square root of 33.2: √33.2 ≈ 5.76. Now you can interpret this data in a larger context. Usually, the variances of two data sets are compared, and the lower number indicates less variation within that data set.
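Worked in code, the same steps are only a few lines. Here is a minimal C sketch (my own illustration, not part of the original article) that reproduces the example's numbers: a sample variance of 33.2 and a standard deviation of about 5.76.

    #include <stdio.h>
    #include <math.h>

    /* Unbiased sample variance: s^2 = sum((xi - mean)^2) / (n - 1). */
    double sample_variance(const double *x, int n) {
        double sum = 0.0, mean, ss = 0.0;
        for (int i = 0; i < n; i++) sum += x[i];
        mean = sum / n;                 /* 84 / 6 = 14 for the data below */
        for (int i = 0; i < n; i++) {
            double d = x[i] - mean;     /* deviation from the mean */
            ss += d * d;                /* accumulate squared deviations */
        }
        return ss / (n - 1);            /* 166 / 5 = 33.2 */
    }

    int main(void) {
        double data[] = {17, 15, 23, 7, 9, 13};
        double var = sample_variance(data, 6);
        printf("variance = %.1f, std dev = %.2f\n", var, sqrt(var));
        return 0;
    }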
- Since the variance is difficult to interpret on its own, this value is usually only calculated as a starting point for calculating the standard deviation.
http://www.wikihow.com/Calculate-Variance
4.21875
A rainbow-like feature known as a 'glory' has been seen by ESA's Venus Express orbiter in the atmosphere of our nearest neighbour, the first time one has been fully imaged on another planet. Rainbows and glories occur when sunlight shines on cloud droplets (water particles in the case of Earth). While rainbows arch across wide swathes of the sky, glories are typically much smaller and comprise a series of coloured concentric rings centred on a bright core. Glories are only seen when the observer is situated directly between the Sun and the cloud particles that are reflecting sunlight. On Earth, they are often seen from aeroplanes, surrounding the shadow of the aircraft on the clouds below, or around the shadow of climbers atop misty mountain peaks. A glory requires two characteristics: the cloud particles must be spherical, and therefore most likely liquid droplets, and they must all be of a similar size. The atmosphere of Venus is thought to contain droplets rich in sulphuric acid. By imaging the clouds with the Sun directly behind the Venus Express spacecraft, scientists hoped to spot a glory in order to determine important characteristics of the cloud droplets. They were successful. The glory in the images here was seen at the Venus cloud tops, 70 km above the planet's surface, on 24 July 2011. It is 1200 km wide as seen from the spacecraft, 6000 km away. From these observations, the cloud particles are estimated to be 1.2 micrometres across, roughly a fiftieth of the width of a human hair. The fact that the glory is 1200 km wide means that the particles at the cloud tops are uniform on at least this scale. The variation in brightness across the rings of the observed glory is different from that expected of clouds containing only sulphuric acid mixed with water, suggesting that other chemistry may be at play. One idea is that the cause is the "UV-absorber", an unknown atmospheric component responsible for mysterious dark markings seen in the cloud tops of Venus at ultraviolet wavelengths. More investigation is needed to draw a firm conclusion. Source: European Space Agency (ESA)
http://www.asdnews.com/news-53939/Venus_glory.htm
4.09375
Did you know that you can use an equation to solve a problem? Creating a model for a problem may also include methods such as drawing a diagram or picture or making a table or chart. Take a look at this dilemma. The triangles below were constructed using toothpicks. Determine the number of toothpicks needed to construct twenty triangles. Do you know how to figure this out? Pay attention to this Concept. Then you will know how to solve this dilemma.

Sometimes if you think of a problem in terms of words and parts it will be easier to write an equation and solve it. Writing a verbal model is similar to making a plan for solving a problem. When you write a verbal model, you are paraphrasing the information stated in the problem. After writing a verbal model, insert the values from the problem to write an equation. Then, use mental math or an inverse operation to solve it. Take a look at this situation.

Monica purchased a pair of tennis shoes on sale for $65.99. The shoes were originally $99.00. Use a verbal model to write and solve an equation to determine the amount of money Monica saved by purchasing the shoes on sale. First write a verbal model to represent the problem: Sale Price + Amount Saved = Original Price. Let "x" represent the amount saved. The equation is 65.99 + x = 99.00.

Solution: Recall that to solve for "x," complete the inverse operation. Since addition is used in the equation, use subtraction to solve. It makes sense to subtract 65.99 from 99.00: x = 99.00 - 65.99 = 33.01, so Monica saved $33.01. This is the answer.

Write an equation for each situation and solve it.
Mary had $12.00 and she spent some amount. She has $4.50 left over. How much did she spend?
John spent twice as much as Mary did. How much did he spend?
A number and sixteen is equal to forty-five.

Now let's go back to the dilemma from the beginning of the Concept. As you can see, three toothpicks were needed to construct one triangle. Two more were needed to construct the second triangle. Therefore, five toothpicks were used to make two triangles. Continue to make more triangles along the row. Each time you construct a new triangle, record the number of toothpicks used on a chart.

|Triangle #:||Toothpick #|
|1||3|
|2||5|
|3||7|
|4||9|

Looking at the table, you can identify a pattern. You can see that two toothpicks are needed each time a new triangle is constructed. You can write a verbal model to express this amount: Total Number of Toothpicks Needed = Two Times the Number of Triangles + One Toothpick. Let "t" represent the number of triangles, so Total Number of Toothpicks Needed = 2t + 1. To determine the number of toothpicks needed to construct twenty triangles, substitute twenty for the variable: 2(20) + 1 = 41. 41 toothpicks are needed to construct twenty triangles.

- Equation - a group of numbers, operations and variables where the quantity on one side of the equal sign is the same as the quantity on the other side of the equal sign.
- Inverse Operation - the opposite operation. Equations can often be solved by using an inverse operation.
- Verbal Model - using words to decipher the mathematical information in a problem. An equation can often be written from a verbal model.

Here is one for you to try on your own. The cost to run a thirty-second commercial on prime time television is seven hundred fifty thousand dollars. Use a verbal model to write and solve an equation to determine the cost per second. Let "x" represent the unknown cost per second. The verbal model is Cost per Second × Number of Seconds = Total Cost, so the equation is 30x = 750,000.

Solution: To solve, divide 750,000 by 30: x = 25,000. Now remember that we were talking about money in this problem, so our answer needs to be written as a money amount. The answer is that it costs $25,000 per second for a thirty-second commercial.
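Because each worked example reduces to undoing a single operation, the method fits in a few lines of code. This C sketch (my own illustration, reusing the two examples above) applies the inverse operation directly:

    #include <stdio.h>

    /* Solve a + x = b by the inverse operation (subtraction). */
    double solve_addition(double a, double b) { return b - a; }

    /* Solve a * x = b by the inverse operation (division). */
    double solve_multiplication(double a, double b) { return b / a; }

    int main(void) {
        /* 65.99 + x = 99.00  ->  x = 99.00 - 65.99 = 33.01 */
        printf("Monica saved $%.2f\n", solve_addition(65.99, 99.00));

        /* 30x = 750000  ->  x = 750000 / 30 = 25000 */
        printf("Cost per second: $%.0f\n", solve_multiplication(30, 750000));
        return 0;
    }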
Directions: Write an equation for each situation and then solve for the variable. Each problem will have two answers: the equation and the value of the variable.
1. An unknown number and three is equal to twelve.
2. John had a pile of golf balls. He lost nine on the course. If he returned home with fourteen golf balls, how many did he start with?
3. Some number and six is equal to thirty.
4. Jessie owes her brother some money. She earned nine dollars and paid off some of her debt. If she still owes him five dollars, how much did she owe to begin with?
5. A farmer has chickens. Six of them went missing during a snowstorm. If there are twelve chickens left, how many did he begin with before the storm?
6. Gasoline costs four dollars per gallon. Kerry put many gallons in his car over a long car trip. If he spent a total of $140.00 on gasoline, how many gallons did he need for the trip?
7. Twenty-seven times a number is 162. What is the number?
8. Marsha divided cookies into groups of 12. If she had 6 dozen cookies when she was done, how many cookies did she start with?
9. The coach divided the students into five teams. There were fourteen students on each team. How many students did the coach begin with?
10. A number plus nineteen is equal to forty.
http://www.ck12.org/algebra/Sentences-as-Single-Variable-Equations/lesson/Solve-Real-World-Problems-by-Writing-and-Solving-Single-Variable-Equations/r12/
4
The change in albedo of arid lands is an indicator of changes in their condition and quality, including density of vegetative cover, erosion, deposition, surficial soil moisture, and man-made change. In general, darkening of an arid land surface indicates an increase in land quality while brightening indicates a decrease in quality, primarily owing to changes in vegetation. Landsat multiband images taken on different dates can be converted to black-and-white albedo images. Subtraction of one image from another, pixel by pixel, results in an albedo change map that can be density sliced to show areas that have brightened or darkened by selected percentages. These maps are then checked in the field to determine the reasons for the changes and to evaluate the changes in land condition and quality. The albedo change mapping technique has been successfully used in the arid lands of western Utah and northern Arizona and has recently been used for detection of coal strip mining activities in northern Alabama. © 1983.
Additional publication details: "Space platform albedo measurements as indicators of change in arid lands."
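The pixel-by-pixel differencing and density slicing described here can be sketched in code. The example below is only an illustration of the general idea; the image size, albedo values, and the ±10% slice threshold are placeholders of mine, not values from the study.

    #include <stdio.h>

    #define W 4  /* toy image width  */
    #define H 1  /* toy image height */

    /* Difference two albedo images (values 0.0-1.0) and density-slice the
       percent change into brightened / darkened / unchanged classes. */
    void albedo_change_map(const float a1[H][W], const float a2[H][W],
                           float threshold_pct) {
        for (int r = 0; r < H; r++) {
            for (int c = 0; c < W; c++) {
                float pct = 100.0f * (a2[r][c] - a1[r][c]) / a1[r][c];
                char tag = (pct >  threshold_pct) ? 'B'   /* brightened */
                         : (pct < -threshold_pct) ? 'D'   /* darkened   */
                         : '.';                           /* unchanged  */
                printf("(%d,%d) %+6.1f%% %c\n", r, c, pct, tag);
            }
        }
    }

    int main(void) {
        float before[H][W] = {{0.30f, 0.30f, 0.25f, 0.40f}};
        float after [H][W] = {{0.36f, 0.30f, 0.20f, 0.41f}};
        albedo_change_map(before, after, 10.0f); /* slice at +/-10% */
        return 0;
    }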
https://pubs.er.usgs.gov/publication/70011446
4.3125
In computer programming, a null-terminated string is a character string stored as an array containing the characters and terminated with a null character ('\0', called NUL in ASCII). Alternative names are C string, which refers to the C programming language, and ASCIIZ (note that C strings do not imply the use of ASCII). The length of a C string is found by searching for the (first) NUL byte. This can be slow, as it takes O(n) (linear) time with respect to the string length. It also means that a NUL cannot be inside the string, as the only NUL is the one marking the end.

Null-terminated strings were produced by the .ASCIZ directive of the PDP-11 assembly languages and the ASCIZ directive of the MACRO-10 macro assembly language for the PDP-10. These predate the development of the C programming language, but other forms of strings were often used. At the time C (and the languages that it was derived from) was developed, memory was extremely limited, so using only one byte of overhead to store the length of a string was attractive. The only popular alternative at that time, usually called a "Pascal string" (though also used by early versions of BASIC), used a leading byte to store the length of the string. This allows the string to contain NUL and meant that finding the length needed only one memory access (O(1), constant time). However, C designer Dennis Ritchie chose to follow the convention of NUL-termination, already established in BCPL, to avoid the limitation on the length of a string caused by holding the count in an 8- or 9-bit slot, and partly because maintaining the count seemed, in his experience, less convenient than using a terminator.

This had some influence on CPU instruction set design. Some CPUs in the 1970s and 1980s, such as the Zilog Z80 and the DEC VAX, had dedicated instructions for handling length-prefixed strings. However, as the NUL-terminated string gained traction, CPU designers began to take it into account, as seen for example in IBM's decision to add the "Logical String Assist" instructions to the ES/9000 520 in 1992.

Common operations on null-terminated strings include:
- Determining the length of a string
- Copying one string to another
- Appending (concatenating) one string to another
- Finding the first (or last) occurrence of a character within a string
- Finding within a string the first occurrence of a character in (or not in) a given set
- Finding the first occurrence of a substring within a string
- Comparing two strings lexicographically
- Splitting a string into multiple substrings
- Formatting numeric or string values into a printable output string
- Parsing a printable string into numeric values
- Converting between single-byte and wide character string encodings
- Converting single-byte or wide character strings to and from multi-byte character strings

While simple to implement, this representation has been prone to errors and performance problems. The NUL termination has historically created security problems. A NUL byte inserted into the middle of a string will truncate it unexpectedly. A common bug was to not allocate the additional space for the NUL, so it was written over adjacent memory. Another was to not write the NUL at all, which was often not detected during testing because a NUL was already there by chance from previous use of the same block of memory. Due to the expense of finding the length, many programs did not bother before copying a string to a fixed-size buffer, causing a buffer overflow if it was too long.
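The linear-time length scan and the bounded-copy discipline discussed above look like this in C. This is a minimal sketch: my_strlen and my_strlcpy are illustrative re-implementations in the spirit of the standard strlen and the BSD strlcpy mentioned below, not library code.

    #include <stdio.h>

    /* O(n) length: walk the array until the terminating NUL byte. */
    size_t my_strlen(const char *s) {
        const char *p = s;
        while (*p != '\0') p++;      /* every byte before the NUL is content */
        return (size_t)(p - s);
    }

    /* Bounded copy in the spirit of BSD strlcpy: copies at most size-1
       bytes, always NUL-terminates, and returns the full source length
       so the caller can detect truncation. */
    size_t my_strlcpy(char *dst, const char *src, size_t size) {
        size_t i = 0;
        if (size > 0) {
            for (; i < size - 1 && src[i] != '\0'; i++)
                dst[i] = src[i];
            dst[i] = '\0';           /* invariant: dst is always a C string */
        }
        while (src[i] != '\0') i++;  /* finish measuring src */
        return i;
    }

    int main(void) {
        char buf[8];
        size_t need = my_strlcpy(buf, "null-terminated", sizeof buf);
        if (need >= sizeof buf)      /* would not fit: no silent overflow */
            printf("truncated: \"%s\" (needed %zu bytes)\n", buf, need + 1);
        return 0;
    }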
The inability to store a NUL requires that string data and binary data be kept distinct and handled by different functions (with the latter requiring the length of the data to also be supplied). This can lead to code redundancy and errors when the wrong function is used. The speed problems with finding the length can usually be mitigated by combining it with another operation that is O(n) anyway, such as in strlcpy. However, this does not always result in an intuitive API.

Null-terminated strings require that the encoding not use the zero code anywhere. It is not possible to store every possible ASCII or UTF-8 string in a null-terminated string, as the encoding of the NUL character is a zero byte. However, it is common to store the subset of ASCII or UTF-8 (every character except NUL) in null-terminated strings. Some systems use "modified UTF-8", which encodes the NUL character as two non-zero bytes (0xC0, 0x80) and thus allows all possible strings to be stored. (This is not allowed by the UTF-8 standard, as it is a security risk: a 0xC0, 0x80 NUL might be seen as a string terminator during security validation but as a character when actually used.) UTF-16 uses 2-byte integers, and since either byte may be zero, UTF-16 strings cannot be stored in a null-terminated byte string. However, some languages implement a string of 16-bit UTF-16 code units, terminated by a 16-bit NUL character. (Again, the NUL character, which encodes as a single zero code unit, is the only character that cannot be stored.)

Many attempts have been made to make C string handling less error prone. One strategy is to add safer and more useful functions, such as strlcpy, while deprecating the use of unsafe functions such as gets. Another is to add an object-oriented wrapper around C strings so that only safe calls can be made. On modern systems memory usage is less of a concern, so a multi-byte length is acceptable (if you have so many small strings that the space used by the length is a concern, you will have enough duplicates that a hash table will use even less memory). Most replacements for C strings use a 32-bit or larger length value. Examples include the C++ Standard Template Library std::string, the Qt QString, the MFC CString, and the C-based implementation CFString from Core Foundation, as well as its Objective-C sibling NSString from Foundation, both by Apple. More complex structures, such as the rope, may also be used to store strings.

References:
- Dennis M. Ritchie (1993). "The development of the C language". Proc. 2nd History of Programming Languages Conf.
- Kamp, Poul-Henning (25 July 2011). "The Most Expensive One-byte Mistake". ACM Queue 9 (7). ISSN 1542-7730. Retrieved 2 August 2011.
- Ritchie, Dennis (2003). "The Development of the C Language". Retrieved 9 November 2011.
- Rain Forest Puppy (9 September 1999). "Perl CGI problems". Phrack Magazine (artofhacking.com) 9 (55): 7. Retrieved 6 January 2012.
- "UTF-8, a transformation format of ISO 10646". Retrieved 19 September 2013.
- "Unicode/UTF-8-character table". Retrieved 13 September 2013.
- Kuhn, Markus. "UTF-8 and Unicode FAQ". Retrieved 13 September 2013.
https://en.wikipedia.org/wiki/Null-terminated_string
4.28125
Proficient readers ask themselves questions about a text. Asking and answering questions like "what's important here?" and "who's speaking now?" helps readers interact with the text and engage prior knowledge. They are also addressing Common Core State Standards related to key ideas and details and the integration of knowledge and ideas. Teach struggling readers how to engage in self-questioning to increase engagement and comprehension. Use hypertext and collaborative documents to support your students' experimentation with this approach. Using a self-questioning strategy can encourage struggling learners to monitor their understanding of the text. Your clear explanations can highlight critical features of the self-questioning approach, especially when you integrate a range of technology tools as suggested below.
http://powerupwhatworks.org/strategy-guide/self-questioning
4.0625
A broad, 4-kilometer-tall feature on the seafloor about 1500 kilometers east of Japan is the world's largest volcano, a new analysis suggests. At its tallest point, Tamu Massif (at lower left and center in the main image; oblique view in inset) lies more than 2 km below the ocean's surface. Unlike most volcanic seamounts, which are steep and typically no more than a few tens of kilometers across, the gently sloping Tamu Massif covers 310,000 square kilometers, about the same as the British Isles, or the base of Mars's Olympus Mons, the solar system's largest known volcano. (Its base is shown in dark purple at lower right, for comparison.) The massif's slopes are exceptionally shallow, often less than 1°, thanks to lava that flowed freely before hardening. Researchers think the Tamu Massif is a single volcano because rock samples (labeled dots) have similar chemistry, and seismic surveys show that broad layers of rock emanate from the center of the feature. Today, Tamu Massif sits far from the edge of the Pacific tectonic plate and is presumed dead, but 145 million years ago the caldera plumbed the intersection of three tectonic plates, the researchers note today in Nature Geoscience. They haven't finished dating rock samples drilled from the peak, but it's possible that the entire seamount could have been formed in a million years or less.
http://www.sciencemag.org/news/2013/09/scienceshot-massive-undersea-volcano-world-s-largest
4.0625
An amplifier is an electronic device that increases the power of a signal. It does this by taking energy from a power supply and controlling the output to match the input signal shape but with a larger amplitude. In this sense, an amplifier modulates the output of the power supply to make the output signal stronger than the input signal. An amplifier is effectively the opposite of an attenuator: while an amplifier provides gain, an attenuator provides loss. An amplifier can either be a separate piece of equipment or an electrical circuit within another device. The ability to amplify is fundamental to modern electronics, and amplifiers are widely used in almost all electronic equipment. Amplifiers can be categorized in different ways. One is by the frequency of the electronic signal being amplified: audio amplifiers amplify signals in the audio (sound) range of less than 20 kHz, while RF amplifiers amplify frequencies in the radio frequency range between 20 kHz and 300 GHz. Another is by which quantity, voltage or current, is being amplified: amplifiers can be divided into voltage amplifiers, current amplifiers, transconductance amplifiers, and transresistance amplifiers. A further distinction is whether the output is a linear or nonlinear representation of the input. Amplifiers can also be categorized by their physical placement in the signal chain.

The first practical electronic device that could amplify was the Audion (triode) vacuum tube, invented in 1906 by Lee De Forest, which led to the first amplifiers. The terms "amplifier" and "amplification" (from the Latin amplificare, 'to enlarge or expand') were first used for this new capability around 1915, when triodes became widespread. For the next 50 years, vacuum tubes were the only devices that could amplify. All amplifiers used them until the 1960s, when transistors appeared. Most amplifiers today use transistors, though tube amplifiers are still produced.

Figures of merit
Amplifier quality is characterized by a list of specifications that includes:
- Gain, the ratio between the magnitude of output and input signals
- Bandwidth, the width of the useful frequency range
- Efficiency, the ratio between the power of the output and total power consumption
- Linearity, the degree of proportionality between input and output
- Noise, a measure of undesired noise mixed into the output
- Output dynamic range, the ratio of the largest and the smallest useful output levels
- Slew rate, the maximum rate of change of the output
- Rise time, settling time, ringing and overshoot that characterize the step response
- Stability, the ability to avoid self-oscillation

Amplifier types
Amplifiers are described according to their input and output properties. They exhibit the property of gain, a multiplication factor that relates the magnitude of the output signal to the input signal.
The gain may be specified as the ratio of output voltage to input voltage (voltage gain), output power to input power (power gain), or some combination of current, voltage, and power. In many cases, with input and output in the same unit, gain is unitless (though often expressed in decibels (dB)). The four basic types of amplifiers are as follows:
- Voltage amplifier: the most common type of amplifier. An input voltage is amplified to a larger output voltage. The amplifier's input impedance is high and the output impedance is low.
- Current amplifier: changes an input current to a larger output current. The amplifier's input impedance is low and the output impedance is high.
- Transconductance amplifier: responds to a changing input voltage by delivering a related changing output current.
- Transresistance amplifier: responds to a changing input current by delivering a related changing output voltage. Other names for the device are transimpedance amplifier and current-to-voltage converter.

In practice, amplifier power gain depends on the source and load impedances, as well as on the inherent voltage and current gain. A radio frequency (RF) amplifier design typically optimizes impedances for power transfer, while audio and instrumentation amplifier designs normally optimize input and output impedance for least loading and highest signal integrity. An amplifier that is said to have a gain of 20 dB might have a voltage gain of ten times and an available power gain of much more than 20 dB (a power ratio of 100), yet actually deliver a much lower power gain if, for example, the input is from a 600 ohm microphone and the output connects to a 47 kilohm input socket for a power amplifier.

In most cases an amplifier is linear; that is, it provides constant gain for any normal input level and output signal. If the gain is not linear, e.g. if the signal clips, the output signal distorts. There are, however, cases where variable gain is useful; certain signal processing applications use exponential gain amplifiers.

Many different electronic amplifier types exist that are specific to areas such as radio and television transmitters and receivers, high-fidelity ("hi-fi") stereo equipment, microcomputers and other digital equipment, and guitar and other instrument amplifiers. Essential components include active devices, such as vacuum tubes or transistors. A brief introduction to the many types of electronic amplifiers follows.

Power amplifier
The term power amplifier is a relative term with respect to the amount of power delivered to the load and/or provided by the power supply circuit. In general the power amplifier is the last 'amplifier' or actual circuit in a signal chain (the output stage) and is the amplifier stage that requires attention to power efficiency. Efficiency considerations lead to the various classes of power amplifier, based on the biasing of the output transistors or tubes (see power amplifier classes).

Power amplifiers by application
- Audio power amplifiers, used to drive loudspeakers. A speaker setup can use both sides (channels) to maximize volume, but each side then receives half of what the amplifier could potentially supply.
- RF power amplifiers, typical in transmitter final stages (see also: linear amplifier)
- Servo motor controllers, which amplify a control voltage where linearity is not important
- Piezoelectric audio amplifiers, which include a DC-to-DC converter to generate the high voltage output required to drive piezoelectric speakers

Power amplifier circuits
Power amplifier circuits include the following types:
- Vacuum tube/valve, hybrid or transistor power amplifiers
- Push-pull output or single-ended output stages

Vacuum-tube (valve) amplifiers
According to Symons, while semiconductor amplifiers have largely displaced valve amplifiers for low power applications, valve amplifiers are much more cost effective in high power applications such as "radar, countermeasures equipment, or communications equipment" (p. 56). Many microwave amplifiers are specially designed valves, such as the klystron, gyrotron, traveling wave tube, and crossed-field amplifier, and these microwave valves provide much greater single-device power output at microwave frequencies than solid-state devices (p. 59). Valve/tube amplifiers also have the following uses in other areas:
- electric guitar amplification
- in Russian military aircraft, for their electromagnetic pulse (EMP) tolerance
- niche audio, for their sound qualities (recording and audiophile equipment)

Transistor amplifiers
The essential role of this active element is to magnify an input signal to yield a significantly larger output signal. The amount of magnification (the "forward gain") is determined by the external circuit design as well as the active device. Many common active devices in transistor amplifiers are bipolar junction transistors (BJTs) and metal oxide semiconductor field-effect transistors (MOSFETs). Applications are numerous; some common examples are audio amplifiers in a home stereo or PA system, RF high power generation for semiconductor equipment, and RF and microwave applications such as radio transmitters. Transistor-based amplifiers can be realized using various configurations: for example, with a bipolar junction transistor we can realize a common base, common collector or common emitter amplifier; using a MOSFET we can realize a common gate, common source or common drain amplifier. Each configuration has different characteristics (gain, impedance, and so on).

Magnetic amplifiers
These are devices somewhat similar to a transformer, where one winding is used to control the saturation of a magnetic core and hence alter the impedance of the other winding. They have largely fallen out of use due to developments in semiconductor amplifiers but are still useful in HVDC control, and in nuclear power control circuitry, due to their not being affected by radioactivity.

Operational amplifiers (op-amps)
An operational amplifier is an amplifier circuit with very high open loop gain and differential inputs that employs external feedback to control its transfer function, or gain. Though the term today commonly applies to integrated circuits, the original operational amplifier design used valves, and later designs used discrete transistor circuits.

Fully differential amplifiers
A fully differential amplifier is a solid state integrated circuit amplifier that uses external feedback to control its transfer function or gain. It is similar to the operational amplifier, but also has differential output pins. These are usually constructed using BJTs or FETs.

Video amplifiers
These deal with video signals and have varying bandwidths depending on whether the video signal is for SDTV, EDTV, HDTV 720p or 1080i/p, etc.
The specification of the bandwidth itself depends on what kind of filter is used, and on the point (-1 dB or -3 dB, for example) at which the bandwidth is measured. Certain requirements for step response and overshoot are necessary for an acceptable TV image.

Oscilloscope vertical amplifiers
These deal with video signals that drive an oscilloscope display tube, and can have bandwidths of about 500 MHz. The specifications on step response, rise time, overshoot, and aberrations can make designing these amplifiers difficult. One of the pioneers in high bandwidth vertical amplifiers was the Tektronix company.

Distributed amplifiers
These use transmission lines to temporally split the signal and amplify each portion separately to achieve higher bandwidth than possible from a single amplifier. The outputs of each stage are combined in the output transmission line. This type of amplifier was commonly used on oscilloscopes as the final vertical amplifier. The transmission lines were often housed inside the display tube glass envelope.

Switched mode amplifiers
These nonlinear amplifiers have much higher efficiencies than linear amps, and are used where the power saving justifies the extra complexity.

Negative resistance devices
Negative resistance devices, such as tunnel diodes, can also be used to provide amplification.

Microwave amplifiers
Travelling wave tube amplifiers: Traveling wave tube amplifiers (TWTAs) are used for high power amplification at low microwave frequencies. They typically can amplify across a broad spectrum of frequencies; however, they are usually not as tunable as klystrons.
Klystrons: Klystrons are specialized linear-beam vacuum devices, designed to provide high power, widely tunable amplification of millimetre and sub-millimetre waves. Klystrons are designed for large scale operations and, despite having a narrower bandwidth than TWTAs, have the advantage of coherently amplifying a reference signal, so their output may be precisely controlled in amplitude, frequency and phase.

Musical instrument amplifiers
An audio power amplifier is usually used to amplify signals such as music or speech. In the mid 1960s, amplifiers began to gain popularity because of their relatively low price ($50) and because guitars were the most popular instruments of the time. Several factors are especially important in the selection of musical instrument amplifiers (such as guitar amplifiers) and other audio amplifiers (although the whole of the sound system, components from microphones to loudspeakers, affects these parameters):
- Frequency response: not just the frequency range, but the requirement that the signal level vary so little across the audible frequency range that the human ear notices no variation. A typical specification for audio amplifiers may be 20 Hz to 20 kHz +/- 0.5 dB.
- Power output: the power level obtainable with little distortion, to obtain a sufficiently loud sound pressure level from the loudspeakers.
- Low distortion: all amplifiers and transducers distort to some extent. They cannot be perfectly linear, but aim to pass signals without affecting the harmonic content of the sound more than the human ear can tolerate. That tolerance of distortion, and indeed the possibility that some "warmth" or second harmonic distortion (tube sound) improves the "musicality" of the sound, are subjects of great debate.

Before coming onto the music scene, amplifiers were heavily used in cinema. At the premiere of Noah's Ark in 1929, the movie's director (Michael Curtiz) used amplification for a festival following the screening.

Classification of amplifier stages and systems
Many alternative classifications address different aspects of amplifier designs, and they all express some particular perspective relating the design parameters to the objectives of the circuit. Amplifier design is always a compromise of numerous factors, such as cost, power consumption, real-world device imperfections, and a multitude of performance specifications. Below are several different approaches to classification.

Input and output variables
Electronic amplifiers use one variable presented as either a current or a voltage. Either current or voltage can be used as input and either as output, leading to four types of amplifiers. In idealized form they are represented by each of the four types of dependent source used in linear analysis, namely:

|Input||Output||Dependent source||Amplifier type|
|I||I||Current controlled current source (CCCS)||Current amplifier|
|I||V||Current controlled voltage source (CCVS)||Transresistance amplifier|
|V||I||Voltage controlled current source (VCCS)||Transconductance amplifier|
|V||V||Voltage controlled voltage source (VCVS)||Voltage amplifier|

Each type of amplifier in its ideal form has an ideal input and output resistance that is the same as that of the corresponding dependent source:

|Amplifier type||Dependent source||Input impedance||Output impedance|
|Current amplifier||CCCS||0||∞|
|Transresistance amplifier||CCVS||0||0|
|Transconductance amplifier||VCCS||∞||∞|
|Voltage amplifier||VCVS||∞||0|

In practice the ideal impedances are only approximated. For any particular circuit, a small-signal analysis is often used to find the impedance actually achieved. A small-signal AC test current Ix is applied to the input or output node, all external sources are set to AC zero, and the corresponding alternating voltage Vx across the test current source determines the impedance seen at that node as R = Vx / Ix.

Amplifiers designed to attach to a transmission line at input and/or output, especially RF amplifiers, do not fit into this classification approach. Rather than dealing with voltage or current individually, they ideally couple with an input and/or output impedance matched to the transmission line impedance, that is, they match ratios of voltage to current. Many real RF amplifiers come close to this ideal. Although, for a given appropriate source and load impedance, RF amplifiers can be characterized as amplifying voltage or current, they fundamentally are amplifying power.

One set of classifications for amplifiers is based on which device terminal is common to both the input and the output circuit. In the case of bipolar junction transistors, the three classes are common emitter, common base, and common collector. For field-effect transistors, the corresponding configurations are common source, common gate, and common drain; for triode vacuum devices, common cathode, common grid, and common plate. The common emitter (or common source, common cathode, etc.) is most often configured to provide amplification of a voltage applied between base and emitter, and the output signal taken between collector and emitter is inverted relative to the input. The common collector arrangement applies the input voltage between base and collector, and takes the output voltage between emitter and collector. This causes negative feedback, and the output voltage tends to 'follow' the input voltage. (This arrangement is also used where the input presents a high impedance and does not load the signal source, though the voltage amplification is less than 1 (unity).)
The common-collector circuit is, therefore, better known as an emitter follower, source follower, or cathode follower.

Unilateral or bilateral
An amplifier whose output exhibits no feedback to its input side is described as 'unilateral'. The input impedance of a unilateral amplifier is independent of the load, and its output impedance is independent of the signal source impedance. An amplifier that uses feedback to connect part of the output back to the input is a bilateral amplifier. Bilateral amplifier input impedance depends on the load, and output impedance on the signal source impedance. All amplifiers are bilateral to some degree; however, they may often be modeled as unilateral under operating conditions where feedback is small enough to neglect for most purposes, simplifying analysis (see the common base article for an example). An amplifier design often deliberately applies negative feedback to tailor amplifier behavior. Some feedback, positive or negative, is unavoidable and often undesirable, introduced, for example, by parasitic elements such as the inherent capacitance between input and output of devices such as transistors, and by capacitive coupling of external wiring. Excessive frequency-dependent positive feedback can turn an amplifier into an oscillator. Linear unilateral and bilateral amplifiers can be represented as two-port networks.

Inverting or non-inverting
Another way to classify amplifiers is by the phase relationship of the input signal to the output signal. An 'inverting' amplifier produces an output 180 degrees out of phase with the input signal (that is, a polarity inversion or mirror image of the input as seen on an oscilloscope). A 'non-inverting' amplifier maintains the phase of the input signal waveforms. An emitter follower is a type of non-inverting amplifier, indicating that the signal at the emitter of a transistor is following (that is, matching with unity gain, but perhaps with an offset) the input signal. A voltage follower is also a non-inverting amplifier with unity gain. This description can apply to a single stage of an amplifier or to a complete amplifier system.

Other amplifiers may be classified by their function or output characteristics. These functional descriptions usually apply to complete amplifier systems or sub-systems and rarely to individual stages.
- A servo amplifier indicates an integrated feedback loop to actively control the output at some desired level. A DC servo indicates use at frequencies down to DC levels, where the rapid fluctuations of an audio or RF signal do not occur. These are often used in mechanical actuators, or in devices such as DC motors that must maintain a constant speed or torque. An AC servo amp can do this for some AC motors.
- A linear amplifier responds to different frequency components independently, and does not generate harmonic distortion or intermodulation distortion. No amplifier can provide perfect linearity (even the most linear amplifier has some nonlinearities, since the amplifying devices, transistors or vacuum tubes, follow nonlinear power laws such as square laws and rely on circuitry techniques to reduce those effects).
- A nonlinear amplifier generates significant distortion and so changes the harmonic content; there are situations where this is useful.
Amplifier circuits intentionally providing a non-linear transfer function include:
- a device like a silicon controlled rectifier or a transistor used as a switch, which may be employed to turn a load such as a lamp either fully ON or fully OFF based on a threshold in a continuously variable input.
- a non-linear amplifier in an analog computer or true RMS converter, for example, which can provide a special transfer function, such as logarithmic or square-law.
- a Class C RF amplifier, which may be chosen because it can be very efficient, but is non-linear. Following such an amplifier with a "tank" tuned circuit can reduce unwanted harmonics (distortion) sufficiently to make it useful in transmitters, or some desired harmonic may be selected by setting the resonant frequency of the tuned circuit to a higher frequency rather than the fundamental frequency, as in frequency multiplier circuits.
- Automatic gain control circuits, which require an amplifier's gain to be controlled by the time-averaged amplitude so that the output amplitude varies little when weak stations are being received. The non-linearities are assumed to be arranged so that the relatively small signal amplitude suffers from little distortion (cross-channel interference or intermodulation) yet is still modulated by the relatively large gain-control DC voltage.
- AM detector circuits that use amplification, such as anode-bend detectors, precision rectifiers and infinite impedance detectors (so excluding unamplified detectors such as cat's-whisker detectors), as well as peak detector circuits, which rely on changes in amplification based on the signal's instantaneous amplitude to derive a direct current from an alternating current input.
- Operational amplifier comparator and detector circuits.
- A wideband amplifier has a precise amplification factor over a wide frequency range, and is often used to boost signals for relay in communications systems. A narrowband amp amplifies a specific narrow range of frequencies, to the exclusion of other frequencies.
- An RF amplifier amplifies signals in the radio frequency range of the electromagnetic spectrum, and is often used to increase the sensitivity of a receiver or the output power of a transmitter.
- An audio amplifier amplifies audio frequencies. This category subdivides into small signal amplification, and power amps that are optimised for driving speakers, sometimes with multiple amps grouped together as separate or bridgeable channels to accommodate different audio reproduction requirements. Frequently used terms within audio amplifiers include:
- Preamplifier (preamp), which may include a phono preamp with RIAA equalization, or tape head preamps with CCIR equalisation filters. They may include filters or tone control circuitry.
- Power amplifier (normally drives loudspeakers), headphone amplifiers, and public address amplifiers.
- Stereo amplifiers imply two channels of output (left and right), though the term simply means "solid" sound (referring to three-dimensional), so quadraphonic stereo was used for amplifiers with four channels. 5.1 and 7.1 systems refer to home theatre systems with 5 or 7 normal spatial channels, plus a subwoofer channel.
- Buffer amplifiers, which may include emitter followers, provide a high impedance input for a device (perhaps another amplifier, or perhaps an energy-hungry load such as lights) that would otherwise draw too much current from the source. Line drivers are a type of buffer that feeds long or interference-prone interconnect cables, possibly with differential outputs through twisted pair cables.
A special type of amplifier, originally used in analog computers, is widely used in measuring instruments, for signal processing, and for many other uses. These are called operational amplifiers or op-amps. The "operational" name is because this type of amplifier can be used in circuits that perform mathematical algorithmic functions, or "operations", on input signals to obtain specific types of output signals. Modern op-amps are usually provided as integrated circuits, rather than constructed from discrete components. A typical modern op-amp has differential inputs (one "inverting", one "non-inverting") and one output. An idealised op-amp has the following characteristics:
- Infinite input impedance (so it does not load the circuitry at its input)
- Zero output impedance
- Infinite gain
- Zero propagation delay
The performance of an op-amp with these characteristics is entirely defined by the (usually passive) components that form a negative feedback loop around it; the amplifier itself does not affect the output. All real-world op-amps fall short of the idealised specification above—but some modern components have remarkable performance and come close in some respects.

Interstage coupling method
Amplifiers are sometimes classified by the coupling method of the signal at the input, at the output, or between stages. Different types include:
- Resistive-capacitive (RC) coupled amplifier, using a network of resistors and capacitors. By design these amplifiers cannot amplify DC signals, as the capacitors block the DC component of the input signal. RC-coupled amplifiers were used very often in circuits with vacuum tubes or discrete transistors. In the age of the integrated circuit, a few more transistors on a chip are much cheaper and smaller than a capacitor.
- Inductive-capacitive (LC) coupled amplifier, using a network of inductors and capacitors. This kind of amplifier is most often used in selective radio-frequency circuits.
- Transformer coupled amplifier, using a transformer to match impedances or to decouple parts of the circuits. Quite often LC-coupled and transformer-coupled amplifiers cannot be distinguished, as a transformer is a kind of inductor.
- Direct coupled amplifier, using no impedance or bias matching components. This class of amplifier was very uncommon in the vacuum tube days, when the anode (output) voltage was at several hundred volts or more and the grid (input) voltage at a few volts negative, so direct coupling was used only if the gain was specified down to DC (e.g., in an oscilloscope). In the context of modern electronics, developers are encouraged to use directly coupled amplifiers whenever possible.
Depending on the frequency range and other properties, amplifiers are designed according to different principles.
- Frequency ranges down to DC are used only when this property is needed. DC amplification leads to specific complications that are avoided if possible; DC-blocking capacitors can be added to remove DC and sub-sonic frequencies from audio amplifiers.
- Depending on the frequency range specified, different design principles must be used. Up to the MHz range, only "discrete" (lumped) properties need be considered; e.g., a terminal has an input impedance.
- As soon as any connection within the circuit gets longer than perhaps 1% of the wavelength of the highest specified frequency (e.g., at 100 MHz the wavelength is 3 m, so the critical connection length is approx. 3 cm), design properties change radically.
For example, a specified length and width of a PCB trace can be used as a selective or impedance-matching entity.
- Above a few hundred MHz, it gets difficult to use discrete elements, especially inductors. In most cases, PCB traces of very closely defined shapes are used instead.
The frequency range handled by an amplifier might be specified in terms of bandwidth (normally implying a response that is 3 dB down when the frequency reaches the specified bandwidth), or by specifying a frequency response that is within a certain number of decibels between a lower and an upper frequency (e.g. "20 Hz to 20 kHz plus or minus 1 dB").

Power amplifier classes
Power amplifier circuits (output stages) are classified as A, B, AB and C for analog designs, and as class D and E for switching designs, based on the proportion of each input cycle (conduction angle) during which an amplifying device passes current. The image of the conduction angle derives from amplifying a sinusoidal signal. If the device is always on, the conducting angle is 360°. If it is on for only half of each cycle, the angle is 180°. The angle of flow is closely related to the amplifier power efficiency. The various classes are introduced below, followed by a more detailed discussion under their individual headings further down.

Conduction angle classes
- Class A: 100% of the input signal is used (conduction angle Θ = 360°). The active element remains conducting all of the time.
- Class B: 50% of the input signal is used (Θ = 180°); the active element carries current half of each cycle, and is turned off for the other half.
- Class AB: intermediate between class A and B; the two active elements each conduct more than half of the time.
- Class C: less than 50% of the input signal is used (conduction angle Θ < 180°).
A "class D" amplifier uses some form of pulse-width modulation to control the output devices; the conduction angle of each device is no longer related directly to the input signal but instead varies in pulse width. These are sometimes called "digital" amplifiers because the output device is switched fully on or off, and does not carry current proportional to the signal amplitude.
- Additional classes: there are several other amplifier classes, although they are mainly variations of the previous classes. For example, class-G and class-H amplifiers are marked by variation of the supply rails (in discrete steps or in a continuous fashion, respectively) following the input signal. Wasted heat on the output devices can be reduced as excess voltage is kept to a minimum. The amplifier that is fed with these rails itself can be of any class. These kinds of amplifiers are more complex, and are mainly used for specialized applications, such as very high-power units. Also, class-E and class-F amplifiers are commonly described in the literature for radio-frequency applications, where efficiency of the traditional classes is important, yet several aspects deviate substantially from their ideal values. These classes use harmonic tuning of their output networks to achieve higher efficiency and can be considered a subset of class C due to their conduction-angle characteristics.

Class A
Amplifying devices operating in class A conduct over the entire range of the input cycle. A class-A amplifier is distinguished by the output stage devices being biased for class A operation.
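To make the conduction-angle definitions above concrete, here is a small numeric sketch (the bias thresholds are illustrative choices, not from the source):

```python
import numpy as np

# Conduction angle for the classic classes, modeled as a sine input and a
# conduction threshold. Thresholds are illustrative, chosen so the resulting
# angles match the definitions in the text.
theta = np.linspace(0, 2 * np.pi, 100_000, endpoint=False)
signal = np.sin(theta)

for label, threshold in [("Class A", -1.1),   # always conducting -> 360 deg
                         ("Class AB", -0.3),  # more than half the cycle
                         ("Class B", 0.0),    # exactly half the cycle -> 180 deg
                         ("Class C", 0.5)]:   # less than half -> here ~120 deg
    conducting = signal > threshold
    angle_deg = 360.0 * conducting.mean()
    print(f"{label}: conduction angle ~ {angle_deg:.0f} deg")
```

The thresholds stand in for the bias point: raising the bias toward class A keeps the device conducting longer, while lowering it toward class C shortens the current pulses.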
Subclass A2 is sometimes used to refer to vacuum-tube class-A stages that drive the grid slightly positive on signal peaks for slightly more power than normal class A (A1, where the grid is always negative). This, however, incurs higher signal distortion.

Advantages of class-A amplifiers
- Class-A designs are simpler than other classes; for example, class-AB and -B designs require two connected devices in the circuit (push–pull output), each to handle one half of the waveform, whereas class A can use a single device (single-ended).
- The amplifying element is biased so that the device is always conducting, and the quiescent (small-signal) collector current (for transistors; drain current for FETs or anode/plate current for vacuum tubes) is close to the most linear portion of its transconductance curve.
- Because the device is never 'off', there is no "turn on" time, no problems with charge storage, and generally better high-frequency performance and feedback loop stability (and usually fewer high-order harmonics).
- The point where the device comes closest to being 'off' is not at 'zero signal', so the problems of crossover distortion associated with class-AB and -B designs are avoided.
- Best for low signal levels of radio receivers due to low distortion.

Disadvantages of class-A amplifiers
- Class-A amplifiers are inefficient. A theoretical maximum efficiency of 50% is obtainable in a push-pull topology, and only 25% in a single-ended topology, unless deliberate use of nonlinearities is made (such as in square-law output stages). In a power amplifier, this not only wastes power and limits operation with batteries, but increases operating costs and requires higher-rated output devices. Inefficiency comes from the standing current, which must be roughly half the maximum output current, and from the fact that a large part of the power supply voltage is present across the output device at low signal levels. If high output power is needed from a class-A circuit, the power supply and accompanying heat become significant. For every watt delivered to the load, the amplifier itself, at best, uses an extra watt. For high-power amplifiers this means very large and expensive power supplies and heat sinks.
Class-A power amplifier designs have largely been superseded by more efficient designs, though they remain popular with some hobbyists, mostly for their simplicity. There is a market for expensive high-fidelity class-A amplifiers, considered a "cult item" among audiophiles, mainly for their absence of crossover distortion and their reduced odd-harmonic and high-order harmonic distortion.

Single-ended and triode class-A amplifiers
Some hobbyists who prefer class-A amplifiers also prefer the use of thermionic valve (or "tube") designs instead of transistors, for several reasons:
- Single-ended output stages have an asymmetrical transfer function, meaning that even-order harmonics in the created distortion tend not to cancel out (as they do in push–pull output stages). For tubes, or FETs, most distortion is second-order harmonics, from the square-law transfer characteristic, which to some listeners produces a "warmer" and more pleasant sound.
- For those who prefer low distortion figures, the use of tubes with class A (generating little odd-harmonic distortion, as mentioned above) together with symmetrical circuits (such as push–pull output stages, or balanced low-level stages) results in the cancellation of most of the even distortion harmonics, hence the removal of most of the distortion.
- Historically, valve amplifiers often used a class-A power amplifier simply because valves are large and expensive; many class-A designs use only a single device.
A classic application for a pair of class-A devices is the long-tailed pair, which is exceptionally linear and forms the basis of many more complex circuits, including many audio amplifiers and almost all op-amps. Class-A amplifiers are often used in the output stages of high-quality op-amps (although the accuracy of the bias in low-cost op-amps such as the 741 may result in class A, class AB or class B performance, varying from device to device or with temperature). They are sometimes used as medium-power, low-efficiency, and high-cost audio power amplifiers. The power consumption is unrelated to the output power: at idle (no input), the power consumption is essentially the same as at high output volume. The result is low efficiency and high heat dissipation.

Class B
Class-B amplifiers only amplify half of the input wave cycle, thus creating a large amount of distortion, but their efficiency is greatly improved and is much better than that of class A. Class-B amplifiers are also favoured in battery-operated devices, such as transistor radios. Class B has a maximum theoretical efficiency of π/4 (≈ 78.5%). This is because the amplifying element is switched off altogether half of the time, and so cannot dissipate power then. A single class-B element is rarely found in practice, though it was used for driving the loudspeaker beeps in early IBM Personal Computers, and it can be used in RF power amplifiers where the distortion levels are less important. However, class C is more commonly used for this.
A practical circuit using class-B elements is the push–pull stage, such as a very simplified complementary pair arrangement. Here, complementary or quasi-complementary devices are each used for amplifying the opposite halves of the input signal, which is then recombined at the output. This arrangement gives excellent efficiency, but can suffer from the drawback that there is a small mismatch in the cross-over region – at the "joins" between the two halves of the signal – as one output device has to take over supplying power exactly as the other finishes. This is called crossover distortion. An improvement is to bias the devices so they are not completely off when they are not in use. This approach is called class AB operation. Class-B amplifiers offer higher efficiency than class-A amplifiers using a single active device.

Class AB
Class AB is widely considered a good compromise for amplifiers, since much of the time the music signal is quiet enough that the signal stays in the "class A" region, where it is amplified with good fidelity, and when it passes out of this region it is by definition large enough that the distortion products typical of class B are relatively small. The crossover distortion can be reduced further by using negative feedback. In class-AB operation, each device operates the same way as in class B over half the waveform, but also conducts a small amount on the other half. As a result, the region where both devices are simultaneously nearly off (the "dead zone") is reduced. The result is that when the waveforms from the two devices are combined, the crossover is greatly minimised or eliminated altogether.
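The π/4 (≈ 78.5%) class-B limit quoted above follows from a short idealized calculation. Assume dual supply rails at ±V_CC and a full-swing sine output of peak V_pk into a load; each device then draws a half-wave-rectified sine of peak I_pk from its rail, whose average is I_pk/π:

```latex
% Idealized class-B push-pull efficiency (sketch of the standard textbook result):
\begin{aligned}
P_{\text{load}} &= \tfrac{1}{2} V_{pk} I_{pk}, \qquad
P_{\text{supply}} = 2\, V_{CC}\, \frac{I_{pk}}{\pi}, \\[4pt]
\eta &= \frac{P_{\text{load}}}{P_{\text{supply}}}
     = \frac{\pi}{4} \cdot \frac{V_{pk}}{V_{CC}}
     \;\le\; \frac{\pi}{4} \approx 78.5\% \quad (\text{at } V_{pk} = V_{CC}).
\end{aligned}
```

Real class-B stages fall short of this because the output cannot actually swing rail to rail and the devices dissipate some power while conducting.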
The exact choice of quiescent current (the standing current through both devices when there is no signal) makes a large difference to the level of distortion (and to the risk of thermal runaway, which may damage the devices). Often, the bias voltage applied to set this quiescent current must be adjusted with the temperature of the output transistors. (For example, in the circuit at the beginning of the article, the diodes would be mounted physically close to the output transistors, and specified to have a matched temperature coefficient.) Another approach (often used with thermally tracking bias voltages) is to include small-value resistors in series with the emitters.
Class AB sacrifices some efficiency over class B in favor of linearity, and is thus less efficient (below 78.5% for full-amplitude sine waves in transistor amplifiers, typically; much less is common in class-AB vacuum-tube amplifiers). It is typically much more efficient than class A.
Sometimes a numeral is added for vacuum-tube stages. If grid current is not permitted to flow, the class is AB1. If grid current is allowed to flow (adding more distortion, but giving slightly higher output power), the class is AB2.

Class C
Class-C amplifiers conduct less than 50% of the input signal, and the distortion at the output is high, but high efficiencies (up to 90%) are possible. The usual application for class-C amplifiers is in RF transmitters operating at a single fixed carrier frequency, where the distortion is controlled by a tuned load on the amplifier. The input signal is used to switch the active device, causing pulses of current to flow through a tuned circuit forming part of the load.
The class-C amplifier has two modes of operation: tuned and untuned. The diagram shows a waveform from a simple class-C circuit without the tuned load. This is called untuned operation, and the analysis of the waveforms shows the massive distortion that appears in the signal. When the proper load (e.g., an inductive-capacitive filter plus a load resistor) is used, two things happen. The first is that the output's bias level is clamped, with the average output voltage equal to the supply voltage. This is why tuned operation is sometimes called a clamper. This restores the waveform to its proper shape, despite the amplifier having only a one-polarity supply. This is directly related to the second phenomenon: the waveform on the center frequency becomes less distorted. The residual distortion is dependent upon the bandwidth of the tuned load, with the center frequency seeing very little distortion, but greater attenuation the farther from the tuned frequency the signal gets.
The tuned circuit resonates at one frequency, the fixed carrier frequency, and so the unwanted frequencies are suppressed, and the wanted full signal (sine wave) is extracted by the tuned load. The signal bandwidth of the amplifier is limited by the Q-factor of the tuned circuit, but this is not a serious limitation. Any residual harmonics can be removed using a further filter.
In practical class-C amplifiers a tuned load is invariably used. In one common arrangement the resistor shown in the circuit above is replaced with a parallel-tuned circuit consisting of an inductor and capacitor in parallel, whose components are chosen to resonate at the frequency of the input signal. Power can be coupled to a load by transformer action with a secondary coil wound on the inductor.
The average voltage at the drain is then equal to the supply voltage, and the signal voltage appearing across the tuned circuit varies from near zero to near twice the supply voltage during the RF cycle. The input circuit is biased so that the active element (e.g., transistor) conducts for only a fraction of the RF cycle, usually one third (120 degrees) or less. The active element conducts only while the drain voltage is passing through its minimum. By this means, power dissipation in the active device is minimised, and efficiency increased. Ideally, the active element would pass only an instantaneous current pulse while the voltage across it is zero: it would then dissipate no power and 100% efficiency would be achieved. However, practical devices have a limit to the peak current they can pass, so the pulse must be widened, to around 120 degrees, to obtain a reasonable amount of power, and the efficiency is then 60–70%.

Class D
In the class-D amplifier the active devices (transistors) function as electronic switches instead of linear gain devices; they are either on or off. The analog signal is converted to a stream of pulses that represents the signal by pulse-width modulation, pulse-density modulation, delta-sigma modulation or a related modulation technique before being applied to the amplifier. The time-average power value of the pulses is directly proportional to the analog signal, so after amplification the signal can be converted back to an analog signal by a passive low-pass filter. The purpose of the output filter is to smooth the pulse stream to an analog signal, removing the high-frequency spectral components of the pulses. The frequency of the output pulses is typically ten or more times the highest frequency in the input signal to be amplified, so that the filter can adequately reduce the unwanted harmonics and accurately reproduce the input.
The main advantage of a class-D amplifier is power efficiency. Because the output pulses have a fixed amplitude, the switching elements (usually MOSFETs, but vacuum tubes, and at one time bipolar transistors, were used) are switched either completely on or completely off, rather than operated in linear mode. A MOSFET operates with the lowest resistance when fully on and thus (excluding when fully off) has the lowest power dissipation in that condition. Compared to an equivalent class-AB device, a class-D amplifier's lower losses permit the use of a smaller heat sink for the MOSFETs while also reducing the amount of input power required, allowing for a lower-capacity power supply design. Therefore, class-D amplifiers are typically smaller than an equivalent class-AB amplifier.
Another advantage of the class-D amplifier is that it can operate from a digital signal source without requiring a digital-to-analog converter (DAC) to convert the signal to analog form first. If the signal source is in digital form, such as in a digital media player or computer sound card, the digital circuitry can convert the binary digital signal directly to a pulse-width modulation signal that is applied to the amplifier, simplifying the circuitry considerably.
Class-D amplifiers are widely used to control motors—but are now also used as power amplifiers, with extra circuitry that converts the analogue input to a much higher-frequency pulse-width-modulated signal. Switching power supplies have even been modified into crude class-D amplifiers (though typically these only reproduce low frequencies with acceptable accuracy).
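Here is a minimal sketch of the modulation–demodulation chain just described, using naturally sampled PWM (comparing the audio against a fast triangle carrier) with a crude moving-average stand-in for the output LC filter; all rates and values are illustrative:

```python
import numpy as np

# Naturally sampled PWM: compare the audio against a much faster triangle
# carrier; the comparator output is a two-level pulse train whose
# time-averaged value tracks the audio. All rates/values are illustrative.
fs = 10_000_000                       # simulation rate, Hz
f_audio, f_carrier = 1_000, 100_000   # carrier 100x the audio frequency
t = np.arange(int(0.005 * fs)) / fs   # 5 ms of signal

audio = 0.8 * np.sin(2 * np.pi * f_audio * t)
carrier = 2 * np.abs(2 * ((t * f_carrier) % 1) - 1) - 1   # triangle in [-1, 1]

pwm = np.where(audio > carrier, 1.0, -1.0)  # the "switching" output stage

# Crude moving-average stand-in for the output LC low-pass filter:
kernel = np.ones(1000) / 1000               # ~100 us averaging window
recovered = np.convolve(pwm, kernel, mode="same")

err = np.max(np.abs(recovered[2000:-2000] - audio[2000:-2000]))
print(f"worst-case tracking error ~ {err:.3f}")
```

In a real amplifier the comparator output drives the output MOSFETs and a passive LC filter does the smoothing; the simulation only shows that the time-averaged pulse train tracks the audio.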
High-quality class-D audio power amplifiers have now appeared on the market. These designs have been said to rival traditional AB amplifiers in terms of quality. An early use of class-D amplifiers was high-power subwoofer amplifiers in cars. Because subwoofers are generally limited to a bandwidth of no higher than 150 Hz, the switching speed for the amplifier does not have to be as high as for a full-range amplifier, allowing simpler designs. Class-D amplifiers for driving subwoofers are relatively inexpensive in comparison to class-AB amplifiers.
The letter D used to designate this amplifier class is simply the next letter after C and, although occasionally used as such, does not stand for digital. Class-D and class-E amplifiers are sometimes mistakenly described as "digital" because the output waveform superficially resembles a pulse-train of digital symbols, but a class-D amplifier merely converts an input waveform into a continuously pulse-width-modulated analog signal. (A digital waveform would be pulse-code modulated.)

Class E
The class-E/F amplifier is a highly efficient switching power amplifier, typically used at such high frequencies that the switching time becomes comparable to the duty time. As in the class-D amplifier, the transistor is connected via a series LC circuit to the load, and connected via a large L (inductor) to the supply voltage. The supply voltage is connected to ground via a large capacitor to prevent any RF signals leaking into the supply. The class-E amplifier adds a C (capacitor) between the transistor and ground and uses a defined L1 to connect to the supply voltage.
The following description ignores DC, which can be added easily afterwards. The above-mentioned C and L are in effect a parallel LC circuit to ground. When the transistor is on, it pushes current through the series LC circuit into the load, and some current begins to flow into the parallel LC circuit to ground. Then the series LC circuit swings back and compensates the current into the parallel LC circuit. At this point the current through the transistor is zero and it is switched off. Both LC circuits are now filled with energy in C and L0. The whole circuit performs a damped oscillation. The damping by the load has been adjusted so that some time later the energy from the inductors has gone into the load, but the energy in the capacitors peaks at the original value, in turn restoring the original voltage, so that the voltage across the transistor is zero again and it can be switched on.
With the load, frequency, and duty cycle (0.5) as given parameters, and the constraint that the voltage is not only restored but peaks at the original voltage, the four parameters (L, L0, C and C0) are determined. The class-E amplifier takes the finite on-resistance into account and tries to make the current touch the bottom at zero. This means that the voltage and the current at the transistor are symmetric with respect to time. The Fourier transform allows an elegant formulation to generate the complicated LC networks: the first harmonic is passed into the load, all even harmonics are shorted, and all higher odd harmonics are open.
Class E uses a significant amount of second-harmonic voltage. The second harmonic can be used to reduce the overlap when the switching edges have finite sharpness. For this to work, energy on the second harmonic has to flow from the load into the transistor, and no source for this is visible in the circuit diagram.
In reality, the impedance is mostly reactive, and the only reason for it is that class E is a class-F (see below) amplifier with a much simplified load network, and thus has to deal with imperfections. In many amateur simulations of class-E amplifiers, sharp current edges are assumed, nullifying the very motivation for class E; measurements near the transit frequency of the transistors show very symmetric curves, which look very similar to class-F simulations.
The class-E amplifier was invented in 1972 by Nathan O. Sokal and Alan D. Sokal, and details were first published in 1975. Some earlier reports on this operating class had been published in Russian.

Class F
In push–pull amplifiers and in CMOS, the even harmonics of both transistors just cancel. Experiment shows that a square wave can be generated by those amplifiers. Theoretically, square waves consist of odd harmonics only. In a class-D amplifier, the output filter blocks all harmonics; i.e., the harmonics see an open load. So even small currents in the harmonics suffice to generate a voltage square wave. The current is in phase with the voltage applied to the filter, but the voltage across the transistors is out of phase. Therefore, there is minimal overlap between current through the transistors and voltage across the transistors. The sharper the edges, the lower the overlap.
While in class D transistors and the load exist as two separate modules, class F admits imperfections like the parasitics of the transistor and tries to optimise the global system to have a high impedance at the harmonics. Of course there must be a finite voltage across the transistor to push the current across the on-state resistance. Because the combined current through both transistors is mostly in the first harmonic, it looks like a sine. That means that in the middle of the square wave the maximum current has to flow, so it may make sense to have a dip in the square wave, or in other words, to allow some overswing of the voltage square wave.
A class-F load network by definition has to transmit below a cutoff frequency and reflect above it. Any frequency lying below the cutoff and having its second harmonic above the cutoff can be amplified; that is an octave of bandwidth. On the other hand, an inductive-capacitive series circuit with a large inductance and a tunable capacitance may be simpler to implement. By reducing the duty cycle below 0.5, the output amplitude can be modulated. The voltage square waveform degrades, but any overheating is compensated by the lower overall power flowing. Any load mismatch behind the filter can only act on the first-harmonic current waveform; clearly only a purely resistive load makes sense, and then the lower the resistance, the higher the current. Class F can be driven by a sine or by a square wave; for a sine, the input can be tuned by an inductor to increase gain. If class F is implemented with a single transistor, the filter is more complicated, since it must short the even harmonics. All previous designs use sharp edges to minimise the overlap.

Classes G and H
There is a variety of amplifier designs that enhance class-AB output stages with more efficient techniques to achieve greater efficiencies with low distortion. These designs are common in large audio amplifiers, since the heatsinks and power transformers would be prohibitively large (and costly) without the efficiency increases.
The terms "class G" and "class H" are used interchangeably to refer to different designs, varying in definition from one manufacturer or paper to another. Class-G amplifiers (which use "rail switching" to decrease power consumption and increase efficiency) are more efficient than class-AB amplifiers. These amplifiers provide several power rails at different voltages and switch between them as the signal output approaches each level. Thus, the amplifier increases efficiency by reducing the wasted power at the output transistors. Class-G amplifiers are more efficient than class AB but less efficient when compared to class D, however, they do not have the electromagnetic interference effects of class D. Class-H amplifiers take the idea of class G one step further creating an infinitely variable supply rail. This is done by modulating the supply rails so that the rails are only a few volts larger than the output signal at any given time. The output stage operates at its maximum efficiency all the time. Switched-mode power supplies can be used to create the tracking rails. Significant efficiency gains can be achieved but with the drawback of more complicated supply design and reduced THD performance. In common designs, a voltage drop of about 10V is maintained over the output transistors in Class H circuits. The picture above shows positive supply voltage of the output stage and the voltage at the speaker output. The boost of the supply voltage is shown for a real music signal. The voltage signal shown is thus a larger version of the input, but has been changed in sign (inverted) by the amplification. Other arrangements of amplifying device are possible, but that given (that is, common emitter, common source or common cathode) is the easiest to understand and employ in practice. If the amplifying element is linear, the output is a faithful copy of the input, only larger and inverted. In practice, transistors are not linear, and the output only approximates the input. nonlinearity from any of several sources is the origin of distortion within an amplifier. The class of amplifier (A, B, AB or C) depends on how the amplifying device is biased. The diagrams omit the bias circuits for clarity. Any real amplifier is an imperfect realization of an ideal amplifier. An important limitation of a real amplifier is that the output it generates is ultimately limited by the power available from the power supply. An amplifier saturates and clips the output if the input signal becomes too large for the amplifier to reproduce or exceeds operational limits for the device. The Doherty, a hybrid configuration, is currently receiving renewed attention. It was invented in 1934 by William H. Doherty for Bell Laboratories—whose sister company, Western Electric, manufactured radio transmitters. The Doherty amplifier consists of a class-B primary or carrier stages in parallel with a class-C auxiliary or peak stage. The input signal splits to drive the two amplifiers, and a combining network sums the two output signals. Phase shifting networks are used in inputs and outputs. During periods of low signal level, the class-B amplifier efficiently operates on the signal and the class-C amplifier is cutoff and consumes little power. During periods of high signal level, the class-B amplifier delivers its maximum power and the class-C amplifier delivers up to its maximum power. 
The efficiency of previous AM transmitter designs was proportional to modulation, but with average modulation typically around 20%, transmitters were limited to less than 50% efficiency. In Doherty's design, even with zero modulation, a transmitter could achieve at least 60% efficiency.
The Doherty concept was considerably refined by Continental Electronics Manufacturing Company of Dallas, TX, a successor to Western Electric in broadcast transmitters. Perhaps the ultimate refinement was the screen-grid modulation scheme invented by Joseph B. Sainton. The Sainton amplifier consists of a class-C primary or carrier stage in parallel with a class-C auxiliary or peak stage. The stages are split and combined through 90-degree phase-shifting networks as in the Doherty amplifier. The unmodulated radio-frequency carrier is applied to the control grids of both tubes. Carrier modulation is applied to the screen grids of both tubes. The bias points of the carrier and peak tubes are different, and are established such that the peak tube is cut off when modulation is absent (and the amplifier is producing rated unmodulated carrier power), whereas both tubes contribute twice the rated carrier power during 100% modulation (as four times the carrier power is required to achieve 100% modulation). As both tubes operate in class C, a significant improvement in efficiency is thereby achieved in the final stage. In addition, as the tetrode carrier and peak tubes require very little drive power, a significant improvement in efficiency within the driver stage is achieved as well (317C, et al.). The released version of the Sainton amplifier employs a cathode-follower modulator, not a push–pull modulator. Previous Continental Electronics designs, by James O. Weldon and others, retained most of the characteristics of the Doherty amplifier but added screen-grid modulation of the driver (317B, et al.).
The Doherty amplifier remains in use in very-high-power AM transmitters, but for lower-power AM transmitters, vacuum-tube amplifiers in general were eclipsed in the 1980s by arrays of solid-state amplifiers, which could be switched on and off with much finer granularity in response to the requirements of the input audio. However, interest in the Doherty configuration has been revived by cellular-telephone and wireless-Internet applications, where the sum of several constant-envelope users creates an aggregate AM result. The main challenge of the Doherty amplifier for digital transmission modes is in aligning the two stages and getting the class-C amplifier to turn on and off very quickly. Recently, Doherty amplifiers have found widespread use in cellular base station transmitters for GHz frequencies. Implementations for transmitters in mobile devices have also been demonstrated.
Amplifiers are implemented using active elements of different kinds:
- The first active elements were relays. They were, for example, used in transcontinental telegraph lines: a weak current was used to switch the voltage of a battery to the outgoing line.
- For transmitting audio, carbon microphones were used as the active element. This was used to modulate a radio-frequency source in one of the first AM audio transmissions, by Reginald Fessenden on Dec. 24, 1906.
- Power control circuitry used magnetic amplifiers until the latter half of the twentieth century, when high-power FETs, and their easy interfacing to the newly developed digital circuitry, took over.
- Audio and most low-power amplifiers used vacuum tubes exclusively until the 1960s.
Today, tubes are used for specialist audio applications such as guitar amplifiers and audiophile amplifiers. Many broadcast transmitters still use vacuum tubes.
- In the 1960s, the transistor started to take over. These days, discrete transistors are still used in high-power amplifiers and in specialist audio devices.
- Beginning in the 1970s, more and more transistors were connected on a single chip, thereby creating the integrated circuit. A large number of amplifiers commercially available today are based on integrated circuits.
For special purposes, other active elements have been used. For example, in the early days of satellite communication, parametric amplifiers were used. The core circuit was a diode whose capacitance was changed by an RF signal created locally. Under certain conditions, this RF signal provided energy that was modulated by the extremely weak satellite signal received at the earth station.
The practical amplifier circuit described here could be the basis for a moderate-power audio amplifier. It features a typical (though substantially simplified) design as found in modern amplifiers, with a class-AB push–pull output stage, and uses some overall negative feedback. Bipolar transistors are shown, but this design would also be realizable with FETs or valves.
The input signal is coupled through capacitor C1 to the base of transistor Q1. The capacitor allows the AC signal to pass, but blocks the DC bias voltage established by resistors R1 and R2, so that any preceding circuit is not affected by it. Q1 and Q2 form a differential amplifier (an amplifier that multiplies the difference between two inputs by some constant), in an arrangement known as a long-tailed pair. This arrangement is used to conveniently allow the use of negative feedback, which is fed from the output to Q2 via R7 and R8. The negative feedback into the difference amplifier allows the amplifier to compare the input to the actual output.
The amplified signal from Q1 is fed directly to the second stage, Q3, which is a common-emitter stage that provides further amplification of the signal and the DC bias for the output stages, Q4 and Q5. R6 provides the load for Q3 (a better design would probably use some form of active load here, such as a constant-current sink). So far, all of the amplifier is operating in class A. The output pair are arranged in class-AB push–pull, also called a complementary pair. They provide the majority of the current amplification (while consuming low quiescent current) and directly drive the load, connected via DC-blocking capacitor C2. The diodes D1 and D2 provide a small amount of constant-voltage bias for the output pair, just biasing them into the conducting state so that crossover distortion is minimized. That is, the diodes push the output stage firmly into class-AB mode (assuming that the base-emitter drop of the output transistors is reduced by heat dissipation).
This design is simple, but a good basis for a practical design because it automatically stabilises its operating point, since feedback internally operates from DC up through the audio range and beyond. Further circuit elements would probably be found in a real design that would roll off the frequency response above the needed range to prevent the possibility of unwanted oscillation.
Also, the use of fixed diode bias as shown here can cause problems if the diodes are not both electrically and thermally matched to the output transistors – if the output transistors turn on too much, they can easily overheat and destroy themselves, as the full current from the power supply is not limited at this stage. A common solution to help stabilise the output devices is to include some emitter resistors, typically one ohm or so. Calculating the values of the circuit's resistors and capacitors is done based on the components employed and the intended use of the amplifier.
Two of the most common circuits are:
- A cascode amplifier, a two-stage circuit consisting of a transconductance amplifier followed by a buffer amplifier.
- A log amplifier, a circuit in which the output voltage is a constant times the natural logarithm of the input.
For the basics of radio-frequency amplifiers using valves, see Valved RF amplifiers.

Notes on implementation
Real-world amplifiers are imperfect.
- The power supply may influence the output, so it must be considered in the design.
- A power amplifier is effectively an input-signal-controlled power regulator. It regulates the power sourced from the power supply or mains to the amplifier's load. The power output from a power amplifier cannot exceed the power input to it.
- The amplifier circuit has an "open loop" performance. This is described by various parameters (gain, slew rate, output impedance, distortion, bandwidth, signal-to-noise ratio, etc.).
- Many modern amplifiers use negative feedback techniques to hold the gain at the desired value and reduce distortion. Negative loop feedback has the intended effect of electrically damping loudspeaker motion, thereby damping the mechanical dynamic performance of the loudspeaker.
- When assessing rated amplifier power output, it is useful to consider the applied load, the signal type (e.g., speech or music), the required power output duration (i.e., short-time or continuous), and the required dynamic range (e.g., recorded or live audio).
- In high-powered audio applications that require long cables to the load (e.g., cinemas and shopping centres), it may be more efficient to connect to the load at line output voltage, with matching transformers at the source and loads. This avoids long runs of heavy speaker cables.
- Preventing instability or overheating requires care to ensure that solid-state amplifiers are adequately loaded. Most have a rated minimum load impedance.
- A summing circuit is typical in applications that must combine many inputs or channels to form a composite output.
- All amplifiers generate heat through electrical losses. The amplifier must dissipate this heat via convection or forced-air cooling. Heat can damage components or reduce their service life. Designers and installers must also consider the heating effects on adjacent equipment.
Different power supply types result in many different methods of bias. Bias is a technique by which active devices are set to operate in a particular region, or by which the DC component of the output signal is set to the midpoint between the maximum voltages available from the power supply. Most amplifiers use several devices at each stage; they are typically matched in specifications except for polarity. Matched inverted-polarity devices are called complementary pairs.
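To put rough numbers on the biasing just described, here is a back-of-envelope DC bias calculation for a voltage-divider-biased transistor stage of the kind used for Q1 above; the component values are hypothetical, chosen only to show the arithmetic:

```python
# Rough DC bias-point arithmetic for a voltage-divider-biased BJT stage.
# All values are hypothetical; a real design iterates on these numbers.
V_CC = 12.0           # supply voltage, volts
R1, R2 = 47e3, 10e3   # base divider (R1 to V_CC, R2 to ground), ohms
R_E = 1e3             # emitter resistor, ohms
V_BE = 0.65           # assumed base-emitter drop, volts

# Stiff-divider approximation: ignore base current.
V_B = V_CC * R2 / (R1 + R2)   # base voltage set by the divider
V_E = V_B - V_BE              # emitter sits one diode drop below the base
I_C = V_E / R_E               # collector current ~ emitter current

print(f"V_B = {V_B:.2f} V, V_E = {V_E:.2f} V, I_C ~ {I_C*1e3:.2f} mA")
# -> V_B = 2.11 V, V_E = 1.46 V, I_C ~ 1.46 mA
```

A real design then checks that the divider current is much larger than the base current (the "stiff divider" assumption) and revisits the values for headroom and gain.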
Class-A amplifiers generally use only one device, unless the power supply is set to provide both positive and negative voltages, in which case a dual-device symmetrical design may be used. Class-C amplifiers, by definition, use a single-polarity supply.
Amplifiers often have multiple stages in cascade to increase gain. Each stage of these designs may be a different type of amplifier to suit the needs of that stage. For instance, the first stage might be a class-A stage, feeding a class-AB push–pull second stage, which then drives a class-G final output stage, taking advantage of the strengths of each type, while minimizing their weaknesses.
"Interactive Sound Installations Using Microcomputers". |Wikimedia Commons has media related to Electronic amplifiers.| - Rane audio's guide to amplifier classes - Design and analysis of a basic class D amplifier - Conversion: distortion factor to distortion attenuation and THD - An alternate topology called the grounded bridge amplifier - pdf - Contains an explanation of different amplifier classes - pdf - Reinventing the power amplifier - pdf - Anatomy of the power amplifier, including information about classes - Tons of Tones - Site explaining non linear distortion stages in Amplifier Models - Class D audio amplifiers, white paper - pdf - Class E Radio Transmitters - Tutorials, Schematics, Examples, and Construction Details
https://en.wikipedia.org/wiki/Amplifier
4.25
When you look at an image of Mercury, it looks like a dry, airless world. But you might be surprised to know that Mercury does have an atmosphere. Not the kind of atmosphere that we have here on Earth, or even the thin atmosphere that surrounds Mars. But Mercury's atmosphere is currently being studied by scientists, and by the newly arrived MESSENGER spacecraft. Mercury's original atmosphere dissipated shortly after the planet formed 4.6 billion years ago with the rest of the Solar System. This was because of Mercury's lower gravity, and because it's so close to the Sun and receives the constant buffeting of the solar wind. Its current atmosphere is almost negligible.
What is Mercury's atmosphere made of? It has a tenuous atmosphere made up of hydrogen, helium, oxygen, sodium, calcium, potassium and water vapor. Astronomers think this current atmosphere is constantly being replenished by a variety of sources: particles of the Sun's solar wind, volcanic outgassing, radioactive decay of elements on Mercury's surface, and the dust and debris kicked up by micrometeorites constantly buffeting its surface. Without these sources of replenishment, Mercury's atmosphere would be carried away by the solar wind relatively quickly.
Mercury atmospheric composition: In 2008, NASA's MESSENGER spacecraft discovered water vapor in Mercury's atmosphere. It's thought that this water is created when hydrogen and oxygen atoms meet in the atmosphere. Two of those components are possible indicators of life as we know it: methane and water vapor (indirectly). Water or water ice is believed to be a necessary component for life. The presence of water vapor in the atmosphere of Mercury indicates that there is water or water ice somewhere on the planet. Evidence of water ice has been found at the poles, where the bottoms of craters are never exposed to light. Sometimes, methane is a byproduct of waste from living organisms. The methane in Mercury's atmosphere is believed to come from volcanism, geothermal processes, and hydrothermal activity. Methane is an unstable gas and requires a constant and very active source, because studies have shown that the methane is destroyed in less than one Earth year. It is thought that it originates from peroxides and perchlorates in the soil, or that it condenses and evaporates seasonally from clathrates.
Despite how small the Mercurian atmosphere is, it has been broken down into four components by NASA scientists: the lower, middle, and upper atmosphere, and the exosphere. The lower atmosphere is a warm region (around 210 K). It is warmed by the combination of airborne dust (1.5 micrometers in diameter) and heat radiated from the surface. This airborne dust gives the planet its ruddy brown appearance. The middle atmosphere contains a jetstream like Earth's. The upper atmosphere is heated by the solar wind, and its temperatures are much higher than at the surface. The higher temperatures separate the gases. The exosphere starts at about 200 km and has no clear end; it just tapers off into space.
While that may sound like a lot of atmosphere separating the planet from the solar wind and ultraviolet radiation, it is not. Helping Mercury hold on to its atmosphere is its magnetic field. While gravity helps hold the gases to the surface, the magnetic field helps to deflect the solar wind around the planet, much like it does here on Earth. This deflection allows a smaller gravitational pull to hold some form of an atmosphere.
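To put rough numbers on why low gravity makes an atmosphere hard to hold, here is a standard scale-height estimate (textbook formula; the temperatures and masses below are approximate and are not from the article):

```python
import math

# Atmospheric scale height H = k*T / (m*g): the altitude over which
# pressure falls by a factor of e. A larger H (relative to the planet)
# and higher thermal speeds mean gas escapes more easily.
k = 1.380649e-23          # Boltzmann constant, J/K

def scale_height(T, molar_mass_amu, g):
    m = molar_mass_amu * 1.66054e-27   # particle mass, kg
    return k * T / (m * g)             # metres

# Approximate figures: Mercury dayside ~700 K, g = 3.7 m/s^2;
# Earth ~290 K, g = 9.81 m/s^2; sodium (23 amu) vs. N2 (28 amu).
print(f"Mercury, Na: H ~ {scale_height(700, 23, 3.7)/1e3:.0f} km")
print(f"Earth,  N2:  H ~ {scale_height(290, 28, 9.81)/1e3:.1f} km")
```

The scale height for sodium on hot Mercury comes out several times Earth's nitrogen value even though Mercury is a far smaller planet, which is one way of seeing why gas escapes so readily there.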
The atmosphere of Mercury is one of the most tenuous in the Solar System. The solar wind still blows much of it away, so sources on the planet are constantly replenishing it. Hopefully, the MESSENGER spacecraft will help to discover those sources and increase our knowledge of the innermost planet.
http://www.universetoday.com/22088/atmosphere-of-mercury/
4.1875
In an ellipse the sum of the focal distances is constant; and in an hyperbola the difference of the focal distances is constant. An oval is never mistaken for a circle, nor an hyperbola for an ellipsis. But after you have demonstrated to him the properties of the hyperbola and its asymptote, the apparent absurdity vanishes. The curve is in this case called an hyperbola (see fig. 20). In the hyperbola we have the mathematical demonstration of the error of an axiom. Two of the sides of the triangle in this proposition constitute a special form of the hyperbola. These curves—the ellipse, the parabola, hyperbola—play a large part in the subsequent history of astronomy and mechanics. The axes of an hyperbola bisect the angles between the asymptotes. If the cone is cut off vertically on the dotted line, A, the curve is a hyperbola. With a certain speed it will assume the parabola, and with a greater the hyperbola. 1660s, from Latinized form of Greek hyperbole "extravagance," literally "a throwing beyond" (see hyperbole). Perhaps so called because the inclination of the plane to the base of the cone exceeds that of the side of the cone.
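In modern notation, the properties these quotations allude to can be stated compactly (standard definitions, added here for reference):

```latex
% Standard hyperbola centered at the origin with transverse axis 2a,
% foci F_1, F_2, and focal distances r_1, r_2 of a point P on the curve:
\frac{x^{2}}{a^{2}} - \frac{y^{2}}{b^{2}} = 1,
\qquad \lvert r_1 - r_2 \rvert = 2a,
\qquad \text{asymptotes: } y = \pm \frac{b}{a}\,x .
% (For the ellipse, by contrast, the sum r_1 + r_2 = 2a is constant.)
```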
http://dictionary.reference.com/browse/hyperbola
4.1875
A hydrothermal vent is a fissure in a planet's surface from which geothermally heated water issues. Hydrothermal vents are commonly found near volcanically active places, areas where tectonic plates are moving apart, ocean basins, and hotspots. Hydrothermal vents exist because the earth is both geologically active and has large amounts of water on its surface and within its crust. Common types on land include hot springs, fumaroles and geysers. Under the sea, hydrothermal vents may form features called black smokers. Relative to the majority of the deep sea, the areas around submarine hydrothermal vents are biologically more productive, often hosting complex communities fueled by the chemicals dissolved in the vent fluids. Chemosynthetic bacteria and archaea form the base of the food chain, supporting diverse organisms, including giant tube worms, clams, limpets and shrimp. Active hydrothermal vents are believed to exist on Jupiter's moon Europa and Saturn's moon Enceladus, and ancient hydrothermal vents have been speculated to exist on Mars.
Hydrothermal vents in the deep ocean typically form along the mid-ocean ridges, such as the East Pacific Rise and the Mid-Atlantic Ridge. These are locations where two tectonic plates are diverging and new crust is being formed.
The water that issues from seafloor hydrothermal vents consists mostly of sea water drawn into the hydrothermal system close to the volcanic edifice through faults and porous sediments or volcanic strata, plus some magmatic water released by the upwelling magma. In terrestrial hydrothermal systems, the majority of water circulated within the fumarole and geyser systems is meteoric water plus ground water that has percolated down into the thermal system from the surface, but it also commonly contains some portion of metamorphic water, magmatic water, and sedimentary formational brine that is released by the magma. The proportion of each varies from location to location.
In contrast to the approximately 2 °C ambient water temperature at these depths, water emerges from these vents at temperatures ranging from 60 °C up to as high as 464 °C. Due to the high hydrostatic pressure at these depths, water may exist in either its liquid form or as a supercritical fluid at such temperatures. The critical point of (pure) water is 375 °C at a pressure of 218 atmospheres. However, introducing salinity into the fluid raises the critical point to higher temperatures and pressures. The critical point of seawater (3.2 wt. % NaCl) is 407 °C and 298.5 bars, corresponding to a depth of ~2960 m below sea level. Accordingly, if a hydrothermal fluid with a salinity of 3.2 wt. % NaCl vents above 407 °C and 298.5 bars, it is supercritical. Furthermore, the salinity of vent fluids has been shown to vary widely due to phase separation in the crust. The critical point for lower-salinity fluids is at lower temperature and pressure conditions than that for seawater, but higher than that for pure water. For example, a vent fluid with a 2.24 wt. % NaCl salinity has its critical point at 400 °C and 280.5 bars. Thus, water emerging from the hottest parts of some hydrothermal vents can be a supercritical fluid, possessing physical properties between those of a gas and those of a liquid.
Examples of supercritical venting are found at several sites. Sister Peak (Comfortless Cove Hydrothermal Field, elevation -2996 m) vents low-salinity, phase-separated, vapor-type fluids. Sustained venting was not found to be supercritical, but a brief injection of 464 °C was well above supercritical conditions. A nearby site, Turtle Pits, was found to vent low-salinity fluid at 407 °C, which is above the critical point of the fluid at that salinity. A vent site in the Cayman Trough named Beebe, which is the world's deepest known hydrothermal site at ~5000 m below sea level, has shown sustained supercritical venting at 401 °C and 2.3 wt% NaCl. Although supercritical conditions have been observed at several sites, it is not yet known what significance, if any, supercritical venting has in terms of hydrothermal circulation, mineral deposit formation, geochemical fluxes or biological activity.
The initial stages of a vent chimney begin with the deposition of the mineral anhydrite. Sulfides of copper, iron, and zinc then precipitate in the chimney gaps, making it less porous over the course of time. Vent growths on the order of 30 cm per day have been recorded. An April 2007 exploration of the deep-sea vents off the coast of Fiji found those vents to be a significant source of dissolved iron.

Black smokers and white smokers
Some hydrothermal vents form roughly cylindrical chimney structures. These form from minerals that are dissolved in the vent fluid. When the superheated water contacts the near-freezing sea water, the minerals precipitate out to form particles which add to the height of the stacks. Some of these chimney structures can reach heights of 60 m. An example of such a towering vent was "Godzilla", a structure in the Pacific Ocean near Oregon that rose to 40 m before it fell over in 1996.
A black smoker or sea vent is a type of hydrothermal vent found on the seabed, typically in the abyssal and hadal zones. They appear as black, chimney-like structures that emit a cloud of black material. Black smokers typically emit particles with high levels of sulfur-bearing minerals, or sulfides. Black smokers are formed in fields hundreds of meters wide when superheated water from below Earth's crust comes through the ocean floor. This water is rich in dissolved minerals from the crust, most notably sulfides. When it comes in contact with cold ocean water, many minerals precipitate, forming a black, chimney-like structure around each vent. The deposited metal sulfides can become massive sulfide ore deposits in time.
Black smokers were first discovered in 1977 on the East Pacific Rise by scientists from Scripps Institution of Oceanography. They were observed using a deep submergence vehicle called ALVIN belonging to the Woods Hole Oceanographic Institution. Now, black smokers are known to exist in the Atlantic and Pacific Oceans, at an average depth of 2100 metres. The most northerly black smokers are a cluster of five named Loki's Castle, discovered in 2008 by scientists from the University of Bergen at 73°N, on the Mid-Atlantic Ridge between Greenland and Norway. These black smokers are of interest as they are in a more stable area of the Earth's crust, where tectonic forces are less and consequently fields of hydrothermal vents are less common. The world's deepest known black smokers are located in the Cayman Trough, 5,000 m (3.1 miles) below the ocean's surface.
White smoker vents emit lighter-hued minerals, such as those containing barium, calcium and silicon. These vents also tend to have lower-temperature plumes.
Life has traditionally been seen as driven by energy from the sun, but deep-sea organisms have no access to sunlight, so they must depend on nutrients found in the dusty chemical deposits and hydrothermal fluids in which they live.
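Before turning to vent biology, a quick sanity check on the depth-pressure figures quoted earlier, using the standard hydrostatic relation and a rough mean seawater density:

```python
# Hydrostatic pressure at depth: P = rho * g * h (ignoring the ~1 atm at
# the surface). The density is a rough average for seawater; the exact
# figure varies with depth, temperature and salinity.
rho = 1030.0    # kg/m^3, approximate mean seawater density
g = 9.81        # m/s^2
h = 2960.0      # m, the depth quoted for the seawater critical point

P_pa = rho * g * h
print(f"P ~ {P_pa/1e5:.0f} bar at {h:.0f} m")   # -> about 299 bar
```

The ~299 bar result matches the 298.5 bar critical pressure cited for ~2960 m, and scaling the same relation to the ~5000 m Cayman Trough sites gives roughly 500 bar.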
Previously, benthic oceanographers assumed that vent organisms were dependent on marine snow, as other deep-sea organisms are. This would leave them dependent on plant life, and thus on the sun. Some hydrothermal vent organisms do consume this "rain", but on such a diet alone, life would be very sparse. Compared to the surrounding sea floor, however, hydrothermal vent zones have a density of organisms 10,000 to 100,000 times greater.

Hydrothermal vent communities are able to sustain such vast amounts of life because vent organisms depend on chemosynthetic bacteria for food. The water from the hydrothermal vent is rich in dissolved minerals and supports a large population of chemoautotrophic bacteria. These bacteria use sulfur compounds, particularly hydrogen sulfide, a chemical highly toxic to most known organisms, to produce organic material through the process of chemosynthesis. The ecosystem so formed is reliant upon the continued existence of the hydrothermal vent field as the primary source of energy, which differs from most surface life on Earth, which is based on solar energy. However, although it is often said that these communities exist independently of the sun, some of the organisms are actually dependent upon oxygen produced by photosynthetic organisms, while others are anaerobic.

The chemosynthetic bacteria grow into a thick mat which attracts other organisms, such as amphipods and copepods, which graze upon the bacteria directly. Larger organisms, such as snails, shrimp, crabs, tube worms, fish (especially eelpout, cutthroat eel, ophidiiforms and Symphurus thermophilus), and octopuses (notably Vulcanoctopus hydrothermalis), form a food chain of predator and prey relationships above the primary consumers. The main families of organisms found around seafloor vents are annelids, pogonophorans, gastropods, and crustaceans, with large bivalves, vestimentiferan worms, and "eyeless" shrimp making up the bulk of nonmicrobial organisms.

Siboglinid tube worms, which may grow to over 2 m (6.6 ft) tall in the largest species, often form an important part of the community around a hydrothermal vent. They have no mouth or digestive tract; like parasitic worms, they absorb nutrients produced by the bacteria in their tissues. About 285 billion bacteria are found per ounce of tubeworm tissue. Tubeworms have red plumes which contain hemoglobin. The hemoglobin binds hydrogen sulfide and transfers it to the bacteria living inside the worm; in return, the bacteria nourish the worm with carbon compounds. Two of the species that inhabit hydrothermal vents are Tevnia jerichonana and Riftia pachyptila. One discovered community, dubbed "Eel City", consists predominantly of the eel Dysommina rugosa. Though eels are not uncommon, invertebrates typically dominate hydrothermal vents. Eel City is located near Nafanua volcanic cone, American Samoa.

Other examples of the unique fauna that inhabit this ecosystem are the scaly-foot gastropod Chrysomallon squamiferum, a species of snail with a foot reinforced by scales made of iron and organic materials, and the Pompeii worm Alvinella pompejana, which is capable of withstanding temperatures up to 80 °C (176 °F). By 1993, more than 100 gastropod species were already known to occur at hydrothermal vents. Over 300 new species have been discovered at hydrothermal vents, many of them "sister species" to others found in geographically separated vent areas.
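The sulfide-driven chemosynthesis described above is often summarized by a single net reaction in which the bacteria oxidize hydrogen sulfide to fix carbon. The schematic form below is a commonly quoted textbook simplification of several coupled reactions, not an equation taken from this article; CH2O stands for fixed carbohydrate:

```latex
% Simplified net reaction for sulfide-driven chemosynthesis
% (textbook schematic; CH2O denotes fixed carbohydrate).
\[
\mathrm{CO_2 + 4\,H_2S + O_2 \;\longrightarrow\; CH_2O + 4\,S + 3\,H_2O}
\]
```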
It has been proposed that before the North American plate overrode the mid-ocean ridge, there was a single biogeographic vent region in the eastern Pacific. The subsequent barrier to travel began the evolutionary divergence of species in different locations. The examples of convergent evolution observed between distinct hydrothermal vents are seen as major support for the theory of natural selection and of evolution as a whole.

Although life is very sparse at these depths, black smokers are the centers of entire ecosystems. Sunlight is nonexistent, so many organisms, such as archaea and extremophiles, convert the heat, methane, and sulfur compounds provided by black smokers into energy through a process called chemosynthesis. More complex life forms, such as clams and tubeworms, feed on these organisms. The organisms at the base of the food chain also deposit minerals into the base of the black smoker, thereby completing the life cycle.

A species of phototrophic bacterium has been found living near a black smoker off the coast of Mexico at a depth of 2,500 m (8,200 ft). No sunlight penetrates that far into the waters. Instead, the bacteria, part of the family Chlorobiaceae, use the faint glow from the black smoker for photosynthesis. This is the first organism discovered in nature to use a light other than sunlight exclusively for photosynthesis.

New and unusual species are constantly being discovered in the neighborhood of black smokers. The Pompeii worm was found in the 1980s, and the scaly-foot gastropod in 2001 during an expedition to the Indian Ocean's Kairei hydrothermal vent field. The latter uses iron sulfides (pyrite and greigite) for the structure of its dermal sclerites (hardened body parts) instead of calcium carbonate. The extreme pressure of 2500 m of water (approximately 25 megapascals or 250 atmospheres) is thought to play a role in stabilizing iron sulfide for biological purposes. This armor plating probably serves as a defense against the venomous radula (teeth) of predatory snails in that community.

Although the discovery of hydrothermal vents is a relatively recent event in the history of science, its importance has given rise to, and supported, new biological and bio-atmospheric theories.

The deep hot biosphere

At the beginning of his 1992 paper "The Deep Hot Biosphere", Thomas Gold referred to ocean vents in support of his theory that the lower levels of the Earth are rich in living biological material that finds its way to the surface. He further expanded his ideas in the book The Deep Hot Biosphere. An article on abiogenic hydrocarbon production in the February 2008 issue of the journal Science used data from experiments at the Lost City hydrothermal field to report how the abiotic synthesis of low-molecular-mass hydrocarbons from mantle-derived carbon dioxide may occur in the presence of ultramafic rocks, water, and moderate amounts of heat.

Hydrothermal origin of life

Günter Wächtershäuser proposed the iron-sulfur world theory, suggesting that life might have originated at hydrothermal vents. Wächtershäuser proposed that an early form of metabolism predated genetics; by metabolism he meant a cycle of chemical reactions that release energy in a form that can be harnessed by other processes.
It has been proposed that amino acid synthesis could have occurred deep in the Earth's crust, and that these amino acids were subsequently carried up with hydrothermal fluids into cooler waters, where lower temperatures and the presence of clay minerals would have fostered the formation of peptides and protocells. This is an attractive hypothesis because of the abundance of CH4 (methane) and NH3 (ammonia) in hydrothermal vent regions, a condition that was not provided by the Earth's primitive atmosphere. A major limitation of this hypothesis is the lack of stability of organic molecules at high temperatures, but some have suggested that life would have originated outside the zones of highest temperature. There are numerous species of extremophiles and other organisms currently living immediately around deep-sea vents, suggesting that this is indeed a possible scenario. Experimental research and computer modeling indicate that the surfaces of mineral particles inside hydrothermal vents have catalytic properties similar to those of enzymes and are able to create simple organic molecules, such as methanol (CH3OH) and formic acid (HCO2H), out of the CO2 dissolved in the water.

In 1949, a deep-water survey reported anomalously hot brines in the central portion of the Red Sea. Later work in the 1960s confirmed the presence of hot, 60 °C (140 °F), saline brines and associated metalliferous muds. The hot solutions were emanating from an active subseafloor rift. The highly saline character of the waters was not hospitable to living organisms. The brines and associated muds are currently under investigation as a source of mineable precious and base metals.

The chemosynthetic ecosystems surrounding submarine hydrothermal vents were discovered along the Galapagos Rift, a spur of the East Pacific Rise, in 1977 by a group of marine geologists led by Richard Von Herzen and Robert Ballard of the Woods Hole Oceanographic Institution (WHOI), using the DSV Alvin, an ONR research submersible operated by WHOI. In 1979, a team of biologists led by J. Frederick Grassle, at the time at WHOI, returned to the same location to investigate the biological communities discovered two years earlier. Peter Lonsdale had published the first scientific paper on hydrothermal vent life in 1977.

In 2005, Neptune Resources NL, a mineral exploration company, applied for and was granted 35,000 km² of exploration rights over the Kermadec Arc in New Zealand's Exclusive Economic Zone, to explore for seafloor massive sulfide deposits, a potential new source of lead-zinc-copper sulfides formed from modern hydrothermal vent fields. The discovery of a vent in the Pacific Ocean offshore of Costa Rica, named the Medusa hydrothermal vent field (after the serpent-haired Medusa of Greek mythology), was announced in April 2007.

The Ashadze hydrothermal field (13°N on the Mid-Atlantic Ridge, elevation −4200 m) was the deepest known high-temperature hydrothermal field until 2010, when a hydrothermal plume emanating from the Beebe site (elevation −5000 m) was detected by a group of scientists from the NASA Jet Propulsion Laboratory and the Woods Hole Oceanographic Institution. The site is located on the 110 km long, ultraslow-spreading Mid-Cayman Rise within the Cayman Trough. On 21 February 2013 the deepest known hydrothermal vents were discovered in the Caribbean at a depth of almost 5,000 metres (16,000 ft).
Hydrothermal vents tend to be distributed along the Earth's plate boundaries, although they may also be found at intra-plate locations such as hotspot volcanoes. As of 2009 there were approximately 500 known active submarine hydrothermal vent fields, about half of them visually observed at the seafloor and the other half suspected from water-column indicators and/or seafloor deposits. The InterRidge program office hosts a global database of the locations of known active submarine hydrothermal vent fields.

Hydrothermal vents, in some instances, have led to the formation of exploitable mineral resources via deposition of seafloor massive sulfide deposits. The Mount Isa orebody in Queensland, Australia, is an excellent example. Many hydrothermal vents are rich in cobalt, gold, copper, and rare earth metals essential for electronic components. Recently, mineral exploration companies, driven by the elevated price activity in the base metals sector during the mid-2000s, have turned their attention to extraction of mineral resources from hydrothermal fields on the seafloor. Significant cost reductions are, in theory, possible.

Two companies are currently in the late stages of preparing to mine seafloor massive sulfides (SMS). Nautilus Minerals is in the advanced stages of commencing extraction from its Solwara deposit in the Bismarck Archipelago, and Neptune Minerals is at an earlier stage with its Rumble II West deposit, located on the Kermadec Arc near the Kermadec Islands. Both companies propose using modified existing technology. In 2006, Nautilus Minerals, in partnership with Placer Dome (now part of Barrick Gold), succeeded in returning over 10 metric tons of mined SMS to the surface using modified drum cutters mounted on an ROV, a world first. In 2007, Neptune Minerals succeeded in recovering SMS sediment samples using a modified oil-industry suction pump mounted on an ROV, also a world first.

Potential seafloor mining has environmental impacts, including dust plumes from mining machinery affecting filter-feeding organisms, collapsing or reopening vents, methane clathrate release, and even sub-oceanic landslides. A large amount of work is currently being undertaken by both of the above-mentioned companies to ensure that the potential environmental impacts of seafloor mining are well understood and control measures are implemented before exploitation commences.

Attempts have been made in the past to exploit minerals from the seafloor. The 1960s and 70s saw a great deal of activity (and expenditure) in the recovery of manganese nodules from the abyssal plains, with varying degrees of success. This demonstrates, however, that recovery of minerals from the seafloor is possible, and has been possible for some time. Interestingly, mining of manganese nodules served as a cover story for the elaborate attempt by the CIA to raise the sunken Soviet submarine K-129, using the Glomar Explorer, a ship purpose-built for the task by Howard Hughes. The operation was known as Project Azorian, and the cover story of seafloor mining of manganese nodules may have spurred other companies to make the attempt.

The conservation of hydrothermal vents has been the subject of sometimes heated discussion in the oceanographic community for the last 20 years. It has been pointed out that those causing the most damage to these fairly rare habitats may be the scientists themselves.
There have been attempts to forge agreements over the behaviour of scientists investigating vent sites, but although there is an agreed code of practice, there is as yet no formal, legally binding international agreement.
"Abiogenic Hydrocarbon Production at Lost City Hydrothermal Field". Science 319 (5863): 604–7. doi:10.1126/science.1151194. PMID 18239121. - Wächtershäuser, G. (1990). "Evolution of the First Metabolic Cycles" (PDF). Proceedings of National Academy of Sciences 87 (1): 200–4. Bibcode:1990PNAS...87..200W. doi:10.1073/pnas.87.1.200. PMC 53229. PMID 2296579. - Tunnicliffe, V. (1991). "The Biology of Hydrothermal Vents: Ecology and Evolution". Oceanography and Marine Biology an Annual Review 29: 319–408. - Chemistry of seabed’s hot vents could explain emergence of life. Astrobiology Magazine 27 April 2015. - "Bio-inspired CO2 conversion by iron sulfide catalysts under sustainable conditions. (PDF) Nora H. de Leeuw, et. al. Chemical Communications, 2015, 51, 7501-7504. DOI: 10.1039/C5CC02078F. 24 March 2015. - Degens, E. T. (1969). Hot Brines and Recent Heavy Metal Deposits in the Red Sea. Springer-Verlag. - "Dive and Discover: Expeditions to the Seafloor". www.divediscover.whoi.edu. Retrieved 2016-01-04. - Lonsdale, P. (1977). "Clustering of suspension-feeding macrobenthos near abyssal hydrothermal vents at oceanic spreading centers". Deep Sea Research 24 (9): 857. Bibcode:1977DSR....24..857L. doi:10.1016/0146-6291(77)90478-7. - "New undersea vent suggests snake-headed mythology" (Press release). EurekAlert!. 18 April 2007. Retrieved 2007-04-18. - "Beebe". Interridge Vents Database. - German, C. R.; et al. (2010). "Diverse styles of submarine venting on the ultraslow spreading Mid-Cayman Rise" (PDF). Proceedings of the National Academy of Sciences 107 (32): 14020–5. Bibcode:2010PNAS..10714020G. doi:10.1073/pnas.1009205107. PMC 2922602. PMID 20660317. Retrieved 2010-12-31. Lay summary – SciGuru (11 October 2010). - "Deepest undersea vents discovered by UK team". BBC. 21 February 2013. Retrieved 21 February 2013. - Broad, William J. (2016-01-12). "The 40,000-Mile Volcano". The New York Times. ISSN 0362-4331. Retrieved 2016-01-17. - Beaulieu, S. E.; Baker, E. T.; German, C. R.; Maffei, A. R. (2013). "An authoritative global database for active submarine hydrothermal vent fields". Geochemistry Geophysics Geosystems 14: 4892–4905. doi:10.1002/2013GC004998. - Perkins, W. G. (1984). "Mount Isa silica dolomite and copper orebodies; the result of a syntectonic hydrothermal alteration system". Economic Geology 79 (4): 601. doi:10.2113/gsecongeo.79.4.601. - We Are About to Start Mining Hydrothermal Vents on the Ocean Floor. Nautilus; Brandon Keim. 12 September 2015. - "The dawn of deep ocean mining". The All I Need. 2006. - "Nautilus Outlines High Grade Au - Cu Seabed Sulphide Zone" (Press release). Nautilus Minerals. 25 May 2006. - "Neptune Minerals". Retrieved August 2, 2012. - Birney, K.; et al. "Potential Deep-Sea Mining of Seafloor Massive Sulfides: A case study in Papua New Guinea" (PDF). University of California, Santa Barbara, B. - "Treasures from the deep". Chemistry World (Royal Society of Chemistry). January 2007. - Devey, C.W.; Fisher, C.R.; Scott, S. (2007). "Responsible Science at Hydrothermal Vents" (PDF). Oceanography 20 (1): 162–72. doi:10.5670/oceanog.2007.90. - Johnson, M. (2005). "Oceans need protection from scientists too". Nature 433 (7022): 105. Bibcode:2005Natur.433..105J. doi:10.1038/433105a. PMID 15650716. - Johnson, M. (2005). "Deepsea vents should be world heritage sites". MPA News 6: 10. - Tyler, P.; German, C.; Tunnicliff, V. (2005). "Biologists do not pose a threat to deep-sea vents". Nature 434 (7029): 18. Bibcode:2005Natur.434...18T. doi:10.1038/434018b. PMID 15744272. 
https://en.wikipedia.org/wiki/Black_smoker
4.1875
How to identify parallel lines, a line parallel to a plane, and two parallel planes.
How to find the angle between planes, and how to determine if two planes are parallel or perpendicular.
How to compute the sum of two vectors or the product of a scalar and a vector.
How to write an equation for the coordinate planes or any plane that is parallel to one.
How to find a vector normal (perpendicular) to a plane given an equation for the plane.
Understanding the differences between vectors and scalar quantities.
How to form sentences with parallel structure.
How resistors in parallel affect current flow.
How capacitors in parallel affect current flow.
How to plot complex numbers on the complex plane.
How to determine whether two lines in space are parallel or perpendicular.
How to take the converse of the parallel lines theorem.
How to mark parallel lines, how to show lines are parallel, and how to compare skew and parallel lines.
How to find additive and multiplicative inverses.
How to describe and label point, line, and plane.
How to define coplanar and collinear.
Vocabulary of multiples and least common multiples.
https://www.brightstorm.com/tag/scalar-multiple-parallel-planes/
4.21875
States of matter

Have you ever baked—or purchased—a loaf of bread, muffins or cupcakes and admired the fluffy final product? If so, you have appreciated the work of expanding gases! They are everywhere—from the kitchen to the cosmos. You’ve sampled their pleasures every time you’ve eaten a slice of bread, bitten into a cookie or sipped a soda. In this science activity you’ll capture a gas in a stretchy container you’re probably pretty familiar with—a balloon! This will let you observe how gases expand and contract as the temperature changes.

Everything in the world around you is made up of matter, including an inflated balloon and what’s inside of it. Matter comes in four different forms, known as states, which go (generally) from lowest to highest energy: solids, liquids, gases and plasmas. Gases, such as the air or helium inside a balloon, take the shape of the containers they’re in. They spread out so that the space is filled up evenly with gas molecules. The gas molecules are not connected. They move in a straight line until they bounce into another gas molecule or hit the container’s wall, and then they rebound and continue in another direction until they hit something else. The motion energy of the gas molecules in a container, averaged over all the molecules, is called the average kinetic energy. This average kinetic (motional) energy changes in response to temperature. When gas molecules are warmed, their average kinetic energy also increases. This means they move faster and have more frequent and harder collisions inside of the balloon. When cooled, the kinetic energy of the gas molecules decreases, meaning they move more slowly and have less frequent and weaker collisions.

- Freezer with some empty space
- Two latex balloons that will inflate to approximately nine to 12 inches
- Piece of string, at least 20 inches long
- Permanent marker
- Cloth tape measure. (A regular tape measure or ruler can also work, but a cloth tape measure is preferable.)
- Scrap piece of paper and a pen or pencil
- Clock or timer
- A helper
- Make sure your freezer has enough space to easily fit an inflated balloon inside. The balloon should not be smushed or squeezed at all. If you need to move food to make space, be sure to get permission from anybody who stores food in the freezer. Also make sure to avoid any pointy objects or parts of the freezer.
- Blow up a balloon until it is mostly—but not completely—full. Then carefully tie it off with a knot. With your helper assisting you, measure the circumference of the widest part of the balloon using a cloth tape measure or a piece of string (and then measure the string against a tape measure). What is the balloon’s circumference?
- Inflate another balloon so it looks about the same size as the first balloon, but don’t tie it off yet. Pinch the opening closed between your thumb and finger so the air cannot escape. Have your helper measure the circumference of the balloon, then adjust the amount of air inside until it is within about half an inch or less (plus or minus) of the first balloon’s circumference (by blowing in more air, or letting a little escape). Then tie off the second balloon.
- Turn one of the balloons so you can look at the top of it. At the very top it should have a slightly darker spot. Using the permanent marker, carefully make a small spot in the center of the darker spot.
- Then take a cloth tape measure (or use a piece of string and a regular tape measure or ruler) and carefully make two small lines with the permanent marker at the top of the balloon that are two and one half inches away from one another, with the darker spot as the midpoint. To do this you can center the tape measure so that its one-and-one-quarter-inch mark is on the small spot you made and then make a line at the zero and two-and-one-half-inch points.
- Repeat this with the other balloon so that it also has lines that are two and one half inches apart on its top.
- Somewhere on one balloon write the number “1” and on the other balloon write the number “2.”
- Because it can be difficult to draw exact lines on a balloon with a thick permanent marker, now measure the exact distance between the two lines you drew on each balloon, measuring from the outside of both lines. (For example, the distance might be two and three eighths inches or two and five eighths inches.) Write this down for each balloon (with the balloon’s number) on a scrap piece of paper. Why do you think it’s important to be so exact when measuring the distances?
- Put balloon number 1 in the freezer in the area you cleared out for it. Leave it in the freezer for 45 minutes. Do not disturb it or open the freezer during this time. How do you think the size of the balloon will change from being in the freezer?
- During this time, leave balloon number 2 somewhere out at room temperature (not in direct sunlight or near a hot lamp).
- After balloon number 1 has been in the freezer for 45 minutes, bring your cloth tape measure (or piece of string and regular tape measure) to the freezer and, with the balloon still in the freezer (but with the freezer door open to let you access the balloon), quickly measure the distance between the two lines as you did before. Did the distance between the two lines change? If so, how did it change? What does this tell you about whether the size of the balloon changed? Why do you think this is?
- Then measure the distance between the two lines on balloon number 2, which stayed at room temperature. Did the distance between the two lines change? If so, how did it change? How did the balloon’s size change? Why do you think this is?
- Overall, how did the balloon change size when placed in the freezer? What do your results tell you about how gases expand and contract as temperature changes?
- Extra: After taking balloon number 1 out of the freezer, leave it at room temperature for at least 45 minutes to let it warm up. Then remeasure the distance between the lines. How has the balloon changed size after warming up, if it changed at all?
- Extra: Try this activity again, but instead of putting balloon number 1 in the freezer, put it in a hot place for 45 minutes, such as outdoors on a hot day or inside a car on a warm day. (Just make sure the balloon is not in direct sunlight or near a hot lamp, as this can deflate the balloon by letting the gas escape.) Does the balloon change size when put in a hot place? If so, how?
- Extra: In this activity you used air from your lungs, but other gases might behave differently. You could try this activity again, but this time fill the balloons with helium. How does using helium affect how the balloon changes size when placed in a freezer?

Observations and results

Did balloon number 1, which was placed in the freezer, shrink a little compared with balloon number 2, which stayed at room temperature?
You should have seen that when you put the balloon in the freezer, the distance between the lines decreased a little, from about two and a half inches to two and a quarter (or by a quarter inch, about 10 percent). The balloon shrank! The distance between the lines on the balloon kept at room temperature should have pretty much stayed the same (or decreased very slightly), meaning that the balloon shouldn’t have really changed size.

The frozen balloon shrank because the average kinetic energy of the gas molecules in a balloon decreases when the temperature decreases. This makes the molecules move more slowly and have less frequent and weaker collisions with the inside wall of the balloon, which causes the balloon to shrink a little. But if you let the frozen balloon warm up, you would find that it gets bigger again, as big as the balloon that you left at room temperature the whole time. This is because the average kinetic energy would increase due to the warmer temperature, making the molecules move faster and hit the inside of the balloon harder and more frequently again.

More to explore
Looking for a Gas, from Rader’s Chem4Kids.com
Gases around Us, from BBC
Balloon Morphing: How Gases Contract and Expand, from Science Buddies
Racing to Win That Checkered Flag: How Do Gases Help?, from Science Buddies
This activity brought to you in partnership with Science Buddies
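To put rough numbers on the shrinkage described above, one can apply Charles's law (volume proportional to absolute temperature at constant pressure, since average kinetic energy scales with absolute temperature) together with the fact that a balloon's linear dimensions scale as the cube root of its volume. Below is a minimal Python sketch; the 22 °C room, the −18 °C freezer, and the assumptions of ideal-gas behavior at constant pressure are the sketch's own, not figures from the article:

```python
# Estimate how much a balloon's linear size shrinks in a freezer,
# using Charles's law: V1 / T1 = V2 / T2 at constant pressure.
# Assumed temperatures (not from the article): 22 C room, -18 C freezer.

T_ROOM_K = 22 + 273.15      # room temperature in kelvin
T_FREEZER_K = -18 + 273.15  # typical freezer temperature in kelvin

volume_ratio = T_FREEZER_K / T_ROOM_K   # V2 / V1
linear_ratio = volume_ratio ** (1 / 3)  # lengths scale as V^(1/3)

start_distance_in = 2.5                 # distance between the marks, inches
print(f"volume ratio: {volume_ratio:.3f}")   # ~0.864
print(f"linear ratio: {linear_ratio:.3f}")   # ~0.953
print(f"new distance: {linear_ratio * start_distance_in:.2f} in")  # ~2.38 in
```

This ideal-gas estimate predicts roughly a 5 percent linear shrink, somewhat less than the roughly 10 percent observed; the difference is plausibly due to the balloon's elastic skin tension and the precision of hand measurements.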
http://www.scientificamerican.com/article/size-changing-science-how-gases-contract-and-expand/?mobileFormat=true
4.25
What will be the fate of our moon? Will it remain in a stable orbit, crash back into Earth or drift off into space?

The Moon is gradually receding from the Earth, at a rate of about 4 cm per year. This is caused by a transfer of Earth's rotational momentum to the Moon's orbital momentum as tidal friction slows the Earth's rotation. That increasing distance means a longer orbital period, or month, as well.

To picture what is happening, imagine yourself riding a bicycle on a track built around a merry-go-round, riding in the same direction that it is turning. If you have a lasso and rope one of the horses, you will gain speed and the merry-go-round will lose some. In this analogy, you and your bike represent the Moon, the merry-go-round is the rotating Earth, and your lasso is gravity. In orbital mechanics, a gain in speed results in a higher orbit.

The slowing rotation of the Earth results in a longer day as well as a longer month. Once the length of a day equals the length of a month, the tidal friction mechanism would cease. (That is, once your speed on the track matches the speed of the horses, you can't gain any more speed with your lasso trick.) That has been projected to happen once the day and month both equal about 47 of our current days, billions of years in the future. If the Earth and Moon still exist by then, the distance will have increased to about 135% of its current value.

Paul Walorski, B.A., Part-time Physics Instructor

'All of us are truly and literally a little bit of stardust.'
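As a rough order-of-magnitude check on the timescale involved, one can ask how long a 35 percent increase in the Earth-Moon distance would take at the quoted recession rate. The minimal Python sketch below assumes today's mean Earth-Moon distance of about 384,400 km (a standard value, not given in the answer) and, unrealistically, a constant 4 cm/yr rate; since the actual rate changes as the tides evolve, this only bounds the answer:

```python
# Order-of-magnitude estimate: years for the Moon to recede to 135%
# of its current distance at a constant 4 cm/yr.
# Assumptions (not from the answer above): mean Earth-Moon distance of
# 384,400 km; in reality the recession rate varies over time.

CURRENT_DISTANCE_KM = 384_400
RATE_KM_PER_YEAR = 4e-5          # 4 cm/yr expressed in km/yr

extra_distance_km = 0.35 * CURRENT_DISTANCE_KM
years = extra_distance_km / RATE_KM_PER_YEAR
print(f"{years:.2e} years")      # ~3.4e9, i.e. a few billion years
```

The result, a few billion years, is consistent with the "billions of years in the future" stated in the answer.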
http://www.physlink.com/Education/AskExperts/ae429.cfm