Sediment transport is the movement of solid particles (sediment), typically due to a combination of gravity acting on the sediment, and/or the movement of the fluid in which the sediment is entrained. Sediment transport occurs in natural systems where the particles are clastic rocks (sand, gravel, boulders, etc.), mud, or clay; the fluid is air, water, or ice; and the force of gravity acts to move the particles along the sloping surface on which they are resting. Sediment transport due to fluid motion occurs in rivers, oceans, lakes, seas, and other bodies of water due to currents and tides. Transport is also caused by glaciers as they flow, and on terrestrial surfaces under the influence of wind. Sediment transport due only to gravity can occur on sloping surfaces in general, including hillslopes, scarps, cliffs, and the continental shelf—continental slope boundary.
Sediment transport is important in the fields of sedimentary geology, geomorphology, civil engineering and environmental engineering (see applications, below). Knowledge of sediment transport is most often used to determine whether erosion or deposition will occur, the magnitude of this erosion or deposition, and the time and distance over which it will occur.
Aeolian or eolian (depending on the parsing of æ) is the term for sediment transport by wind. This process results in the formation of ripples and sand dunes. Typically, the size of the transported sediment is fine sand (<1 mm) and smaller, because air is a fluid with low density and viscosity and therefore cannot exert much shear on its bed.
Aeolian sediment transport is common on beaches and in the arid regions of the world, because it is in these environments that vegetation does not prevent the presence and motion of fields of sand.
Wind-blown very fine-grained dust is capable of entering the upper atmosphere and moving across the globe. Dust from the Sahara is deposited on the Canary Islands and islands in the Caribbean, and dust from the Gobi desert has been deposited on the western United States. This sediment is important to the soil budget and ecology of several islands.
In geology, physical geography, and sediment transport, fluvial processes relate to flowing water in natural systems. This encompasses rivers, streams, periglacial flows, flash floods and glacial lake outburst floods. Sediment moved by water can be larger than sediment moved by air because water has both a higher density and viscosity. In typical rivers the largest carried sediment is of sand and gravel size, but larger floods can carry cobbles and even boulders.
Fluvial sediment transport can result in the formation of ripples and dunes, in fractal-shaped patterns of erosion, in complex patterns of natural river systems, and in the development of floodplains.
Coastal sediment transport takes place in near-shore environments due to the motions of waves and currents. At the mouths of rivers, coastal sediment and fluvial sediment transport processes mesh to create river deltas.
As glaciers move over their beds, they entrain and move material of all sizes. Glaciers can carry the largest sediment, and areas of glacial deposition often contain a large number of glacial erratics, many of which are several metres in diameter. Glaciers also pulverize rock into "glacial flour", which is so fine that it is often carried away by winds to create loess deposits thousands of kilometres afield. Sediment entrained in glaciers often moves approximately along the glacial flowlines, causing it to appear at the surface in the ablation zone.
In hillslope sediment transport, a variety of processes move regolith downslope. These include:
- Soil creep
- Tree throw
- Movement of soil by burrowing animals
- Slumping and landsliding of the hillslope
These processes generally combine to give the hillslope a profile that looks like a solution to the diffusion equation, where the diffusivity is a parameter that relates to the ease of sediment transport on the particular hillslope. For this reason, the tops of hills generally have a parabolic convex-up profile, which grades into a concave-up profile around valleys.
As hillslopes steepen, however, they become more prone to episodic landslides and other mass wasting events. Therefore, hillslope processes are better described by a nonlinear diffusion equation in which classic diffusion dominates for shallow slopes and erosion rates go to infinity as the hillslope reaches a critical angle of repose.
Large masses of material are moved in debris flows, hyperconcentrated mixtures of mud, clasts that range up to boulder-size, and water. Debris flows move as granular flows down steep mountain valleys and washes. Because they transport sediment as a granular mixture, their transport mechanisms and capacities scale differently from those of fluvial systems.
Sediment transport is applied to solve many environmental, geotechnical, and geological problems. Measuring or quantifying sediment transport or erosion is therefore important for coastal engineering. Several sediment erosion devices have been designed in order to quantify sediment erosion (e.g., the Particle Erosion Simulator (PES)). One such device, also referred to as the BEAST (Benthic Environmental Assessment Sediment Tool), has been calibrated in order to quantify rates of sediment erosion.
Movement of sediment is important in providing habitat for fish and other organisms in rivers. Therefore, managers of highly regulated rivers, which are often sediment-starved due to dams, are often advised to stage short floods to refresh the bed material and rebuild bars. This is also important, for example, in the Grand Canyon of the Colorado River, to rebuild shoreline habitats also used as campsites.
Sediment discharge into a reservoir formed by a dam forms a reservoir delta. This delta will fill the basin, and eventually, either the reservoir will need to be dredged or the dam will need to be removed. Knowledge of sediment transport can be used to properly plan to extend the life of a dam.
Geologists can use inverse solutions of transport relationships to understand flow depth, velocity, and direction, from sedimentary rocks and young deposits of alluvial materials.
Flow in culverts, over dams, and around bridge piers can cause erosion of the bed. This erosion can damage the environment and expose or unsettle the foundations of the structure. Therefore, good knowledge of the mechanics of sediment transport in a built environment is important for civil and hydraulic engineers.
When suspended sediment transport is increased by human activities, causing environmental problems such as the filling of channels, it is called siltation, after the grain-size fraction that dominates the process.
Initiation of motion
For a fluid to begin transporting sediment that is currently at rest on a surface, the boundary (or bed) shear stress τb exerted by the fluid must exceed the critical shear stress τc for the initiation of motion of grains at the bed. This basic criterion for the initiation of motion can be written as:

τb ≥ τc

This is typically represented by a comparison between a dimensionless shear stress (τ∗) and a dimensionless critical shear stress (τ∗c). The nondimensionalization is in order to compare the driving forces of particle motion (shear stress) to the resisting forces that would make it stationary (particle density and size). This dimensionless shear stress, τ∗, is called the Shields parameter and is defined as:

τ∗ = τb / ((ρs − ρ) g D)

where ρs is the sediment density, ρ is the fluid density, g is the acceleration due to gravity, and D is the grain diameter. And the new equation to solve becomes:

τ∗ ≥ τ∗c
The equations included here describe sediment transport for clastic, or granular, sediment. They do not work for clays and muds because these types of floccular sediments do not fit the geometric simplifications in these equations, and also interact through electrostatic forces. The equations were also designed for fluvial sediment transport of particles carried along in a liquid flow, such as that in a river, canal, or other open channel.
Only one size of particle is considered in this equation. However, river beds are often formed by a mixture of sediment of various sizes. In the case of partial motion, where only a part of the sediment mixture moves, the river bed becomes enriched in large gravel as the smaller sediments are washed away. The smaller sediments present under this layer of large gravel have a lower probability of movement, and total sediment transport decreases. This is called the armouring effect. Other forms of armouring of sediment or decreasing rates of sediment erosion can be caused by carpets of microbial mats under conditions of high organic loading.
Critical shear stress
The Shields diagram empirically shows how the dimensionless critical shear stress (i.e. the dimensionless shear stress required for the initiation of motion) is a function of a particular form of the particle Reynolds number, or Reynolds number related to the particle. This allows us to rewrite the criterion for the initiation of motion in terms of only needing to solve for a specific version of the particle Reynolds number, which we call Re∗.
This equation can then be solved by using the empirically derived Shields curve to find τ∗c as a function of a specific form of the particle Reynolds number called the boundary Reynolds number. The mathematical solution of the equation was given by Dey.
Particle Reynolds Number
In general, a particle Reynolds number has the form:

Rep = U D / ν

where U is a characteristic particle velocity, D is the grain diameter (a characteristic particle size), and ν is the kinematic viscosity, which is given by the dynamic viscosity, μ, divided by the fluid density, ρ.

The specific particle Reynolds number of interest is called the boundary Reynolds number, and it is formed by replacing the velocity term in the particle Reynolds number by the shear velocity, u∗, which is a way of rewriting shear stress in terms of velocity:

u∗ = √(τb/ρ) = κ z (∂u/∂z)

where τb is the bed shear stress (described below), and κ is the von Kármán constant, where κ = 0.407.

The particle Reynolds number is therefore given by:

Re∗ = u∗ D / ν
Bed shear stress
The boundary Reynolds number can be used with the Shields diagram to empirically solve the equation

τ∗c = f(Re∗),

which solves the right-hand side of the equation

τ∗ ≥ τ∗c.

In order to solve the left-hand side, expanded as

τ∗ = τb / ((ρs − ρ) g D),

we must find the bed shear stress, τb. There are several ways to solve for the bed shear stress. First, we develop the simplest approach, in which the flow is assumed to be steady and uniform and reach-averaged depth and slope are used. Due to the difficulty of measuring shear stress in situ, this method is also one of the most commonly used. This method is known as the depth-slope product.

For a river undergoing approximately steady, uniform equilibrium flow, of approximately constant depth h and slope θ over the reach of interest, and whose width is much greater than its depth, the bed shear stress is given by some momentum considerations stating that the gravity force component in the flow direction equals exactly the friction force. For a wide channel, it yields:

τb = ρ g h sin(θ)

For shallow slopes, which are found in almost all natural lowland streams, the small-angle formula shows that sin(θ) is approximately equal to tan(θ), which is given by S, the slope. Rewritten with this:

τb = ρ g h S
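The depth-slope product is simple enough to compute directly. The following sketch (Python; the function name and example values are ours, not from the source) evaluates it for a hypothetical reach:

```python
RHO_WATER = 1000.0  # fluid density [kg/m^3]
G = 9.81            # gravitational acceleration [m/s^2]

def bed_shear_stress(depth_m: float, slope: float) -> float:
    """Reach-averaged bed shear stress tau_b = rho * g * h * S [Pa].

    Valid for approximately steady, uniform flow in a wide channel
    with a shallow slope (so sin(theta) ~ tan(theta) = S).
    """
    return RHO_WATER * G * depth_m * slope

# Example: a 2 m deep reach dropping 1 m per kilometre (S = 0.001)
print(bed_shear_stress(2.0, 0.001))  # ~19.6 Pa
```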
Shear velocity, velocity, and friction factor
For the steady case, by extrapolating the depth-slope product and the equation for shear velocity:

τb = ρ g h S,
u∗ = √(τb/ρ),

we can see that the depth-slope product can be rewritten as:

τb = ρ u∗²

u∗ is related to the mean flow velocity, ū, through the generalized Darcy-Weisbach friction factor, Cf, which is equal to the Darcy-Weisbach friction factor divided by 8 (for mathematical convenience). Inserting this friction factor,

τb = ρ Cf ū²
For all flows that cannot be simplified as a single-slope infinite channel (as in the depth-slope product, above), the bed shear stress can be found locally by applying the Saint-Venant equations for continuity, which consider accelerations within the flow.
The criterion for the initiation of motion, established earlier, states that

τ∗ ≥ τ∗c

In this equation,
- τ∗ = τb / ((ρs − ρ) g D), and therefore
- τb ≥ τ∗c (ρs − ρ) g D.
- τ∗c is a function of the boundary Reynolds number, a specific type of particle Reynolds number.

For a particular particle Reynolds number, τ∗c will be an empirical constant given by the Shields curve or by another set of empirical data (depending on whether or not the grain size is uniform).

Therefore, the final equation that we seek to solve is:

τb ≥ τ∗c (ρs − ρ) g D
We make several assumptions to provide an example that will allow us to bring the above form of the equation into a solved form.
First, we assume that a good approximation of reach-averaged shear stress is given by the depth-slope product. We can then rewrite the equation as

ρ g h S ≥ τ∗c (ρs − ρ) g D

Moving and re-combining the terms, we obtain:

(h S) / D ≥ τ∗c ((ρs − ρ)/ρ) = τ∗c R

where R is the submerged specific gravity of the sediment.
We then make our second assumption, which is that the particle Reynolds number is high. This is typically applicable to particles of gravel-size or larger in a stream, and means that the critical shear stress is a constant. The Shields curve shows that for a bed with a uniform grain size,

τ∗c = 0.06

Later researchers have shown that a value closer to

τ∗c = 0.03

is often more appropriate. Therefore, we will simply carry τ∗c through symbolically and insert both values at the end.

The equation now reads:

h S = τ∗c R D

This final expression shows that the product of the channel depth and slope is equal to the Shields criterion times the submerged specific gravity of the particles times the particle diameter.
For a typical situation, such as quartz-rich sediment (ρs = 2650 kg/m³) in water (ρ = 1000 kg/m³), the submerged specific gravity is equal to 1.65.

Plugging this into the equation above,

h S = 1.65 τ∗c D

For the Shields criterion of τ∗c = 0.06, 0.06 × 1.65 = 0.099, which is well within standard margins of error of 0.1. Therefore, for a uniform bed,

h S = 0.1 D

For these situations, the product of the depth and slope of the flow should be about 10% of the grain diameter.

The mixed-grain-size bed value is τ∗c = 0.03, which is supported by more recent research as being more broadly applicable because most natural streams have mixed grain sizes. Using this value, and changing D to D50 ("50" for the 50th percentile, or the median grain size, as we are now looking at a mixed-grain-size bed), the equation becomes:

h S = 0.05 D50

That is, the depth times the slope should be about 5% of the median grain diameter in the case of a mixed-grain-size bed.
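As a worked illustration of these thresholds, here is a minimal Python sketch (our own naming and example values; the default τ∗c = 0.03 is the mixed-bed value discussed above) that checks the h·S criterion against a median grain size:

```python
R = 1.65  # submerged specific gravity of quartz in water

def motion_initiated(depth_m: float, slope: float, d50_m: float,
                     tau_star_c: float = 0.03) -> bool:
    """Approximate initiation-of-motion check: h*S >= tau*_c * R * D50.

    tau_star_c = 0.03 is the mixed-grain-size value; use 0.06 for a
    uniform bed, per the discussion above.
    """
    return depth_m * slope >= tau_star_c * R * d50_m

# 1 m deep flow on a slope of 0.002 over 2 cm median gravel:
print(motion_initiated(1.0, 0.002, 0.02))  # True (0.002 >= 0.00099)
```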
Modes of entrainment
The sediments entrained in a flow can be transported along the bed as bed load in the form of sliding and rolling grains, or in suspension as suspended load advected by the main flow. Some sediment materials may also come from the upstream reaches and be carried downstream in the form of wash load.
The location in the flow in which a particle is entrained is determined by the Rouse number, which depends on the density ρs and diameter d of the sediment particle, and on the density ρ and kinematic viscosity ν of the fluid, and determines in which part of the flow the sediment particle will be carried.

Here, the Rouse number is given by P:

P = ws / (κ u∗)

The term in the numerator is the (downwards) sediment settling velocity ws, which is discussed below. The upwards velocity on the grain is given as the product of the von Kármán constant, κ = 0.4, and the shear velocity, u∗.
|Mode of Transport|Rouse Number|
|---|---|
|Initiation of motion|>7.5|
|Bed load|>2.5, <7.5|
|Suspended load: 50% Suspended|>1.2, <2.5|
|Suspended load: 100% Suspended|>0.8, <1.2|
|Wash load|<0.8|
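The Rouse-number thresholds in the table translate directly into a small classifier. A sketch in Python (labels follow the table; the example values are hypothetical):

```python
KAPPA = 0.4  # von Karman constant

def rouse_number(w_s: float, u_star: float) -> float:
    """P = w_s / (kappa * u_star): settling velocity vs. upward turbulence."""
    return w_s / (KAPPA * u_star)

def transport_mode(p: float) -> str:
    """Classify the mode of transport using the thresholds tabulated above."""
    if p > 7.5:
        return "initiation of motion"
    if p > 2.5:
        return "bed load"
    if p > 1.2:
        return "suspended load: 50% suspended"
    if p > 0.8:
        return "suspended load: 100% suspended"
    return "wash load"

# Fine sand (w_s ~ 0.01 m/s) in an energetic flow (u_star = 0.05 m/s):
print(transport_mode(rouse_number(0.01, 0.05)))  # P = 0.5 -> wash load
```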
The settling velocity (also called the "fall velocity" or "terminal velocity") is a function of the particle Reynolds number. Generally, for small particles (laminar approximation), it can be calculated with Stokes' Law. For larger particles (turbulent particle Reynolds numbers), fall velocity is calculated with the turbulent drag law. Dietrich (1982) compiled a large amount of published data to which he empirically fit settling velocity curves. Ferguson and Church (2004) analytically combined the expressions for Stokes flow and a turbulent drag law into a single equation that works for all sizes of sediment, and successfully tested it against the data of Dietrich. Their equation is

ws = R g D² / (C1 ν + (0.75 C2 R g D³)^0.5)

In this equation ws is the sediment settling velocity, g is acceleration due to gravity, and D is mean sediment diameter. ν is the kinematic viscosity of water, which is approximately 1.0 × 10⁻⁶ m²/s for water at 20 °C.

C1 and C2 are constants related to the shape and smoothness of the grains.

|Constant|Smooth Spheres|Natural Grains: Sieve Diameters|Natural Grains: Nominal Diameters|Limit for Ultra-Angular Grains|
|---|---|---|---|---|
|C1|18|18|20|24|
|C2|0.4|1.0|1.1|1.2|

The expression for fall velocity can be simplified so that it can be solved only in terms of D. We use the sieve diameters for natural grains, take g = 9.8 m/s² and R = 1.65, and use the values given above for C1 (18) and C2 (1.0). From these parameters, the fall velocity is given by the expression:

ws = 16.17 D² / (1.8 × 10⁻⁵ + (12.1275 D³)^0.5)
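The full Ferguson-Church expression is also easy to evaluate numerically. A sketch (Python; constants default to the natural-grain sieve-diameter column above, and variable names are ours):

```python
import math

G = 9.8      # gravitational acceleration [m/s^2], as used above
R = 1.65     # submerged specific gravity (quartz in water)
NU = 1.0e-6  # kinematic viscosity of 20 C water [m^2/s]

def settling_velocity(d: float, c1: float = 18.0, c2: float = 1.0) -> float:
    """Ferguson-Church fall velocity [m/s] for grain diameter d [m].

    ws = R g D^2 / (C1 nu + sqrt(0.75 C2 R g D^3)); defaults are the
    natural-grain (sieve diameter) constants from the table above.
    """
    return R * G * d**2 / (c1 * NU + math.sqrt(0.75 * c2 * R * G * d**3))

# Stokes-like behaviour for silt/fine sand, drag-law behaviour for gravel:
for d in (1e-4, 1e-3, 1e-2):
    print(d, settling_velocity(d))
```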
In 1935, Filip Hjulström created the Hjulström curve, a graph which shows the relationship between the size of sediment and the velocity required to erode (lift it), transport it, or deposit it. The graph is logarithmic.
Åke Sundborg later modified the Hjulström curve to show separate curves for the movement threshold corresponding to several water depths, as is necessary if the flow velocity rather than the boundary shear stress (as in the Shields diagram) is used for the flow strength.
This curve retains little more than historical value nowadays, although its simplicity is still attractive. Among the drawbacks of this curve are that it does not take the water depth into account and, more importantly, that it does not show that sedimentation is caused by flow velocity deceleration and erosion is caused by flow acceleration. The dimensionless Shields diagram is now unanimously accepted for the initiation of sediment motion in rivers. Much work was done on river sediment transport formulae in the second half of the 20th century, and that work should be used in preference to Hjulström's curve, e.g. Meyer-Peter & Müller (1948), Engelund-Hansen (1967), Lefort (1991), Belleudy (2012).
Formulas to calculate sediment transport rate exist for sediment moving in several different parts of the flow. These formulas are often segregated into bed load, suspended load, and wash load. They may sometimes also be segregated into bed material load and wash load.
Bed load moves by rolling, sliding, and hopping (or saltating) over the bed, and moves at a small fraction of the fluid flow velocity. Bed load is generally thought to constitute 5-10% of the total sediment load in a stream, making it less important in terms of mass balance. However, the bed material load (the bed load plus the portion of the suspended load which comprises material derived from the bed) is often dominated by bed load, especially in gravel-bed rivers. This bed material load is the only part of the sediment load that actively interacts with the bed. As the bed load is an important component of that, it plays a major role in controlling the morphology of the channel.
Bed load transport rates are usually expressed as being related to excess dimensionless shear stress raised to some power. Excess dimensionless shear stress is a nondimensional measure of bed shear stress above the threshold for motion, (τ∗ − τ∗c).
Bed load transport rates may also be given by a ratio of bed shear stress to critical shear stress, which is equivalent in both the dimensional and nondimensional cases. This ratio is called the "transport stage" (φ) and is important in that it shows bed shear stress as a multiple of the value of the criterion for the initiation of motion.
When used for sediment transport formulae, this ratio is typically raised to a power.
The majority of the published relations for bedload transport are given in dry sediment weight per unit channel width, b ("breadth"):

qs = Qs / b
Due to the difficulty of estimating bed load transport rates, these equations are typically only suitable for the situations for which they were designed.
Notable bed load transport formulae
Meyer-Peter Müller and derivatives
The transport formula of Meyer-Peter and Müller, originally developed in 1948, was designed for well-sorted fine gravel at a transport stage of about 8. The formula uses the above nondimensionalization for shear stress,

τ∗ = τb / ((ρs − ρ) g D)

Their formula reads:

q∗s = 8 (τ∗ − τ∗c)^(3/2)

where q∗s is the Einstein nondimensionalization of the bed load transport rate per unit width. Their experimentally determined value for τ∗c is 0.047, and this is the third commonly used value for it (in addition to Parker's 0.03 and Shields' 0.06).
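In dimensional terms, q∗s can be converted back to a volumetric rate with the Einstein scaling qs = q∗s D √(R g D). A sketch of the Meyer-Peter and Müller relation under that convention (Python; variable names and the example values are ours):

```python
import math

G = 9.81
R = 1.65  # submerged specific gravity

def mpm_bedload(tau_star: float, d: float, tau_star_c: float = 0.047) -> float:
    """Meyer-Peter & Muller (1948): q*_s = 8 (tau* - tau*_c)^1.5,
    re-dimensionalized to a volumetric rate per unit width [m^2/s]."""
    excess = max(tau_star - tau_star_c, 0.0)  # no transport below threshold
    q_star = 8.0 * excess**1.5
    return q_star * d * math.sqrt(R * G * d)

# Gravel (D = 1 cm) at tau* = 0.1, roughly twice the MPM threshold:
print(mpm_bedload(0.1, 0.01))  # ~3.9e-4 m^2/s
```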
Wilcock and Crowe
In 2003, Peter Wilcock and Joanna Crowe (now Joanna Curran) published a sediment transport formula that works with multiple grain sizes across the sand and gravel range. Their formula works with surface grain size distributions, as opposed to older models which use subsurface grain size distributions (and thereby implicitly infer a surface grain sorting).
Their expression is more complicated than the basic sediment transport rules (such as that of Meyer-Peter and Müller) because it takes into account multiple grain sizes: this requires consideration of reference shear stresses for each grain size, the fraction of the total sediment supply that falls into each grain size class, and a "hiding function".
The "hiding function" takes into account the fact that, while small grains are inherently more mobile than large grains, on a mixed-grain-size bed, they may be trapped in deep pockets between large grains. Likewise, a large grain on a bed of small particles will be stuck in a much smaller pocket than if it were on a bed of grains of the same size. In gravel-bed rivers, this can cause "equal mobility", in which small grains can move just as easily as large ones. As sand is added to the system, it moves away from the "equal mobility" portion of the hiding function to one in which grain size again matters.
Their model is based on the transport stage, or ratio of bed shear stress to critical shear stress for the initiation of grain motion. Because their formula works with several grain sizes simultaneously, they define the critical shear stress for each grain size class, τc,i, to be equal to a "reference shear stress", τr,i.
They express their equations in terms of a dimensionless transport parameter, W∗i (with the "∗" indicating nondimensionality and the "i" indicating that it is a function of grain size):

W∗i = (R g qbi) / (Fi u∗³)

qbi is the volumetric bed load transport rate of size class i per unit channel width b. Fi is the proportion of size class i that is present on the bed.

They came up with two equations, depending on the transport stage, φ. For φ < 1.35:

W∗i = 0.002 φ^7.5

and for φ ≥ 1.35:

W∗i = 14 (1 − 0.894/φ^0.5)^4.5

This equation asymptotically reaches a constant value of W∗i = 14 as φ becomes large.
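A direct transcription of this piecewise relation (Python; the constants 0.002, 7.5, 14, 0.894, 4.5 and the break at φ = 1.35 are as commonly quoted for Wilcock and Crowe, 2003):

```python
def wilcock_crowe_wstar(phi: float) -> float:
    """Dimensionless transport W*_i as a function of transport stage
    phi = tau / tau_ri. Asymptotes to 14 as phi grows large."""
    if phi < 1.35:
        return 0.002 * phi**7.5
    return 14.0 * (1.0 - 0.894 / phi**0.5)**4.5

# The two branches meet (to ~3 significant figures) at phi = 1.35:
for phi in (0.5, 1.0, 1.35, 5.0, 50.0):
    print(phi, wilcock_crowe_wstar(phi))
```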
Wilcock and Kenworthy
In 2002, Peter Wilcock and T. A. Kenworthy, following Wilcock (1998), published a sediment bed-load transport formula that works with only two sediment fractions, i.e. sand and gravel fractions. Wilcock and Kenworthy recognized in their article that a mixed-sized sediment bed-load transport model using only two fractions offers practical advantages in terms of both computational and conceptual modeling, by taking into account the nonlinear effects of sand presence in gravel beds on the bed-load transport rate of both fractions. In fact, the two-fraction bed load formula contains a new ingredient with respect to that of Meyer-Peter and Müller, namely the proportion Fi of fraction i on the bed surface, where the subscript i represents either the sand (s) or gravel (g) fraction. The proportion Fs, as a function of sand content fs, physically represents the relative influence of the mechanisms controlling sand and gravel transport, associated with the change from a clast-supported to a matrix-supported gravel bed. Moreover, since fs spans between 0 and 1, phenomena that vary with fs include the relative size effects producing "hiding" of fine grains and "exposure" of coarse grains. The "hiding" effect takes into account the fact that, while small grains are inherently more mobile than large grains, on a mixed-grain-size bed they may be trapped in deep pockets between large grains. Likewise, a large grain on a bed of small particles will be stuck in a much smaller pocket than if it were on a bed of grains of the same size, which is the situation the Meyer-Peter and Müller formula refers to. In gravel-bed rivers, this can cause "equal mobility", in which small grains can move just as easily as large ones. As sand is added to the system, it moves away from the "equal mobility" portion of the hiding function to one in which grain size again matters.
Their model is based on the transport stage, i.e. φ, or the ratio of bed shear stress to critical shear stress for the initiation of grain motion. Because their formula works with only two fractions simultaneously, they define the critical shear stress for each of the two grain size classes, τri, where i represents either the sand (s) or gravel (g) fraction. The critical shear stress that represents the incipient motion for each of the two fractions is consistent with established values in the limit of pure sand and gravel beds, and shows a sharp change with increasing sand content over the transition from a clast- to matrix-supported bed.
They express their equations in terms of a dimensionless transport parameter, W∗i (with the "∗" indicating nondimensionality and the "i" indicating that it is a function of grain size):

W∗i = (R g qbi) / (Fi u∗³)

qbi is the volumetric bed load transport rate of size class i per unit channel width b. Fi is the proportion of size class i that is present on the bed.
They came up with two equations, depending on the transport stage, φ. For φ < φ′:

W∗i = 0.002 φ^7.5

and for φ ≥ φ′:

W∗i = A (1 − χ/φ^0.5)^4.5

This equation asymptotically reaches a constant value of A as φ becomes large, where φ′, A, and χ are empirically fitted constants.
In order to apply the above formulation, it is necessary to specify the characteristic grain sizes Ds for the sand portion and Dg for the gravel portion of the surface layer, the fractions Fs and Fg of sand and gravel, respectively, in the surface layer, the submerged specific gravity of the sediment R, and the shear velocity associated with skin friction u∗.
Kuhnle et al.
For the case in which a sand fraction is transported by the current over and through an immobile gravel bed, Kuhnle et al. (2013), following the theoretical analysis done by Pellachini (2011), provide a new relationship for the bed load transport of the sand fraction when gravel particles remain at rest. It is worth mentioning that Kuhnle et al. (2013) applied the Wilcock and Kenworthy (2002) formula to their experimental data and found that predicted bed load rates of the sand fraction were about 10 times greater than measured, and that the ratio approached 1 as the sand elevation neared the top of the gravel layer. They also hypothesized that the mismatch between predicted and measured sand bed load rates is due to the fact that the bed shear stress used for the Wilcock and Kenworthy (2002) formula was larger than that available for transport within the gravel bed, because of the sheltering effect of the gravel particles. To overcome this mismatch, following Pellachini (2011), they assumed that the variability of the bed shear stress available for the sand to be transported by the current would be some function of the so-called "Roughness Geometry Function" (RGF), which represents the gravel bed elevations distribution. Therefore, the sand bed load formula incorporates the following quantities:
The subscript s refers to the sand fraction; s represents the ratio ρs/ρ, where ρs is the sand fraction density; A(zs) is the RGF as a function of the sand level zs within the gravel bed; τ is the bed shear stress available for sand transport; and τcs is the critical shear stress for incipient motion of the sand fraction, which was calculated graphically using the updated Shields-type relation of Miller et al. (1977).
Suspended load is carried in the lower to middle parts of the flow, and moves at a large fraction of the mean flow velocity in the stream.
A common characterization of suspended sediment concentration in a flow is given by the Rouse Profile. This characterization works for the situation in which the sediment concentration ca at one particular elevation above the bed, z = a, can be quantified. It is given by the expression:

cs/ca = [ ((h − z)/z) · (a/(h − a)) ]^P

Here, z is the elevation above the bed, cs is the concentration of suspended sediment at that elevation, h is the flow depth, P is the Rouse number, and the factor β relates the eddy viscosity for momentum to the eddy diffusivity for sediment, and is approximately equal to one.

Experimental work has shown that β ranges from 0.93 to 1.10 for sands and silts.
The Rouse profile characterizes sediment concentrations because the Rouse number includes both turbulent mixing and settling under the weight of the particles. Turbulent mixing results in the net motion of particles from regions of high concentrations to low concentrations. Because particles settle downward, for all cases where the particles are not neutrally buoyant or sufficiently light that this settling velocity is negligible, there is a net negative concentration gradient as one goes upward in the flow. The Rouse Profile therefore gives the concentration profile that provides a balance between turbulent mixing (net upwards) of sediment and the downwards settling velocity of each particle.
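Numerically, the Rouse profile is a one-liner. A sketch (Python; the reference height, reference concentration, and Rouse number in the example are hypothetical inputs):

```python
def rouse_profile(z: float, a: float, h: float, p: float, c_a: float) -> float:
    """Rouse profile: c(z) = c_a * [((h - z)/z) * (a/(h - a))]**p,
    where c_a is the known concentration at reference height a."""
    return c_a * (((h - z) / z) * (a / (h - a)))**p

# Concentration at mid-depth of a 2 m flow, relative to a near-bed
# measurement of 100 (arbitrary units) at a = 0.1 m, for P = 1:
print(rouse_profile(z=1.0, a=0.1, h=2.0, p=1.0, c_a=100.0))  # ~5.3
```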
Bed material load
Bed material load comprises the bed load and the portion of the suspended load that is sourced from the bed.
Three common bed material transport relations are the "Ackers-White", "Engelund-Hansen", and "Yang" formulae. The first is for sand to granule-size gravel, and the second and third are for sand, though Yang later expanded his formula to include fine gravel. That all of these formulae cover the sand-size range, and that two of them are exclusively for sand, reflects the fact that the sediment in sand-bed rivers is commonly moved simultaneously as bed and suspended load.
The bed material load formula of Engelund and Hansen is the only one to not include some kind of critical value for the initiation of sediment transport. It reads:

q∗s = (0.05 / cf) τ∗^2.5

where q∗s is the Einstein nondimensionalization for sediment volumetric discharge per unit width, cf is a friction factor, and τ∗ is the Shields stress. The Engelund-Hansen formula is thus one of the few sediment transport formulae in which a threshold "critical shear stress" is absent.
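Using the same Einstein scaling as in the Meyer-Peter and Müller sketch above, the Engelund-Hansen relation can be written as follows (Python; the friction factor and grain size in the example are hypothetical):

```python
import math

G = 9.81
R = 1.65

def engelund_hansen(tau_star: float, c_f: float, d: float) -> float:
    """Engelund & Hansen (1967): q*_s = (0.05 / c_f) * tau*^2.5,
    re-dimensionalized to a volumetric rate per unit width [m^2/s].
    Note there is no threshold term: any tau* > 0 yields transport."""
    q_star = (0.05 / c_f) * tau_star**2.5
    return q_star * d * math.sqrt(R * G * d)

# Medium sand (D = 0.5 mm) at tau* = 0.3 with c_f = 0.005:
print(engelund_hansen(0.3, 0.005, 0.0005))  # ~2.2e-5 m^2/s
```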
Wash load is carried within the water column as part of the flow, and therefore moves with the mean velocity of the main stream. Wash load concentrations are approximately uniform in the water column. This is described by the endmember case in which the Rouse number is equal to 0 (i.e. the settling velocity is far less than the turbulent mixing velocity), which leads to a prediction of a perfectly uniform vertical concentration profile of material.
Some authors have attempted formulations for the total sediment load carried in water. These formulas are designed largely for sand, as (depending on flow conditions) sand often can be carried as both bed load and suspended load in the same stream or shoreface.
- Anderson, R (1990). "Eolian ripples as examples of self-organization in geomorphological systems". Earth-Science Reviews 29: 77. doi:10.1016/0012-8252(90)90029-U.
- Kocurek, Gary; Ewing, Ryan C. (2005). "Aeolian dune field self-organization – implications for the formation of simple versus complex dune-field patterns". Geomorphology 72: 94. Bibcode:2005Geomo..72...94K. doi:10.1016/j.geomorph.2005.05.005.
- Goudie, A; Middleton, N.J. (2001). "Saharan dust storms: nature and consequences". Earth-Science Reviews 56: 179. Bibcode:2001ESRv...56..179G. doi:10.1016/S0012-8252(01)00067-8.
- Ashton, Andrew; Murray, A. Brad; Arnault, Olivier (2001). "Formation of coastline features by large-scale instabilities induced by high-angle waves". Nature 414 (6861): 296–300. Bibcode:2001Natur.414..296A. doi:10.1038/35104541. PMID 11713526.
- Roering, Joshua J.; Kirchner, James W.; Dietrich, William E. (1999). "Evidence for nonlinear, diffusive sediment transport on hillslopes and implications for landscape morphology". Water Resources Research 35 (3): 853. Bibcode:1999WRR....35..853R. doi:10.1029/1998WR900090.
- Grant, J., Walker, T.R., Hill P.S., Lintern, D.G. (2013) BEAST-A portable device for quantification of erosion in intact sediment cores. Methods in Oceanography. DOI: 10.1016/j.mio.2013.03.001
- Shields, A. (1936) Anwendung der Ähnlichkeitsmechanik und der Turbulenzforschung auf die Geschiebebewegung; In Mitteilungen der Preussischen Versuchsanstalt für Wasserbau und Schiffbau, Heft 26 (Online; PDF; 3,8 MB)
- Sharmeen, Saniya; Willgoose, Garry R. (2006). "The interaction between armouring and particle weathering for eroding landscapes". Earth Surface Processes and Landforms 31: 1195–1210.
- Walker, T.R., Grant, J. (2009) Quantifying erosion rates and stability of bottom sediments at mussel aquaculture sites in Prince Edward Island, Canada. Journal of Marine Systems. 75: 46-55. doi:10.1016/j.jmarsys.2008.07.009
- Dey S. (1999) Sediment threshold. Applied Mathematical Modelling, Elsevier, Vol. 23, No. 5, 399-417.
- Hubert Chanson (2004). The Hydraulics of Open Channel Flow: An Introduction. Butterworth-Heinemann, 2nd edition, Oxford, UK, 630 pages. ISBN 978-0-7506-5978-9.
- Whipple, Kelin (2004). "Hydraulic Roughness" (PDF). 12.163: Surface processes and landscape evolution. MIT OCW. Retrieved 2009-03-27.
- Whipple, Kelin (September 2004). "IV. Essentials of Sediment Transport" (PDF). 12.163/12.463 Surface Processes and Landscape Evolution: Course Notes. MIT OpenCourseWare. Retrieved 2009-10-11.
- Moore, Andrew. "Lecture 20—Some Loose Ends" (PDF). Lecture Notes: Fluvial Sediment Transport. Kent State. Retrieved 23 December 2009.
- Dietrich, W. E. (1982). "Settling Velocity of Natural Particles" (PDF). Water Resources Research 18 (6): 1615–1626. Bibcode:1982WRR....18.1615D. doi:10.1029/WR018i006p01615.
- Ferguson, R. I.; Church, M. (2004). "A Simple Universal Equation for Grain Settling Velocity". Journal of Sedimentary Research 74 (6): 933–937. doi:10.1306/051204740933.
- The long profile – changing processes: types of erosion, transportation and deposition, types of load; the Hjulstrom curve. coolgeography.co.uk. Last accessed 26 Dec 2011.
- Special Topics: An Introduction to Fluid Motions, Sediment Transport, and Current-generated Sedimentary Structures; As taught in: Fall 2006. Massachusetts Institute of Technology. 2006. Last accessed 26 Dec 2011.
- Meyer-Peter, E; Müller, R. (1948). Formulas for bed-load transport. Proceedings of the 2nd Meeting of the International Association for Hydraulic Structures Research. pp. 39–64.
- Fernandez-Luque, R; van Beek, R (1976). "Erosion and transport of bedload sediment". Jour. Hyd. Research 14 (2).
- Cheng, Nian-Sheng (2002). "Exponential Formula for Bedload Transport". Journal of Hydraulic Engineering 128 (10): 942. doi:10.1061/(ASCE)0733-9429(2002)128:10(942).
- Wilson, K. C. (1966). "Bed-load transport at high shear stress". J. Hydraul. Div. (ASCE) 92 (6): 49–59.
- Wiberg, Patricia L.; Dungan Smith, J. (1989). "Model for Calculating Bed Load Transport of Sediment". Journal of Hydraulic Engineering 115: 101. doi:10.1061/(ASCE)0733-9429(1989)115:1(101).
- Wilcock, Peter R.; Crowe, Joanna C. (2003). "Surface-based Transport Model for Mixed-Size Sediment". Journal of Hydraulic Engineering 129 (2): 120. doi:10.1061/(ASCE)0733-9429(2003)129:2(120).
- Parker, G.; Klingeman, P. C.; McLean, D. G. (1982). "Bedload and Size Distribution in Paved Gravel-Bed Streams". Journal of the Hydraulics Division (ASCE) 108 (4): 544–571.
- Wilcock, P. R. (1998). "Two-fraction model of initial sediment motion in gravel-bed rivers". Science (280): 410–412. Bibcode:1998Sci...280..410W. doi:10.1126/science.280.5362.410.
- Wilcock, Peter R.; Kenworthy, T. (2002). "A two-fraction model for the transport of sand/gravel mixtures". Water Resour. Res. 38 (10): 1194. Bibcode:2002WRR....38.1194W. doi:10.1029/2001WR000684.
- Kuhnle, R. A.; Wren, D. G.; Langendoen, E. J.; Rigby, J. R. (2013). "Sand Transport over an Immobile Gravel Substrate". Journal of Hydraulic Engineering 139 (2). doi:10.1061/(ASCE)HY.1943-7900.0000615.
- Pellachini, Corrado (2011). Modelling fine sediment transport over an immobile gravel bed. Trento: Unitn-eprints.
- Nikora, V; Goring, D; McEwan, I; Griffiths, G (2001). "Spatially averaged open-channel flow over rough bed". J. Hydraul. Eng. 127 (2). doi:10.1061/(ASCE)0733-9429(2001)127:2(123).
- Miller, M.C.; McCave, I.N.; Komar, P.D. (1977). "Threshold of sediment motion under unidirectional currents.". Sedimentology 24 (4): 507–527. Bibcode:1977Sedim..24..507M. doi:10.1111/j.1365-3091.1977.tb00136.x.
- Harris, Courtney K. (March 18, 2003). "Lecture 9: Suspended Sediment Transport II" (PDF). Sediment transport processes in coastal environments. Virginia Institute of Marine Science. Retrieved 23 December 2009.
- Moore, Andrew. "Lecture 21—Suspended Sediment Transport" (PDF). Lecture Notes: Fluvial Sediment Transport. Kent State. Retrieved 25 December 2009.
- Ackers, P.; White, W.R. (1973). "Sediment Transport: New Approach and Analysis". Journal of the Hydraulics Division (ASCE) 99 (11): 2041–2060.
- Ariffin, J.; A.A. Ghani; N.A. Zakaira; A.H. Yahya (14–16 October 2002). "Evaluation of equations on total bed material load" (PDF). International Conference on Urban Hydrology for the 21st Century (Kuala Lumpur).
- Yang, C (1979). "Unit stream power equations for total load". Journal of Hydrology 40: 123. Bibcode:1979JHyd...40..123Y. doi:10.1016/0022-1694(79)90092-1.
- Bailard, James A. (1981). "An Energetics Total Load Sediment Transport Model For a Plane Sloping Beach". Journal of Geophysical Research 86: 10938. Bibcode:1981JGR....8610938B. doi:10.1029/JC086iC11p10938.
- Liu, Z. (2001), Sediment Transport.
- Moore, A. Fluvial sediment transport lecture notes, Kent State.
- Wilcock, P. Sediment Transport Seminar, January 26–28, 2004, University of California at Berkeley
- Southard, J. B. (2007), Sediment Transport and Sedimentary Structures
Arbovirus testing is used to determine whether a person with symptoms and a recent history of potential exposure to a specific arbovirus has been infected. Testing can help diagnose the cause of meningitis or encephalitis, distinguish an arbovirus infection from other conditions causing similar symptoms, such as bacterial meningitis, and can help guide treatment. Testing may be performed on blood to detect antibodies to the viruses and/or may be performed on a sample of cerebrospinal fluid (CSF) to determine if an infection is present in the central nervous system.
Typically, the individual test ordered is specific for a particular arbovirus, such as West Nile Virus (WNV) or dengue fever, depending on the person's symptoms and likely exposure. Sometimes, a panel of tests may be used to determine which arbovirus is causing the infection.
Most often, testing involves the measurement of specific arbovirus antibodies produced in response to an infection. In certain cases, testing may involve direct detection by testing for the genetic material (nucleic acid) of the virus.
Antibody Tests Antibody testing is primarily used to help diagnose a current or recent infection. There are two classes of arbovirus antibodies produced in response to infection: IgM and IgG. IgM antibodies are produced first and are present within a week or two after infection. Levels in the blood rise for a few weeks, then taper off. After a few months, IgM antibodies fall below detectable levels. IgG antibodies are produced after IgM antibodies. Typically, the level rises with an acute infection, stabilizes, and then persists long-term.
IgM antibody testing is the primary test performed on the blood or cerebrospinal fluid of symptomatic people. IgG tests may be ordered along with IgM testing to help diagnose a recent or previous arbovirus infection. Sometimes testing is done by collecting two samples 2 to 4 weeks apart (acute and convalescent samples) and measuring the IgG level (titer). This may help determine whether antibodies are from a recent or past infection.
Antibody tests may cross-react with viruses in the same family, so a second test that employs a different method, such as nucleic acid amplification test (NAAT) or a neutralization assay, is used to confirm positive results. These confirmatory methods are specialized tests that may be performed at a public health laboratory or the CDC. They must be done before a diagnosis is established and officially reported to the CDC.
Nucleic Acid Amplification Test A nucleic acid amplification test (NAAT) amplifies and measures the arbovirus's genetic material to detect the presence of the virus. This type of testing is done only at a few specialized laboratories. It can detect a current infection with the virus, often before antibodies to the virus are detectable, but there must be a certain amount of virus present in the sample in order to detect it. For most arboviruses, virus levels in humans are usually low and do not persist for very long.
Nucleic acid testing can also be used to screen for an arbovirus such as WNV in donated units of blood, tissue, or organs. It may be used to test the tissues of a person who has died (post mortem) to determine whether a specific arbovirus may have caused or contributed to their death.
Testing can also be performed on suspected host animals and mosquito pools to detect the presence and spread of an arbovirus in the community and region. This information can be used to help investigate outbreaks, identify and monitor infection sources, and to guide efforts to prevent the spread of the infection.
Antibody tests are primarily ordered when a person has signs and symptoms suggesting a current arbovirus infection, especially if the person lives in or has recently traveled to an area where a specific arbovirus is endemic.
In the U.S., an arbovirus infection may be suspected when symptoms arise during mid to late summer. In warmer areas, infections may occur year-round.
Antibody tests may be ordered within the first week or two of the onset of symptoms to detect an acute infection. An additional blood sample may be collected 2 to 4 weeks later to determine if the antibody level is rising. When an infection of the central nervous system is suspected, antibody testing may be performed on CSF as well as blood.
Nucleic acid amplification testing (NAAT) is not ordered as frequently as antibody testing but may sometimes be ordered when a person has symptoms of an arbovirus infection. NAAT testing is also now routinely used in the U.S. to screen units of donated blood for WNV and may be performed on the blood of tissue and organ donors prior to transplantation.
Results of arbovirus testing require careful interpretation, taking into consideration the individual's signs and symptoms as well as risk of exposure.
Antibody Tests Antibody tests may be reported as positive or negative, or may be reported as less than or greater than a certain titer. For example, if the established threshold is a titer of 1:10, then a result less than this is considered negative while a titer greater than this is considered positive.
If IgM or IgG antibody is detected in the CSF, it suggests that an arbovirus infection is present in the central nervous system. If a CSF antibody test is negative, then it suggests that there is no central nervous system involvement or the level of antibody is too low to detect.
If IgM and IgG arbovirus antibodies are detected in an initial blood sample, then it is likely that the person became infected with the arbovirus within the last few weeks. If the IgG is positive but the IgM is low or negative, then it is likely that the person had an arbovirus infection sometime in the past. If the arbovirus IgG antibody titer increases four-fold between an initial sample and one taken 2 to 4 weeks later, then it is likely that a person has had a recent infection.
If the tests are negative for IgM and/or IgG antibodies, the person may still have an arbovirus infection – it may just be that it is too soon after initial exposure to the virus and there has not been enough time to produce a detectable level of antibody. A negative result may also suggest that symptoms may be due to a different cause, such as bacterial meningitis.
The following table summarizes results that may be seen with antibody testing:

|IgM Result|IgG Result|Possible Interpretation|
|---|---|---|
|Positive|Low or negative or not tested|Recent infection|
|Positive or low|Four-fold increase in samples taken 2-4 weeks apart|Recent infection|
|Low or negative|Positive|Past infection|
|Negative|Negative|Too soon after initial exposure for antibodies to develop, or symptoms due to another cause|
A positive result on an initial test for IgM arbovirus antibody in blood or CSF is considered a presumptive positive since antibodies to viruses in the same family may cross-react. It suggests a diagnosis, but it is not definitive. A positive result on a second test using a different method (NAAT or neutralization assay) confirms the diagnosis.
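The interpretation rules above can be summarized in a small decision function. This is a didactic sketch only (Python; not clinical software), encoding the combinations described in the table and the preceding paragraphs:

```python
def interpret_serology(igm_positive: bool, igg_positive: bool,
                       igg_fourfold_rise: bool = False) -> str:
    """Didactic sketch of the interpretation rules described above.
    Confirmatory testing (NAAT or a neutralization assay) and clinical
    context are still required for any diagnosis."""
    if igg_fourfold_rise:
        return "likely recent infection (rising IgG titer)"
    if igm_positive and igg_positive:
        return "likely infection within the last few weeks"
    if igm_positive:
        return "presumptive recent infection; confirm with a second method"
    if igg_positive:
        return "likely past infection"
    return "no antibodies detected: too soon after exposure, or another cause"

print(interpret_serology(igm_positive=True, igg_positive=False))
```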
Nucleic Acid Amplification Testing (NAAT) If a NAAT blood, CSF, or tissue test is positive for an arbovirus, then it is likely that that specific virus is present in the sample tested. A positive NAAT for arbovirus in an animal or mosquito pool sample indicates that the virus is present in the geographic location where the sample was collected.
A NAAT may be negative for an arbovirus if there is no virus present in the sample tested or if the virus is present in very low (undetectable) numbers. A negative test cannot be used to definitively rule out the presence of an arbovirus.
The presence of arbovirus antibodies may indicate an infection but cannot be used to predict the severity of an individual's symptoms or their prognosis.
Other tests, such as antigen tests for dengue fever, and viral cultures may be used in some instances. NAAT and viral cultures may be used in research settings and by the medical community at a national and international level to identify and study the strains of arboviruses causing infections. Different strains have been isolated and associated with regional epidemics.
Structural Biochemistry/Nucleic Acid/DNA
What is DNA? DNA is a long, linear polymer built from deoxyribose sugars and their covalently bonded bases; such polymers are known as nucleic acids. One of the major functions of DNA is the storage of genetic information. In DNA, a sequence of three bases, called a codon, encodes a single amino acid. The amino acid is incorporated into protein during the process of translation. These nucleic acid polymers encode all of the materials an organism needs to live, in the form of genes. Genes are small blocks of DNA that tell the cell which proteins it should create. The genes that a given cell receives depend entirely on the parent cells. Genes are passed on from generation to generation as a way of ensuring an organism's genetic survival.
DNA stands for deoxyribonucleic acid. The prefix "deoxy" distinguishes DNA from its close relative RNA (ribonucleic acid). The prefix indicates that, unlike ribose, deoxyribose does not contain a hydroxyl group at the 2' carbon, carrying a single hydrogen atom there instead. The absence of this hydroxyl group is fundamental in determining the way in which DNA is able to condense itself within the nucleus of a cell.
DNA is a nucleic acid which is capable of duplicating itself via the enzyme known as DNA polymerase. Each of the four bases in DNA, adenine (A), cytosine (C), guanine (G), and thymine (T), is bonded covalently to a deoxyribose sugar. The four nucleotide units in DNA are called deoxyadenylate, deoxyguanylate, deoxycytidylate, and thymidylate. A nucleotide includes a nucleoside, a nitrogenous base bonded to a deoxyribose or ribose group. The four nucleosides in DNA are deoxyadenosine, deoxyguanosine, deoxycytidine, and thymidine. By joining one or more phosphate groups to a nucleoside through ester linkages, a nucleotide is formed.
The deoxyribose sugars form the structural backbone for DNA via a phosphodiester bond between the 3' carbon of one nucleotide and the 5' carbon of the next. When DNA is not self-replicating it exists in the cell as a double-stranded helical molecule with the strands lined up antiparallel to each other. That is to say, if the orientation of one strand is 3' to 5', the other strand is oriented 5' to 3'. The bases of each strand bind very specifically: A binds with T and C binds with G; no other combination exists, at least in DNA. The bases are bound to one another internally via hydrogen bonds, with the phosphodiester backbone oriented to face outward. It is here that the missing 2' hydroxyl group plays an important role in DNA: it is the absence of this group that allows DNA to form its conventional double helix structure. RNA, which does have a hydroxyl group at the 2' carbon, is unable to attain this same helical structure. The modern double helix structure of DNA was first proposed by Watson and Crick, and the functions of DNA were demonstrated in a series of experiments which will be discussed in the next few sections.
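Because pairing is strictly A-T and C-G on antiparallel strands, the partner of any strand is its reverse complement, as the following minimal sketch illustrates (Python; the example sequence is arbitrary):

```python
# Complementary base pairing (A-T, C-G) on antiparallel strands means the
# partner of a 5'->3' strand is its reverse complement.
COMPLEMENT = {"A": "T", "T": "A", "C": "G", "G": "C"}

def reverse_complement(strand: str) -> str:
    """Return the 5'->3' sequence of the strand paired with `strand`."""
    return "".join(COMPLEMENT[base] for base in reversed(strand))

print(reverse_complement("ATGC"))  # GCAT
```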
Why DNA? It is significant to note the reasons why DNA is the primary method through which all cells pass along genetic information. That is to say, why has evolution favored a DNA world over an RNA world, given that the two molecules are so similar structurally? These reasons involve chemical stability, the energy needed to form and break chemical bonds, and the availability of enzymes to perform these tasks. The primary reason involves the relative stability of the two molecules. DNA is more chemically stable than RNA because it lacks the hydroxyl group on the 2' carbon. In RNA there are two possible OH groups between which the molecule can form a phosphodiester bond, which means that RNA is not forced into the same rigid structure as its deoxy counterpart. Additionally, the deoxyribose sugar in DNA is much less reactive than the ribose sugar in RNA. Simply put, C-H groups are significantly less reactive than C-OH (hydroxyl) groups. This difference also explains why RNA is not very stable in alkaline conditions while DNA is: in base, the deprotonated 2'-OH group attacks the adjacent phosphodiester bond and cleaves the RNA backbone, an attack that DNA cannot undergo because it lacks that group. Furthermore, double-stranded DNA has relatively small grooves where damaging enzymes can't attach, making it more difficult for them to 'attack' the DNA. Double-stranded RNA, on the other hand, has much larger grooves, and therefore it is more subject to being broken down by enzymes. The connection between the strands of double-stranded DNA is also tighter than in double-stranded RNA; in other words, it's much easier to unzip double-stranded RNA than it is to unzip double-stranded DNA. Overall, the breakdown and re-formation of RNA can be carried out faster and requires less energy than the breakdown and re-formation of DNA. It is essential to the organism's survival and well-being that its genetic material is encoded in something that is more stable and resistant to changes. In addition, the sequence of DNA and its physical conformation seem to play a part in DNA's selection as well. Another point that helps elucidate DNA's prevalence as the primary storage of genetic information is the body's handling of the enzymes that break down DNA: the body actively destroys foreign nucleases, which are enzymes that cleave DNA. This is only one of the many ways DNA is protected against damage. The body can actually recognize foreign DNA and destroy it, while leaving its own DNA intact.
Hyperchromic Effect Another unique feature of double-stranded DNA is the hyperchromic effect: denatured, non-helical DNA absorbs more UV radiation than the double helix. Hydrogen bonding between complementary strands and base stacking in the helical conformation stabilize the aromatic rings so that they absorb less UV radiation, decreasing absorbance by roughly 40% relative to the denatured form. As the temperature is increased these hydrogen bonds break and the helical structure begins to unwind. In this unwound form the aromatic rings are free to absorb much more UV radiation.
Properties of DNA
1. Consists of 2 strands (antiparallel and complementary): DNA has two polynucleotide chains that twist around a helical axis in opposite directions.
2. It is made up of deoxyribose sugar, a phosphate backbone on the exterior, and nucleic acid bases in the interior.
3. Bases are perpendicular to the helix axis and separated by 3.4 Å.
4. Strands are held together by hydrogen bonds and other intermolecular forces that form a double helix. The base pairing involves 2 hydrogen bonds for A-T and 3 hydrogen bonds for C-G.
5. Backbone consists of alternating sugars and phosphates, where phosphodiester linkages form the covalent backbone of the DNA. Each strand runs from the 5' phosphate group to the 3' hydroxyl group.
6. The structure repeats every 10 bases.
7. Weak forces, namely the hydrophobic effect and van der Waals interactions, stabilize DNA.
8. The DNA chain is 20 Å (2 nm) wide.
9. One nucleotide unit is 3.3 Å (0.33 nm) long.
DNA is made of two polynucleotide chains (strands) which run in opposite directions around a common axis. As a result, DNA has a double helical structure. Each polynucleotide chain of DNA consists of monomer units. A monomer unit consists of three main components: a sugar, a phosphate, and a nitrogenous base. The sugar used in the DNA monomer unit is deoxyribose (it lacks an oxygen atom on the second carbon in the furanose ring). There are also four possible nitrogen-containing bases which can be used in the monomer unit of DNA. Those bases are adenine (A), guanine (G), cytosine (C), and thymine (T). Adenine and guanine are purine derivatives, while cytosine and thymine are pyrimidine derivatives. The polymeric chain forms as a result of joining nucleosides (the sugar covalently bonded to the nitrogen-containing base) through phosphodiester linkages. The polymeric chain is a single strand of the DNA molecule. Two strands run in opposite directions to form the double helix. The forces that keep those strands together are hydrogen bonds, hydrophobic interactions, van der Waals forces, and charge-charge interactions. The H-bonds form between base pairs of the antiparallel strands. A base in the first strand forms an H-bond only with a specific base in the second strand. Those two bases form a base pair (the H-bond interaction that keeps strands together and forms the double helical structure). The base pairs are: adenine-thymine (A-T) and cytosine-guanine (C-G). Such interactions indicate that the nitrogen-containing bases are located inside the DNA double helical structure, while the sugars and phosphates are located outside of it. The hydrophobic bases are inside the double helix of DNA.
The bases, located inside the double helix, are stacked one on top of another. Stacked bases interact with each other through van der Waals forces. Even though van der Waals forces are weak, the summation of those forces can be substantial. The distance between two neighboring bases perpendicular to the main axis is 3.4 Å. The DNA structure is repetitive: there are ten bases per turn, so every base has a 36° angle of rotation. The diameter of the double helix is approximately 20 Å. The hydrophobic effect stabilizes the double helix. Structural variation in DNA is due to different deoxyribose conformations, rotation about the contiguous bonds in the phosphodeoxyribose backbone, and free rotation about the C-1'-N (glycosyl) bond.
The technique of southern blotting is often used to uncover the DNA sequence of a sample. The technique is named after Edwin Southern.
DNA Manipulation Techniques
Exploring genes and genomes depends on the technical tools that are available. The five important DNA manipulation techniques are:
1.Restriction Endonucleases - also known as restriction enzymes
Restriction enzymes cleave DNA at specific recognition sequences, splitting it into specific fragments. Having the DNA split into different pieces allows the manipulation of individual DNA segments.
2. Blotting Technique
To separate and characterize DNA, the Southern blotting technique is used. This technique is analogous to other blotting methods such as the Northern blot (for RNA) and the Western blot (for protein), except that Southern blotting is used for DNA. The technique identifies a specific sequence of DNA after electrophoresis through an agarose gel. The DNA is separated by size, with the large fragments at the top and the small fragments at the bottom. Next, the DNA is transferred onto a nitrocellulose sheet. Then a 32P-labeled DNA probe that is complementary to the sequence of interest is added to hybridize with the fragments. Finally, autoradiography film is used to view the fragment containing the sequence.
3. DNA Sequencing
By using DNA sequencing techniques, the precise nucleotide sequence of a DNA molecule can be determined. The key to DNA sequencing is the generation of DNA fragments whose length depends on the last base of the sequence. Even though there are several alternative methods, they all perform the same basic procedure on four reaction mixtures.
A. Chain termination DNA Sequencing
A primer is always needed. To produce fragments, a 2',3'-dideoxy analog of one dNTP is added to each of the four mixtures; chain elongation stops wherever that dideoxy analog is incorporated. The four dNTPs that can be used are dATP, dTTP, dCTP, and dGTP. In the end, the new DNA strands are separated by electrophoresis.
B. Fluorescence Detection of Bases
A fluorescent tag that emits at a distinct wavelength is attached to each of the four chain-terminating dideoxynucleotides. It is an effective method because no radioactive reagents are used and long sequences of bases can be determined. The fragments are separated by passing the mixture through a gel under high voltage. Then the fragments are detected by their fluorescence, and the base sequence is read from the sequence of colors.
C. Top-down (Shotgun) Method of Genome Sequencing
The top-down method and the shotgun method are similar; the main difference is that the top-down method requires a detailed map of the clones, while the shotgun method randomly sequences large clones and matches them computationally.
D. Microarrays(Green chips)
Using microarrays is useful for studying the expression of a large number of genes at once. The microarray is created using either oligonucleotides or cDNA. Based on the fluorescence intensity at each spot, red or green marks appear, indicating whether a gene's expression has been induced or repressed relative to the reference sample.
4. DNA Synthesis
To synthesize DNA, a solid-phase method is used. The solid-phase synthesis is carried out by the phosphite triester method, in which only one nucleotide is added in each cycle. The first step is the attachment of the first nucleotide to the solid support. Then another nucleotide, a deoxyribonucleoside 3'-phosphoramidite protected with DMT and βCE groups, is activated and allowed to react with the growing chain; because any protected nucleotide can be added at each cycle, any DNA sequence can be synthesized. The molecule is then oxidized to convert the phosphite linkage into a phosphate group. Finally, the DMT group is removed by the addition of dichloroacetic acid. Throughout the process the desired product remains insoluble, attached to the support, and it is released at the end.
5. Polymerase Chain Reaction (PCR)
PCR is a technique that allows the DNA sequence lying between two known flanking sequences to be amplified. If those flanking sequences are known, millions of copies of the target sequence can be obtained using this technique. To carry out PCR, a DNA template, precursor nucleotides (dNTPs), and two complementary primers are needed. What makes PCR distinctive is that the temperature is cycled through three different stages, and the cycle is typically repeated about 25 times; a sketch of the resulting amplification follows the list below. The three stages are:
1. Denaturing - the parent DNA molecule is separated from a double strand into two single strands by heating the solution to 94°C.
2. Annealing - after the solution cools, two synthetic oligonucleotide primers anneal, one to the 3' end of the target strand and one to the 3' end of the complementary strand. This step is done at a temperature between 50°C and 60°C.
3. Polymerization - addition of a thermostable DNA polymerase to catalyze 5' to 3' DNA synthesis at 72°C.
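The arithmetic behind the amplification is simple: if every cycle doubles the number of double-stranded copies, n cycles multiply the starting amount by 2^n. A minimal sketch (idealized, ignoring reagent depletion and imperfect cycle efficiency):

```python
# Idealized PCR: each denature/anneal/polymerize cycle doubles the
# number of double-stranded copies of the target sequence.
def pcr_copies(initial_templates: int, cycles: int) -> int:
    copies = initial_templates
    for _ in range(cycles):
        copies *= 2  # one full temperature cycle
    return copies

print(pcr_copies(1, 25))  # 33554432 -- tens of millions of copies from one template
```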
Structural variation in DNA occurs due to the different deoxyribose conformations, free rotation about the C-1'-N glycosyl bond, and rotation about the contiguous bonds in the phosphodeoxyribose backbone.
There are three secondary structures of DNA: the A, B, and Z forms. A form: 1. Right-handed 2. Glycosyl bond conformation is ANTI 3. Has 11 base pairs per helical turn 4. Diameter is about 26 angstroms 5. Sugar pucker conformation is C-3' endo.
B form: 1. Like the A form, the B form is right-handed 2. Glycosyl bond conformation is ANTI 3. Has 10.5 base pairs per helical turn 4. Diameter is about 20 angstroms 5. Sugar pucker conformation is C-2' endo.
Z form: 1. Unlike the A and B forms, the Z form is left-handed 2. Glycosyl bond conformation is ANTI for pyrimidines and SYN for purines 3. Has 12 base pairs per helical turn 4. Diameter is about 18 angstroms 5. Sugar pucker conformation is C-2' endo for pyrimidines and C-3' endo for purines.
A DNA library is a collection of cloned DNA fragments in a cloning vector that can be searched for a DNA of interest. If the goal is to isolate particular gene sequences, two types of library are useful.
Genomic DNA libraries
A genomic DNA library is made from the genomic DNA of an organism. For example, a mouse genomic library could be made by digesting mouse nuclear DNA with a restriction nuclease to produce a large number of different DNA fragments but all with identical cohesive ends. The DNA fragments would then be ligated into linearized plasmid vector molecules or into a suitable virus vector. This library would contain all of the nuclear DNA sequences of the mouse and could be searched for any particular mouse gene of interest. Each clone in the library is called a genomic DNA clone. Not every genomic DNA clone would contain a complete gene since in many cases the restriction enzyme will have cut at least once within the gene. Thus some clones will contain only a part of a gene.
A cDNA library is a library of DNA copies of mRNAs. Because mature mRNA has had its introns spliced out, a cDNA clone contains only exon sequences (the final, expressed version of a gene), which is why a cDNA library is made when the goal is to isolate genes as they are actually expressed.
A cDNA library is used by screening colonies. If you are looking for a gene, you can take the collection of plasmids, transform bacteria with them, and screen the resulting colonies with a probe. You can also use Southern hybridization: an oligonucleotide complementary to the gene you are looking for will eventually tell you which colonies of bacteria carry the plasmid DNA that corresponds to the mRNA of interest.
How to make a cDNA library:
1. Isolate mRNA from the cell.
2. Use reverse transcriptase and dNTPs so that a DNA copy can be created from the original mRNA.
3. Because RNA is easier to degrade than DNA, put the mixture in an alkali solution to degrade the mRNA.
4. Use DNA polymerase to complete the template.
Ultimately, you end up with double-stranded DNA, one strand of which has the same sequence as the mRNA. After doing this for all of the mRNAs, you can clone them into plasmids. The collection of plasmids will then represent all of the mRNAs, but in the form of DNA.
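The base-pairing logic of these steps can be sketched in a few lines of Python (a toy model: strand directionality and the real enzyme chemistry are ignored, and the sequence is an arbitrary example):

```python
# Toy model of cDNA synthesis: reverse transcriptase builds a DNA strand
# complementary to the mRNA (step 2); DNA polymerase then builds the
# second strand (step 4), which matches the mRNA with T in place of U.
RNA_TO_DNA = {"A": "T", "U": "A", "G": "C", "C": "G"}  # RNA base -> cDNA base
DNA_TO_DNA = {"A": "T", "T": "A", "G": "C", "C": "G"}  # DNA base -> new DNA base

def first_strand(mrna: str) -> str:
    return "".join(RNA_TO_DNA[base] for base in mrna)

def second_strand(cdna: str) -> str:
    return "".join(DNA_TO_DNA[base] for base in cdna)

mrna = "AUGGCCUAA"            # arbitrary example message
cdna = first_strand(mrna)
print(cdna)                   # TACCGGATT
print(second_strand(cdna))    # ATGGCCTAA -- the mRNA sequence, with T for U
```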
Flow of Genetic Information
- Genetic information storage: genome
- Replication: DNA --> DNA
- Transcription: DNA --> RNA
- Translation: RNA --> Proteins
| https://en.wikibooks.org/wiki/Structural_Biochemistry/Nucleic_Acid/DNA |
4.03125 | Difference Between Analog and Digital Signals
Analog vs Digital Signals
There are two types of signals that carry information – analog and digital signals. The difference between analog and digital signals is that analog is a continuous electrical signal, whereas digital is a non-continuous electrical signal.
Analog signals vary continuously in time, and the variations follow those of the non-electric signal they represent. Compared with analog signals, digital signals change in individual steps and consist of pulses or digits. An analog signal is a model of the real quantity, as when variations in voice intensity cause corresponding variations in electric current. Digital signals have discrete levels, and the specified value of a pulse remains constant until the change to the next digit. There are two amplitude levels, called nodes, which correspond to 1 or 0, true or false, and high or low.
Digital signals, similar to Morse code, are sent to a computer, which interprets them into words. A digital signal, a 0 or 1, is sent through the phone line. For example, when you type the letter A into your computer, it converts it into 01000001. This 01000001 goes to the other computer, which interprets it as A. A series of eight 0's and 1's is called a byte, whereas each 0 or 1 is called a bit.
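This conversion is easy to reproduce (a small Python illustration of the letter-A example above):

```python
# The letter 'A' has character code 65, which is 01000001 as an 8-bit byte.
letter = "A"
byte = format(ord(letter), "08b")  # encode: character -> eight bits
print(byte)                        # 01000001
print(chr(int(byte, 2)))           # decode: the receiving computer recovers 'A'
```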
The difference between digital and analog signals can also be understood by observing how different waveforms have been used in practice. In the 1800s, analog waveforms were used over copper wires to relay or transmit conversations. Because analog signals tend to pick up distorting electromagnetic waves, or noise, which degrades signal quality, they soon became troublesome and difficult to maintain. The change from analog to digital followed, because digital signals were easier to transmit and more reliable than analog signals.
A signal is a transmission of data, something we deal with constantly in daily life. From telephones to cellular devices, and from music to computers, signals are very important. With the advent of modern technology, telephones, computers, and similar devices have become a necessity, and analog signal transmission became not only expensive but troublesome. Digital signals soon replaced analog signals because they are discrete and uniform, and not severely altered by noise or distortion. Almost all electronic devices use digital signals, because they remain accurate in shape and amplitude. Digital signals provide better continuous delivery, and are preferred over analog signals.
1. Analog signals can be converted into digital signals by using a modem (a toy sketch of this conversion follows this list).
2. Digital signals use binary values to send and receive data between computers.
3. Digital signals are easier and more reliable to transmit with fewer errors.
4. Analog signals are replicas of sound waves that can be distorted by noise, which reduces the quality of the transmission.
5. Digital data has a faster rate of transmission when compared to analog, and gives better productivity.
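To make point 1 concrete, here is a toy sketch of analog-to-digital conversion: a continuous signal is sampled at regular intervals and each sample is rounded to one of a fixed set of levels. The sine wave and the 3-bit resolution are arbitrary choices for illustration, not what any particular modem does:

```python
import math

# Toy analog-to-digital conversion: sample a continuous wave, then
# quantize each sample to one of 8 discrete levels (3 bits per sample).
def quantize(value: float, bits: int = 3) -> int:
    levels = 2 ** bits
    # Map the analog range [-1.0, 1.0] onto the integers 0 .. levels - 1.
    return round((value + 1.0) / 2.0 * (levels - 1))

analog = [math.sin(2 * math.pi * t / 16) for t in range(16)]  # "continuous" signal
digital = [quantize(sample) for sample in analog]             # discrete steps
print(digital)
```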
| http://www.differencebetween.net/technology/difference-between-analog-and-digital-signals/ |
4.03125 | Improve handwriting with dot-to-dot worksheets
Dot-to-dot and counting
Working on a dot-to-dot teaches children number order and helps with counting. Little ones may need a little help, but as they get older, completing a dot-to-dot all by themselves is a great confidence booster.
Dot-to-dot games are wonderful for improving hand-eye co-ordination. There's a lot of concentration that goes into completing a dot-to-dot! Visual motor control is developed through dot-to-dot work.
Doing dot-to-dot activities really helps improve handwriting skills, and they are a valuable pre-writing teaching tool. Children learn how to create shapes, control their pencil, and judge how much pressure to apply to the paper.
Fine motor skills
Working on a dot-to-dot is a great way to strengthen hand and finger muscles in preparation for writing. Early childhood is the optimal time to help develop the vital muscles we'll be using throughout our lives. Children can concentrate on gripping their pencil and strengthen their hands while working on a dot-to-dot.
Concentration and focus are built through working on dot-to-dot drawings. Completing a dot-to-dot drawing shows the benefits of hard work - and in a fun way.
| http://www.kidspot.com.au/schoolzone/Maths-&-science-Learning-games-Improve-handwriting-with-dot-to-dot-worksheets+4200+316+article.htm |
4.0625 | Japanese occupation of Burma
The Japanese occupation of Burma refers to the period between 1942 and 1945 during World War II, when Burma was occupied by the Empire of Japan. The Japanese had assisted the formation of the Burma Independence Army, and trained the Thirty Comrades, who were the founders of the modern Armed Forces (Tatmadaw). The Burmese hoped to gain the support of the Japanese in expelling the British, so that Burma could become independent.
In 1942, during World War II, Japan invaded Burma and nominally declared Burma independent as the State of Burma on 1 August 1943. A puppet government led by Ba Maw was installed. However, it soon became apparent that the Japanese had no intention of giving independence to Burma.
Aung San, father of the opposition leader Aung San Suu Kyi, and other nationalist leaders formed the Anti-Fascist Organisation in August 1944, which asked the United Kingdom to form a coalition with other Allies against the Japanese. By April 1945, the Allies had driven out the Japanese. Subsequently, negotiations began between the Burmese and the British for independence. Under the Japanese occupation, 170,000 to 250,000 civilians died.
Some Burmese nationalists saw the outbreak of World War II as an opportunity to extort concessions from the British in exchange for support in the war effort. Other Burmese, such as the Thakin movement, opposed Burma's participation in the war under any circumstances. Aung San with other Thakins founded the Communist Party of Burma (CPB) in August 1939. Aung San also co-founded the People's Revolutionary Party (PRP), renamed the Socialist Party after World War II. He was also instrumental in founding the Freedom Bloc by forging an alliance of Dobama Asiayone, ABSU, politically active monks and Ba Maw's Poor Man's Party.
After Dobama Asiayone called for a national uprising, an arrest warrant was issued for many of the organisation's leaders including Aung San, who escaped to China. Aung San's intention was to make contact with the Chinese Communists but he was detected by the Japanese authorities who offered him support by forming a secret intelligence unit called the Minami Kikan, headed by Colonel Suzuki with the objective of closing the Burma Road and supporting a national uprising.
Aung San briefly returned to Burma to enlist twenty-nine young men who went to Japan with him to receive military training on Hainan, China, and they came to be known as the "Thirty Comrades". When the Japanese occupied Bangkok in December 1941, Aung San announced the formation of the Burma Independence Army (BIA) in anticipation of the Japanese invasion of Burma in 1942.
For Japan's military leadership, the conquest of Burma was a vital strategic objective upon the opening of hostilities with Britain and the United States. Occupation of Burma would interrupt a critical supply link to China. Also, the Japanese knew that rubber was one of the few militarily vital resources that the United States was not self-sufficient in. It was thought critical that the Allies be denied access to Southeast Asian rubber supplies if they were ever to accept peace terms favourable to Japan.
The BIA formed a provisional government in some areas of the country in the spring of 1942, but there were differences within the Japanese leadership over the future of Burma. While Colonel Suzuki encouraged the Thirty Comrades to form a provisional government, the Japanese military leadership had never formally accepted such a plan. Eventually, the Japanese Army turned to Ba Maw to form a government.
During the war in 1942, the BIA had grown in an uncontrolled manner, and in many districts officials and even criminals appointed themselves to the BIA. It was reorganised as the Burma Defence Army (BDA) under the Japanese but still headed by Aung San. While the BIA had been an irregular force, the BDA was recruited by selection and trained as a conventional army by Japanese instructors.
Ba Maw was afterwards declared head of state, and his cabinet included both Aung San as War Minister and the Communist leader Thakin Than Tun as Minister of Land and Agriculture as well as the Socialist leaders Thakins Nu and Mya. When the Japanese declared Burma, in theory, independent in 1943, the Burma Defence Army (BDA) was renamed the Burma National Army (BNA).
It soon became apparent that Japanese promises of independence were merely a sham and that Ba Maw had been deceived. As the war turned against the Japanese, they declared Burma a fully sovereign state on 1 August 1943, but this was just another façade. Disillusioned, Aung San began negotiations with the Communist leaders Thakin Than Tun and Thakin Soe, and the Socialist leaders Ba Swe and Kyaw Nyein, which led to the formation of the Anti-Fascist Organisation (AFO) in August 1944 at a secret meeting of the CPB, the PRP and the BNA in Pegu. The AFO was later renamed the Anti-Fascist People's Freedom League (AFPFL); it roundly opposed Japanese fascism and proposed a fairer and more equal society.
Thakins Than Tun and Soe, while in Insein prison in July 1941, had co-authored the Insein Manifesto which, against the prevailing opinion in the Dobama movement, identified world fascism as the main enemy in the coming war and called for temporary co-operation with the British in a broad allied coalition which should include the Soviet Union. Soe had already gone underground to organise resistance against the Japanese occupation, and Than Tun was able to pass on Japanese intelligence to Soe, while other Communist leaders Thakins Thein Pe and Tin Shwe made contact with the exiled colonial government in Simla, India.
Massacre during the Occupation
The Japanese had entered the Kalagong village and rounded up all the inhabitants for questioning by the members of the 3rd Battalion, 215th Regiment and the OC Moulmein Kempeitai of the Imperial Japanese Army. These units had been ordered by Major General Seiei Yamamoto, chief of staff of the 33rd Army. An estimated 600 Burmese villagers died in the Kalagong massacre.
End of the Occupation
There were informal contacts between the AFO and the Allies in 1944 and 1945 through the British Force 136. On 27 March 1945, the Burma National Army rose up in a country-wide rebellion against the Japanese. 27 March had been celebrated as 'Resistance Day' until the military renamed it 'Tatmadaw (Armed Forces) Day'. Aung San and others subsequently began negotiations with Lord Mountbatten and officially joined the Allies as the Patriotic Burmese Forces (PBF). At the first meeting, the AFO represented itself to the British as the provisional government of Burma with Thakin Soe as Chairman and Aung San as a member of its ruling committee.
The Japanese were routed from most of Burma by May 1945. Negotiations then began with the British over the disarming of the AFO and the participation of its troops in a post-war Burma Army. Some veterans had been formed into a paramilitary force under Aung San, called the Pyithu yèbaw tat or People's Volunteer Organisation (PVO), and were openly drilling in uniform. The absorption of the PBF was concluded successfully at the Kandy conference in Ceylon in September 1945.
- Battle of Meiktila / Mandalay
- Battle of the Admin Box
- Burma Campaign
- Burma Road
- China Burma India Theater of World War II
- Force 136
- Japanese conquest of Burma
- Japanese invasion money (Burma)
- Louis Mountbatten, 1st Earl Mountbatten of Burma
- Merrill's Marauders
- Operation Capital
- Operation Dracula
- State of Burma
- William Slim, 1st Viscount Slim
- Women's Auxiliary Service (Burma)
| https://en.wikipedia.org/wiki/Japanese_occupation_of_Burma |
4.0625 | Sight-reading Teacher Resources
Find Sight-reading educational ideas and activities
SIGHT READING RHYTHM PATTERNS
Here, the perception of rhythms by reading, and the ability to aurally discriminate rhythm patterns performed by the teacher, are practiced. Your students will work to create an eight-beat-long rhythm pattern.
1st - 2nd Visual & Performing Arts
Music Tutor (Sight Reading Improver)
Elegant in its simplicity, this app accomplishes precisely what it sets out to do: improving the user's sight reading of musical notes. Taking the concept of flashcards to the next level, the designers also add in the element of sound so...
1st - Higher Ed Visual & Performing Arts
Keyboard and the Grand Staff
Young scholars explore the treble and bass staffs, naming the lines, spaces, and locating middle C. Individually, students complete a word game and then compare their answers with a partner. They sight-read the grand staff and explore...
6th - 8th Visual & Performing Arts
The Day The Earth Stood Still: The Filmmaking Process
How are films made? As part of their study of film, middle schoolers investigate the pre-production, production, and post-production process and consider the role of the director, the screenwriter, production designer, cinematographer,...
6th - 10th Visual & Performing Arts CCSS: Adaptable
The American Five - Pentatonic scales in early American melodies
Through vocal warm-ups and exercises, budding musicians will attempt to grasp the five pentatonic scales, commonly used in early American songs. They'll sing and work to identify the pitch, tone, melody, and scales being expressed in the...
9th - 12th Visual & Performing Arts
Musical Sleuth: Musical Manuscript
Seventh graders explore an original musical manuscript. In this music instructional activity students look at an original musical manuscript and analyze the notation system that the composer used. The students also transcribe part of the...
7th Visual & Performing Arts
Admirable Armonica Admirers
What do Ben Franklin and Wolfgang Mozart have in common? Find out about the musical invention, the armonica or glassy-chord. Learners will read about how Ben Franklin invented this new instrument and how Wolfgang Mozart came to play it....
1st - 6th Visual & Performing Arts | http://www.lessonplanet.com/lesson-plans/sight-reading |
4.0625 | Charles Annenberg Weingarten, Tom Pollock, Roger Jackson, Liz Marks, explore.org
Video length is 24:06 min.
Middle School: 8 Disciplinary Core Ideas
High School: 5 Disciplinary Core Ideas
Notes From Our Reviewers
The CLEAN collection is hand-picked and rigorously reviewed for scientific accuracy and classroom effectiveness.
Read what our review team had to say about this resource below, or learn more about how CLEAN reviews teaching materials.
Teaching Tips
- While this is a long video (>24 minutes), educators will find it useful in conveying the human dimensions of climate change.
- Teachers could break the video into chunks and conduct class discussions.
About the Science
- Both anecdotal and expert evidence is given.
- Comments from expert scientist: The video beautifully reveals some startling facts about the rapidly changing climatic conditions of the Arctic and their direct effect on the local human population, the Inuit of the Canadian Arctic and Greenland. The presence of inhabitants and the large water mass (the Arctic Ocean) at the north polar region makes the place even more sensitive to climate change than its southern counterpart, Antarctica.
About the Pedagogy
- This video is unique in that it uses an interdisciplinary approach to describe how climate change is impacting oceans, land, ecosystems, and cultures. Scientists and native people's perspectives are presented in a seamless manner.
- Discussion questions and background material are at http://files.explore.org/files/explore-LP_Arctic-Science_.pdf. There are also some background links on the webpage with the video.
Technical Details/Ease of Use
- This is both a beautiful and disturbing video. It uses maps effectively to show where the host and speakers are.
- Requires a login to download video.
- The audio was somewhat quiet in places.
Related URLs (these related sites were noted by our reviewers but have not been reviewed by CLEAN): http://files.explore.org/files/explore-LP_Arctic-Science_.pdf
Next Generation Science Standards supported by this video:
Disciplinary Core Ideas: 8
MS-LS2.C1:Ecosystems are dynamic in nature; their characteristics can vary over time. Disruptions to any physical or biological component of an ecosystem can lead to shifts in all its populations.
MS-LS2.C2:Biodiversity describes the variety of species found in Earth’s terrestrial and oceanic ecosystems. The completeness or integrity of an ecosystem’s biodiversity is often used as a measure of its health
MS-LS4.D1:Changes in biodiversity can influence humans’ resources, such as food, energy, and medicines, as well as ecosystem services that humans rely on—for example, water purification and recycling.
MS-ESS2.C1:Water continually cycles among land, ocean, and atmosphere via transpiration, evaporation, condensation and crystallization, and precipitation, as well as downhill flows on land.
MS-ESS2.C2:The complex patterns of the changes and the movement of water in the atmosphere, determined by winds, landforms, and ocean temperatures and currents, are major determinants of local weather patterns.
MS-ESS2.D1:Weather and climate are influenced by interactions involving sunlight, the ocean, the atmosphere, ice, landforms, and living things. These interactions vary with latitude, altitude, and local and regional geography, all of which can affect oceanic and atmospheric flow patterns.
MS-ESS3.C1:Human activities have significantly altered the biosphere, sometimes damaging or destroying natural habitats and causing the extinction of other species. But changes to Earth’s environments can have different impacts (negative and positive) for different living things.
MS-ESS3.D1:Human activities, such as the release of greenhouse gases from burning fossil fuels, are major factors in the current rise in Earth’s mean surface temperature (global warming). Reducing the level of climate change and reducing human vulnerability to whatever climate changes do occur depend on the understanding of climate science, engineering capabilities, and other kinds of knowledge, such as understanding of human behavior and on applying that knowledge wisely in decisions and activities.
Disciplinary Core Ideas: 5
HS-ESS2.D1:The foundation for Earth’s global climate systems is the electromagnetic radiation from the sun, as well as its reflection, absorption, storage, and redistribution among the atmosphere, ocean, and land systems, and this energy’s re-radiation into space.
HS-ESS3.C1:The sustainability of human societies and the biodiversity that supports them requires responsible management of natural resources.
HS-ESS3.D1:Though the magnitudes of human impacts are greater than they have ever been, so too are human abilities to model, predict, and manage current and future impacts.
HS-LS2.C2:Moreover, anthropogenic changes (induced by human activity) in the environment—including habitat destruction, pollution, introduction of invasive species, overexploitation, and climate change—can disrupt an ecosystem and threaten the survival of some species.
HS-LS4.D2:Biodiversity is increased by the formation of new species (speciation) and decreased by the loss of species (extinction). | http://cleanet.org/resources/44662.html |
4.25 | Encyclopedia of the Great Black Migration, by Steven Reich
Describes the movement of Southern African Americans to the urban North and West in the broadest social, economic, cultural, and most importantly, political context. Entries provide students and researchers with information about the key people, places, organizations, and events that defined the era of the migration, from 1900 to the 1990s. Each entry provides cross-listings to related entries, suggested readings for further information, and refers readers to relevant Web sites and archival collections. The encyclopedia draws on the expertise of leading scholars in African American history, providing entries that incorporate the interpretations and insights of recent scholarship. Encyclopedia contributors portray the migrants not as composite characters, but as individuals enmeshed in a complex web of relationships who negotiated difficult circumstances and assumed enormous risks to migrate. Migrants did not shed their southern past and become northerners as soon as they arrived at Chicago's Union Station. Rather, the migration of black southerners to the North that began during the World War I era was part of a much larger and longer process by which southern blacks had long migrated within the South in search of social, economic, and political justice. Understanding the Great Migration partly as a critical chapter in the history of the South, the encyclopedia devotes space to the social, economic, and political conditions in the South prior to World War I. It also examines how war and migration transformed the South as profoundly as it changed the dynamics of life in the North. Since nearly half of those who migrated north during the period did so during the era of the Great War,several entries emphasize how America's mobilization for World War I not only fostered the migration but sharpened black critiques of the social and political order of the era. Entries on the draft, military service, changing labor markets, and the uneven expansion of federal power, for example, demonstrate how black Americans-- migrants, industrial workers, farmers, domestic servants, men and women, political organizers, and editors--spied possibilities for meaningful change in the era of the First World War. Other entries capture ways in which the war and migration opened fissures and debates within local black communities, South and North; describe the extent and intensity of white, conservative reaction to the migration; explore the family dynamics of the migration; and identify the multiple concerns in addition to the search for work that confronted migrants: finding places to live, establishing childcare arrangements, seeking a place to worship, and maintaining long-distance kinship networks. Other entries convey how blacks described these years through song, art, and fiction and explain the ways in which black migrants encountered not only new worlds of work and politics, but new worlds of leisure and consumption.
- Greenwood Publishing Group, Incorporated
- Publication date:
- Greenwood Milestones in African American History Series
Meet the Author
STEVEN A. REICH is Associate Professor of History at James Madison University. He received his Ph.D. in 1998 from Northwestern University in Evanston, Illinois. He was the 1998/1999 Summerlee Research Fellow at the Clements Center for Southwest Studies at Southern Methodist University before coming to James Madison University. He is the author of "Soldiers of Democracy: Black Texans and the Fight for Citizenship, 1917-1921," which appeared in the Journal of American History in 1996 and won the Organization of American Historians' Louis M. Pelzer Memorial Award. He is currently completing a book-length study of the social world of early twentieth-century southern lumber camps and sawmills.
| http://www.barnesandnoble.com/w/encyclopedia-of-the-great-black-migration-steven-a-reich/1103025649 |
4.25 | Norm (philosophy)
Norms are concepts (sentences) of practical import, oriented to effecting an action, rather than conceptual abstractions that describe, explain, and express. Normative sentences imply "ought-to" types of statements and assertions, in distinction to sentences that provide "is" types of statements and assertions. Common normative sentences include commands, permissions, and prohibitions; common normative abstract concepts include sincerity, justification, and honesty. A popular account of norms describes them as reasons to take action, to believe, and to feel.
Types of norms
Orders and permissions express norms. Such norm sentences do not describe how the world is, they rather prescribe how the world should be. Imperative sentences are the most obvious way to express norms, but declarative sentences also may be norms, as is the case with laws or 'principles'. Generally, whether an expression is a norm depends on what the sentence intends to assert. For instance, a sentence of the form "All Ravens are Black" could on one account be taken as descriptive, in which case an instance of a white raven would contradict it, or alternatively "All Ravens are Black" could be interpreted as a norm, in which case it stands as a principle and definition, so 'a white raven' would then not be a raven.
Those norms purporting to create obligations (or duties) and permissions are called deontic norms (see also deontic logic). The concept of deontic norm is already an extension of a previous concept of norm, which would only include imperatives, that is, norms purporting to create duties. The understanding that permissions are norms in the same way was an important step in ethics and philosophy of law.
In addition to deontic norms, many other varieties have been identified. For instance, some constitutions establish the national anthem. These norms do not directly create any duty or permission. They create a "national symbol". Other norms create nations themselves or political and administrative regions within a nation. The action orientation of such norms is less obvious than in the case of a command or permission, but is essential for understanding the relevance of issuing such norms: When a folk song becomes a "national anthem" the meaning of singing one and the same song changes; likewise, when a piece of land becomes an administrative region, this has legal consequences for many activities taking place on that territory; and without these consequences concerning action, the norms would be irrelevant. A more obviously action-oriented variety of such constitutive norms (as opposed to deontic or regulatory norms) establishes social institutions which give rise to new, previously inexistent types of actions or activities (a standard example is the institution of marriage without which "getting married" would not be a feasible action; another is the rules constituting a game: without the norms of soccer, there would not exist such an action as executing an indirect free kick).
Any convention can create a norm, although the relation between both is not settled.
There is a significant discussion about (legal) norms that give someone the power to create other norms. They are called power-conferring norms or norms of competence. Some authors argue that they are still deontic norms, while others argue for a close connection between them and institutional facts (see Raz 1975, Ruiter 1993).
Games completely depend on norms. The fundamental norm of many games is the norm establishing who wins and loses. In other games, it is the norm establishing how to score points.
One major characteristic of norms is that, unlike propositions, they are not descriptively true or false, since norms do not purport to describe anything, but to prescribe, create or change something. Some people say they are "prescriptively true" or false. Whereas the truth of a descriptive statement is purportedly based on its correspondence to reality, some philosophers, beginning with Aristotle, assert that the (prescriptive) truth of a prescriptive statement is based on its correspondence to right desire. Other philosophers maintain that norms are ultimately neither true nor false, but only successful or unsuccessful (valid or invalid), as their propositional content obtains or not (see also John Searle and speech act).
There is an important difference between norms and normative propositions, although they are often expressed by identical sentences. "You may go out." usually expresses a norm if it is uttered by the teacher to one of the students, but it usually expresses a normative proposition if it is uttered to one of the students by one of his or her classmates. Some ethical theories reject that there can be normative propositions, but these are accepted by cognitivism. One can also think of propositional norms; assertions and questions arguably express propositional norms (they set a proposition as asserted or questioned).
Another purported feature of norms, it is often argued, is that they never regard only natural properties or entities. Norms always bring something artificial, conventional, institutional or "unworldly". This might be related to Hume's assertion that it is not possible to derive ought from is and to G.E. Moore's claim that there is a naturalistic fallacy when one tries to analyse "good" and "bad" in terms of a natural concept. In aesthetics, it has also been argued that it is impossible to derive an aesthetical predicate from a non-aesthetical one. The acceptability of non-natural properties, however, is strongly debated in present day philosophy. Some authors deny their existence, some others try to reduce them to natural ones, on which the former supervene.
Other thinkers (Adler, 1986) assert that norms can be natural in a different sense than that of "corresponding to something proceeding from the object of the prescription as a strictly internal source of action". Rather, those who assert the existence of natural prescriptions say norms can suit a natural need on the part of the prescribed entity. More to the point, however, is the putting forward of the notion that just as descriptive statements being considered true are conditioned upon certain self-evident descriptive truths suiting the nature of reality (such as: it is impossible for the same thing to be and not be at the same time and in the same manner), a prescriptive truth can suit the nature of the will through the authority of it being based upon self-evident prescriptive truths (such as: one ought to desire what is really good for one and nothing else).
Recent works maintain that normativity has an important role in several different philosophical subjects, not only in ethics and philosophy of law (see Dancy, 2000).
- Deontic logic
- Law (principle)
- Norm (sociology)
- Normative ethics
- Philosophy of law
- Rule of law
- Rule according to higher law
- Speech act
- Adler, Mortimer (1985), Ten Philosophical Mistakes, MacMillan, New York.
- Aglo, John (1998), Norme et symbole: les fondements philosophiques de l'obligation, L'Harmattan, Paris.
- Aglo, John (2001), Les fondements philosophiques de la morale dans une société à tradition orale, L'Harmattan, Paris.
- Alexy, Robert (1985), Theorie der Grundrechte, Suhrkamp, Frankfurt a. M.. Translation: A Theory of Constitutional Rights, Oxford University Press, Oxford: 2002.
- Bicchieri, Cristina (2006), The Grammar of Society: the Nature and Dynamics of Social Norms, Cambridge University Press, Cambridge.
- Dancy, Jonathan (ed) (2000), Normativity, Blackwell, Oxford.
- Garzón Valdés, Ernesto et al. (eds) (1997), Normative Systems in Legal and Moral Theory: Festschrift for Carlos E. Alchourrón and Eugenio Bulygin, Duncker & Humblot, Berlin.
- Korsgaard, Christine (2000), The Sources of Normativity, Cambridge University, Cambridge.
- Raz, Joseph (1975, 1990), Practical Reason and Norms, Oxford University Press, Oxford; 2nd edn 1990.
- Rosen, Bernard (1999), The Centrality of Normative Ethical Theory, Peter Lang, New York.
- Ruiter, Dick (1993), Institutional Legal Facts: Legal Powers and their Effects, Kluwer, Dordrecht.
- von Wright, G. H. (1963), Norm and Action: a Logical Enquiry, Routledge & Kegan Paul, London. | https://en.wikipedia.org/wiki/Norm_(philosophy) |
4.25 | Active Listening
Active listening is a dynamic process that includes:
- Paying attention to what another person is saying.
- Thinking about what the person has just said.
- Responding in a way that lets the person know that you understood what he or she was trying to say.
Hearing is different from listening. Hearing is a physical process. A person can hear what another person is saying without listening to the message.
Listening is an active process of thinking about the meaning of the message that was heard. Sometimes two people do not interpret what they hear in the same way. A person's interpretation may vary according to personal values, beliefs, and past experiences.
Active listening requires the listener to check with the speaker to make sure that the message is interpreted in the way it was intended. To listen actively, a person needs to pay attention to the behaviors and tone of the speaker.
Active listening takes practice. When you want to actively listen to someone:
- Provide privacy. When a person wants to talk about someone important to him or her, privacy may be essential. Find a quiet corner if no private place is available and talk in a low voice to help the person feel secure. Teens in particular need to feel that their conversations about important matters are kept private and confidential.
- Reduce distractions. When listening to a person speak, turn off radios, televisions, and other noisy devices. Remove any articles that may distract you or the speaker. Do not try to do other things while you are listening.
- Be present. Being present means listening to what the other person says and accepting the other person's thoughts and feelings even when they are different from yours. Being present also means not thinking about other things while the person is talking and resisting any urge to interrupt, judge, or argue with the speaker about his or her views.
- Show that you are listening. Nod your head periodically and show your interest in what's being said by saying "please continue," "yes," or "tell me more."
When actively listening to a teen, it is important to understand that teens often think others are watching and judging them. They may need reassurance that you are listening and that you are not judging them. It is also important to be genuine with teens. They can spot an insincere adult. Do not try to be a buddy with a teen. Teens do not like it when adults in their lives try to act like teens themselves.
When listening to teens, pay close attention to how the teen is describing the situation. Make a mental note if you think he or she does not understand what is happening. When the timing seems right, clarify any misunderstandings the teen has about the situation.
| http://www.uwhealth.org/health/topic/special/listen-and-respond/aa129058.html |
4.1875 | Over the years many hints have emerged that there might be life beyond Earth. New Scientist looks at 10 of the most hotly-debated discoveries.
Tests performed on Martian soil samples by NASA’s Viking landers hinted at chemical evidence of life. One experiment mixed soil with radioactive-carbon-labelled nutrients and then tested for the production of radioactive methane gas.
The test reported a positive result. The production of radioactive methane suggested that something in the soil was metabolising the nutrients and producing radioactive gas. But other experiments on board failed to find any evidence of life, so NASA declared the result a false positive.
Despite that, one of the original scientists – and others who have since re-analysed the data – still stand by the finding. They argue that the other experiments on board were ill-equipped to search for evidence of the organic molecules – a key indicator of life.
In August 1977 an Ohio State University radio telescope detected an unusual pulse of radiation from somewhere near the constellation Sagittarius. The 37-second-long signal was so startling that an astronomer monitoring the data scrawled “Wow!” on the telescope’s printout.
The signal was within the band of radio frequencies where transmissions are internationally banned on Earth. Furthermore, natural sources of radiation from space usually cover a wider range of frequencies.
As the nearest star in that direction is 220 light years away, either a massive astronomical event or intelligent aliens with a very powerful transmitter would have had to create it. The signal remains unexplained.
NASA scientists controversially announced in 1996 that they had found what appeared to be fossilised microbes in a potato-shaped lump of Martian rock. The meteorite was probably blasted off the surface of Mars in a collision, and wandered the solar system for some 15 million years, before plummeting to Antarctica, where it was discovered in 1984.
Careful analysis revealed that the rock contained organic molecules and tiny specs of the mineral magnetite, sometimes found in Earth bacteria. Under the electron microscope, NASA researchers also claimed to have spotted signs of “nanobacteria”.
But since then much of the evidence has been challenged. Other experts have suggested that the particles of magnetite were not so similar to those found in bacteria after all, and that contaminants from Earth are the source of the organic molecules. A 2003 study also showed how crystals that resemble nanobacteria could be grown in the laboratory by chemical processes.
In 1961 US radio astronomer Frank Drake developed an equation to help estimate the number of planets hosting intelligent life – and capable of communicating with us – in the galaxy.
The Drake equation multiplies together seven factors including: the formation rate of stars like our Sun, the fraction of Earth-like planets and the fraction of those on which life develops. Many of these figures are open to wide debate, but Drake himself estimates the final number of communicating civilisations in the galaxy to be about 10,000.
In 2001, a more rigorous estimate of the number of life-bearing planets in the galaxy – using new data and theories – came up with a figure of hundreds of thousands. For the first time, the researchers estimated how many planets might lie in the “habitable zone” around stars, where water is liquid and photosynthesis possible. The results suggest that an inhabited Earth-like planet could be as little as a few hundred light years away.
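The calculation itself is a single product of the seven factors, N = R* x fp x ne x fl x fi x fc x L. The values below are illustrative assumptions only (most of the factors are unknown and hotly debated); they are chosen to land near Drake's own estimate of about 10,000:

```python
# Drake equation: N = R* * fp * ne * fl * fi * fc * L
# All values below are illustrative guesses, not measured quantities.
R_star = 10      # rate of formation of Sun-like stars (stars per year)
f_p = 0.5        # fraction of those stars that have planets
n_e = 2          # habitable (Earth-like) planets per planetary system
f_l = 1.0        # fraction of habitable planets on which life develops
f_i = 0.1        # fraction of life-bearing planets that evolve intelligence
f_c = 0.1        # fraction of intelligent species able to communicate
L = 100_000      # years a communicating civilisation survives

N = R_star * f_p * n_e * f_l * f_i * f_c * L
print(N)  # 10000.0 communicating civilisations in the galaxy
```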
Alien microbes might be behind Europa’s red tinge, suggested NASA researchers in 2001. Though the surface is mostly ice, data shows it reflects infrared radiation in an odd manner. That suggests that something – magnesium salts perhaps – are binding it together. But no one has been able to come up with the right combination of compounds to make sense of the data.
Intriguingly, the infrared spectra of some Earthly bacteria – those that thrive in extreme conditions – fits the data at least as well as magnesium salts. Plus, some are red and brown in colour, perhaps explaining the moon’s ruddy complexion. Though bacteria might find it difficult to survive in the scant atmosphere and -170°C surface temperature of Europa, they might survive in the warmer liquid interior. Geological activity could then spew them out periodically to be flash frozen on the surface.
In 2002 Russian astrobiologists claimed that the super-hardy microbe Deinococcus radiodurans evolved on Mars. The microbe can survive several thousand times the radiation dose that would kill a human.
The Russians zapped a population of the bacteria with enough radiation to kill 99.9%, allowed the survivors to repopulate, and then repeated the cycle. After 44 rounds, the bacteria withstood 50 times the original dose of radiation. The researchers calculated that it would take many thousands of these cycles to make the common microbe E. coli as resilient as Deinococcus. And on Earth it takes between a million and 100 million years to encounter each dose of radiation. Therefore there simply has not been enough time in life's 3.8-billion-year history on Earth for such resistance to have evolved, they claim.
By contrast, the surface of Mars, unprotected by a dense atmosphere, is bombarded with so much radiation that the bugs could receive the same dose in just a few hundred thousand years. The researchers argue that Deinococcus’s ancestors were flung off of Mars by an asteroid and fell to Earth on meteorites. Other experts remain sceptical.
Life in Venus’ clouds may be the best way to explain some curious anomalies in the composition of its atmosphere, claimed University of Texas astrobiologists in 2002. They scoured data from NASA’s Pioneer and Magellan space probes and from Russia’s Venera Venus-lander missions of the 1970s.
Solar radiation and lightning should be generating masses of carbon monoxide on Venus, yet it is rare, as though something is removing it. Hydrogen sulphide and sulphur dioxide are both present too. These readily react together, and are not usually found co-existing, unless some process constantly is churning them out. Most mysterious is the presence of carbonyl sulphide. This is only produced by microbes or catalysts on Earth, and not by any other known inorganic process.
The researchers’ suggested solution to this conundrum is that microbes live in the Venusian atmosphere. Venus’s searing hot, acidic surface may be prohibitive to life, but conditions 50 kilometres up in the atmosphere are more hospitable and moist, with a temperature of 70°C and a pressure similar to Earth.
In 2003, Italian scientists hypothesised that sulphur traces on Europa might be a sign of alien life. The compounds were first detected by the Galileo space probe, along with evidence for a volcanically-warmed ocean beneath the moon’s icy crust.
The sulphur signatures look similar to the waste-products of bacteria, which get locked into the surface ice of lakes in Antarctica on Earth. The bacteria survive in the water below, and similar bacteria might also thrive below Europa’s surface, the researchers suggest. Others experts rejected the idea, suggesting that the sulphur somehow originates from the neighbouring moon Io, where it is found in abundance.
In 2004 three groups – using telescopes on Earth and the European Space Agency’s Mars Express orbiting space probe – independently turned up evidence of methane in the atmosphere. Nearly all methane in our own atmosphere is produced by bacteria and other life.
Methane could also be generated by volcanism, the thawing of frozen underground deposits, or delivered by comet impacts. However, the source has to be recent, as the gas is rapidly destroyed on Mars or escapes into space.
In January 2005, an ESA scientist controversially announced that he had also found evidence of formaldehyde, produced by the oxidation of methane. If this is proved it will strengthen the case for microbes, as a whopping 2.5 million tonnes of methane per year would be required to create the quantity of formaldehyde postulated to exist.
There are ways to confirm the presence of the gas, but scientists will need to get the equipment to Mars first.
In February 2003, astronomers with the search for extraterrestrial intelligence (SETI) project, used a massive telescope in Puerto Rico to re-examine 200 sections of the sky which had all previously yielded unexplained radio signals. These signals had all disappeared, except for one which had become stronger.
The signal – widely thought to be the best candidate yet for an alien contact – comes from a spot between the constellations Pisces and Aries, where there are no obvious stars or planets. Curiously, the signal is at one of the frequencies that hydrogen, the most common element, absorbs and emits energy. Some astronomers believe that this is a very likely frequency at which aliens wishing to be noticed would transmit.
Nevertheless, there is also a good chance the signal is from a never-seen-before natural phenomenon. For example, an unexplained pulsed radio signal, thought to be artificial in 1967, turned out to be the first ever sighting of a pulsar.
| https://www.newscientist.com/article/dn9943-top-10-controversial-pieces-of-evidence-for-extraterrestrial-life |
4.125 |
The GI tract is composed of four layers. Each layer has different tissues and functions. From the inside out, they are called the mucosa, submucosa, muscularis, and serosa. The mucosa is the absorptive and secretory layer. It is composed of simple epithelial cells and a thin layer of connective tissue. Specialized goblet cells, which secrete mucus throughout the GI tract, are located within the mucosa. The mucosal layer also bears villi and microvilli.
The mucosa is the innermost layer of the gastrointestinal tract that is surrounding the lumen, or open space within the tube. This layer comes in direct contact with digested food (chyme). The mucosa is made up of three layers: epithelium, lamina propria, and muscularis mucosae. The epithelium is the innermost layer and it is responsible for most digestive, absorptive, and secretory processes. The lamina propria is a layer of connective tissue that is unusually cellular compared to most connective tissue. The muscularis mucosae is a thin layer of smooth muscle and its function is still under debate.
The mucosae are highly specialized in each organ of the gastrointestinal tract to deal with the different conditions. The most variation is seen in the epithelium. In the oesophagus, the epithelium is stratified, squamous, and non-keratinizing, for protective purposes. In the stomach it is simple columnar and is organized into gastric pits and glands to deal with secretion. The gastro-oesophageal junction is extremely abrupt. The small intestine epithelium (particularly the ileum) is specialized for absorption; it is organized into plicae circulares and villi, and the enterocytes have microvilli. This creates a brush border which greatly increases the surface area for absorption. The epithelium is simple columnar with microvilli. In the ileum there are occasionally Peyer's patches in the lamina propria. The colon has simple columnar epithelium with no villi. There are goblet cells. The appendix has a mucosa resembling the colon but is heavily infiltrated with lymphocytes. The ano-rectal junction (at the pectinate line) is again very abrupt; there is a transition from simple columnar to stratified squamous non-keratinizing epithelium (as in the oesophagus) for protective purposes.
| https://www.boundless.com/physiology/textbooks/299/the-digestive-system-23/layers-of-the-gi-tract-219/mucosa-1072-1756/
This ingenious device, designed by Hermann von Helmholtz (1821-1894), was the very first sound synthesizer: a tool for studying and artificially recreating musical tones and the sounds of human speech.
Suppose I sing the word 'car' and then on the same note sing 'we'. The two vowel sounds will be similar in so far as they have the same pitch, yet they have a clearly distinct sound quality, or timbre. What is it that accounts for this difference, and the timbres of musical sounds in general? Helmholtz set out to answer this very question in the mid 19th century, building on the work of the Dutch scientist Franz Donders (1818-1889).
Helmholtz showed that the timbre of musical notes, and of vowel sounds, is a result of their complexity: just as seemingly pure white light actually contains all the colors of the rainbow, clearly defined musical notes are composed of many different tones. If you play the A above middle C on an organ, for example, the sound you hear has a clearly defined "fundamental" pitch of 440Hz. But the sound contains not only this fundamental vibration at 440Hz; it also contains a "harmonic series" of whole-number multiples of this frequency, called "overtones" (i.e., 880Hz, 1320Hz, 1760Hz, etc.). Helmholtz proved, using his synthesizer, that it is this combination of overtones at varying levels of intensity that gives musical tones, and vowel sounds, their particular sound quality, or timbre.
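The same demonstration is easy to reproduce digitally. The short Python sketch below sums a 440Hz fundamental with its first few overtones; the amplitude values are purely illustrative assumptions (not measurements of any real instrument), chosen only to show how a recipe of harmonic strengths defines a timbre.

```python
import numpy as np

SAMPLE_RATE = 44100   # audio samples per second
FUNDAMENTAL = 440.0   # Hz: the A above middle C discussed above

def tone(duration=1.0, amplitudes=(1.0, 0.5, 0.3, 0.2, 0.1, 0.05, 0.02)):
    """Sum a fundamental and its overtones (integer multiples of 440Hz).

    amplitudes[k] is the relative strength of the (k+1)-th harmonic;
    the default values are illustrative, not measured from an instrument.
    """
    t = np.linspace(0.0, duration, int(SAMPLE_RATE * duration), endpoint=False)
    signal = np.zeros_like(t)
    for k, a in enumerate(amplitudes, start=1):
        # k-th harmonic vibrates at k * 440Hz (440, 880, 1320, 1760, ...)
        signal += a * np.sin(2 * np.pi * FUNDAMENTAL * k * t)
    return signal / np.max(np.abs(signal))  # normalise to avoid clipping

# The overtone frequencies named in the text:
print([FUNDAMENTAL * k for k in range(1, 5)])  # [440.0, 880.0, 1320.0, 1760.0]
```

Changing the amplitude tuple while keeping the fundamental fixed changes the timbre but not the pitch, which is exactly the effect Helmholtz produced by opening and closing the shutters on his resonators.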
Helmholtz's apparatus uses tuning forks, renowned for their very pure tone, to generate a fundamental frequency and the first six overtones, which may then be combined in varying proportions. The tuning forks are made to vibrate using electromagnets, and the sound of each fork may be amplified by means of a Helmholtz resonator with an adjustable shutter operated mechanically by a keyboard.
By varying the relative intensities of the overtones, Helmholtz was able to simulate sounds of various timbres and, in particular, recreate and understand the nature of the vowel sounds of human speech and singing. Vowel sounds are created by the resonances of the vocal tract, with each vowel defined by two or three resonant frequencies known as formants. When we say or sing 'a' (as in 'had'), for instance, the vocal tract amplifies frequencies close to 800Hz, 1800Hz and 2400Hz amongst others. When we require a different vowel sound, the muscles of the throat and mouth change the shape of the vocal tract, producing a different set of resonances.
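In the same spirit, here is a minimal sketch of vowel synthesis: each harmonic of a sung pitch is weighted by how close it lies to the formants quoted above for 'a' (as in 'had'). The bell-shaped resonance curve, the bandwidths, and the 110Hz pitch are rough assumptions standing in for a real vocal tract, not measured data.

```python
import numpy as np

SAMPLE_RATE = 44100
PITCH = 110.0  # Hz; an illustrative singing pitch (assumption)

# Approximate formant centres for 'a' as in 'had', from the text above,
# paired with guessed bandwidths in Hz (assumptions).
FORMANTS = [(800.0, 80.0), (1800.0, 120.0), (2400.0, 150.0)]

def formant_gain(freq):
    """Crude bell-shaped resonance curve; a stand-in for the vocal tract."""
    return sum(np.exp(-0.5 * ((freq - f0) / bw) ** 2) for f0, bw in FORMANTS)

def vowel(duration=1.0):
    """Weight each harmonic of the pitch by its proximity to the formants."""
    t = np.linspace(0.0, duration, int(SAMPLE_RATE * duration), endpoint=False)
    signal = np.zeros_like(t)
    k = 1
    while k * PITCH < 4000.0:  # include partials up to 4kHz
        signal += formant_gain(k * PITCH) * np.sin(2 * np.pi * k * PITCH * t)
        k += 1
    return signal / np.max(np.abs(signal))
```

Swapping in a different set of formant centres changes the vowel, just as reshaping the throat and mouth does, while the pitch stays the same.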
Torben Rees, 'Helmholtz's apparatus for the synthesis of sound: an electrical 'talking machine'', Explore Whipple Collections, Whipple Museum of the History of Science, University of Cambridge, 2010 [http://www.hps.cam.ac.uk/whipple/explore/acoustics/hermanvonhelmholtz/helmholtzssynthesizer/, accessed 12 February 2016] | http://www.hps.cam.ac.uk/whipple/explore/acoustics/hermanvonhelmholtz/helmholtzssynthesizer/ |
4.03125 | Consequences of the Black Death
Consequences of the plague included a series of religious, social and economic upheavals, which had profound effects on the course of European history. The Black Death was one of the most devastating pandemics in human history, peaking in Europe between 1347 and 1350 with 30–95 percent of the entire population killed. It reduced world population from an estimated 450 million to between 350 and 375 million in the 14th century. It took 150 years, and in some areas more than 250 years, for Europe's population to recover.
From the perspective of the survivors, however, the impact was much more benign, for their labor was in higher demand. Hilton has argued that those English peasants who survived found their situation to be much improved. For English peasants the fifteenth century was a golden age of prosperity and new opportunities. Land was plentiful, wages high, and serfdom had all but disappeared. A century later, as population growth resumed, the peasants again faced deprivation and famine.
Figures for the death toll vary widely by area and from source to source as new research and discoveries come to light. The plague killed an estimated 75–200 million people in the 14th century. According to the medieval historian Philip Daileader in 2007:
The trend of recent research is pointing to a figure more like 45% to 50% of the European population dying during a four-year period. There is a fair amount of geographic variation. In Mediterranean Europe and Italy, the South of France and Spain, where plague ran for about four years consecutively, it was probably closer to 75% to 80% of the population. In Germany and England it was probably closer to 20%.
Estimates of the demographic impact of the plague in Asia are based on both population figures during this time and estimates of the disease's toll on population centers. The initial outbreak of plague in the Chinese province of Hubei in 1334 claimed up to eighty percent of the population. China had several epidemics and famines from 1200 to the 1350s and its population decreased from an estimated 125 million to 65 million in the late 14th century.
It is estimated that between one-quarter and one-third of the European population (20 million people) died from the outbreak between 1348 and 1350. Contemporary observers, such as Jean Froissart, estimated the toll to be one-third—less an accurate assessment than an allusion to the Book of Revelation meant to suggest the scope of the plague. Many rural villages were depopulated, mostly the smaller communities, as the few survivors fled to larger towns and cities. The Black Death hit the culture of towns and cities disproportionately hard, although rural areas (where most of the population lived) were also significantly affected. A few rural areas, such as Eastern Poland and Lithuania, had such low populations and were so isolated that the plague made little progress. Parts of Hungary and, in modern Belgium, the Brabant region, Hainaut, and Limbourg, as well as Santiago de Compostela, were unaffected for unknown reasons (some historians have assumed that the presence of resistant blood groups in the local population helped them resist the disease, although these regions would be touched by the second plague outbreak in 1360–63 and later during the numerous resurgences of the plague). Other areas which escaped the plague were isolated mountainous regions (e.g. the Pyrenees). Larger cities were the worst off, as population densities and close living quarters made disease transmission easier. Cities were also strikingly filthy, infested with lice, fleas, and rats, and subject to diseases related to malnutrition and poor hygiene. According to journalist John Kelly, "[w]oefully inadequate sanitation made medieval urban Europe so disease-ridden, no city of any size could maintain its population without a constant influx of immigrants from the countryside" (p. 68). The influx of new citizens facilitated the movement of the plague between communities, and contributed to the longevity of the plague within larger communities.
In Italy, Florence's population was reduced from 110,000 or 120,000 inhabitants in 1338 to 50,000 in 1351. Between 60 and 70% of the populations of Hamburg and Bremen died. In Provence, Dauphiné, and Normandy, historians observe a decrease of 60% of fiscal hearths. In some regions, two-thirds of the population was annihilated. In the town of Givry, in the Bourgogne region of France, the friar who used to record 28 to 29 funerals a year recorded 649 deaths in 1348, half of them in September. About half of Perpignan's population died in several months (only two of the eight physicians survived the plague). Over 60% of Norway's population died from 1348 to 1350. London may have lost two-thirds of its population during the 1348–49 outbreak. England lost 70% of its population, which declined from 7 million before the plague to 2 million in 1400.
All social classes were affected, although the lower classes, living together in unhealthy places, were most vulnerable. Alfonso XI of Castile was the only European monarch to die of the plague, but Peter IV of Aragon lost his wife, his daughter, and a niece in six months. Joan of England, daughter of Edward III, died in Bordeaux on her way to Castile to marry Alfonso's son, Pedro. The Byzantine Emperor lost his son, while in the kingdom of France, Joan of Navarre, daughter of Louis X le Hutin and of Margaret of Burgundy, was killed by the plague, as well as Bonne of Luxembourg, the wife of the future John II of France.
Furthermore, resurgences of the plague in later years must also be counted: in 1360–62 (the "little mortality"), in 1366–69, 1374–75, 1400, 1407, etc. The plague was not eradicated until the 19th century.
The precise demographic impact of the disease in the Middle East is very difficult to calculate. Mortality was particularly high in rural areas, including significant areas of Gaza and Syria. Many rural people fled, leaving their fields and crops, and entire rural provinces are recorded as being totally depopulated.
Surviving records in some cities reveal a devastating number of deaths. The 1348 outbreak in Gaza left an estimated 10,000 people dead, while Aleppo recorded a death rate of 500 a day during the same year. In Damascus, at the disease's peak in September and October 1348, a thousand deaths were recorded every day, with overall mortality estimated at between 25 and 38 percent. Syria lost a total of 400,000 people by the time the epidemic subsided in March 1349. In contrast to some higher mortality estimates in Asia and Europe, scholars such as John Fields of Trinity College in Dublin believe the mortality rate in the Middle East was less than one-third of the total population, with higher rates in selected areas.
Social, environmental, and economic effects
Because fourteenth-century healers were at a loss to explain the cause, Europeans turned to astrological forces, earthquakes, and the poisoning of wells by Jews as possible reasons for the plague's emergence. No one in the fourteenth century considered rat control a way to ward off the plague, and people began to believe only God's anger could produce such horrific displays. There were many attacks against Jewish communities. In February 1349, 2,000 Jews were murdered in Strasbourg. In August of the same year, the Jewish communities of Mainz and Cologne were exterminated.
Where government authorities were concerned, most monarchs instituted measures that prohibited exports of foodstuffs, condemned black market speculators, set price controls on grain, and outlawed large-scale fishing. At best, they proved mostly unenforceable. At worst, they contributed to a continent-wide downward spiral. The hardest hit lands, like England, were unable to buy grain abroad: from France because of the prohibition and from most of the rest of the grain producers because of crop failures from shortage of labour. Any grain that could be shipped was eventually taken by pirates or looters to be sold on the black market. Meanwhile, many of the largest countries, most notably England and Scotland, had been at war, using up much of their treasury and exacerbating inflation. In 1337, on the eve of the first wave of the Black Death, England and France went to war in what would become known as the Hundred Years' War. Malnutrition, poverty, disease and hunger, coupled with war, growing inflation and other economic concerns made Europe in the mid-fourteenth century ripe for tragedy.
Europe had been overpopulated before the plague, and a reduction of 30% to 50% of the population could have resulted in higher wages and more available land and food for peasants because of less competition for resources. In 1357, a third of property in London was unused due to a severe outbreak in 1348–49. However, for reasons that are still debated, population levels declined after the Black Death's first outbreak until around 1420 and did not begin to rise again until 1470, so the initial Black Death event on its own does not entirely provide a satisfactory explanation to this extended period of decline in prosperity. See Medieval demography for a more complete treatment of this issue and current theories on why improvements in living standards took longer to evolve.
Impact on peasants
The great population loss brought favourable results to the surviving peasants in England and Western Europe. There was increased social mobility, as depopulation further eroded the peasants' already weakened obligations to remain on their traditional holdings. Feudalism never recovered. Land was plentiful, wages high, and serfdom had all but disappeared. It was possible to move about and rise higher in life. Younger sons and women especially benefited. As population growth resumed however, the peasants again faced deprivation and famine.
In Eastern Europe, by contrast, renewed stringency of laws tied the remaining peasant population more tightly to the land than ever before through serfdom. Sparsely populated Eastern Europe was less affected by the Black Death and so peasant revolts were less common in the fourteenth and fifteenth centuries, not occurring in the east until the sixteenth through nineteenth centuries.
Furthermore, the plague's great population reduction brought cheaper land prices, more food for the average peasant, and a relatively large increase in per capita income among the peasantry, if not immediately, then in the coming century. Since the plague left vast areas of farmland untended, they were made available for pasture, putting more meat on the market; the consumption of meat and dairy products went up, as did the export of beef and butter from the Low Countries, Scandinavia and northern Germany. However, the upper classes often attempted to stop these changes, initially in Western Europe, and more forcefully and successfully in Eastern Europe, by instituting sumptuary laws. These regulated what people (particularly of the peasant class) could wear, so that nobles could ensure that peasants did not begin to dress and act as members of a higher class with their increased wealth. Another tactic was to fix prices and wages so that peasants could not demand more as their labour gained value. In England, the Statute of Labourers 1351 was enforced, meaning that no peasant could ask for higher wages than in 1346. This was met with varying success depending on the amount of rebellion it inspired; such a law was one of the causes of the 1381 Peasants' Revolt in England.
The rapid development of the use (an early legal forerunner of the trust) was probably one of the consequences of the Black Death, during which many landowning nobles died, leaving their realty to their widows and minor orphans.
Impact on urban workers
In the wake of the drastic population decline brought on by the plague, wages shot up and laborers could move to new localities in response to wage offers. Local and royal authorities in Western Europe instituted wage controls. These governmental controls sought to freeze wages at the levels that had prevailed before the Black Death. Within England, for example, the Ordinance of Labourers, enacted in 1349, and the Statute of Labourers, enacted in 1351, restricted both wage increases and the relocation of workers. If workers attempted to leave their current post, employers were given the right to have them imprisoned. The Statute was poorly enforced in most areas, and farm wages in England on average doubled between 1350 and 1450, although they were static thereafter until the end of the 19th century.
Cohn, comparing numerous countries, argues that these laws were not primarily designed to freeze wages. Instead, he says, the energetic local and royal measures to control labor and artisans' prices were a response to elite fears of the greed and possible new powers of lesser classes that had gained new freedom. Cohn says the laws reflect the anxiety that followed the Black Death's new horrors of mass mortality and destruction, and elite anxiety about manifestations such as the flagellant movement and the persecution of Jews, Catalans, and beggars.
By 1200, virtually all of the Mediterranean basin and most of northern Germany had been deforested and cultivated. Indigenous flora and fauna were replaced by domestic grasses and animals and domestic woodlands were lost. With depopulation, this process was reversed. Much of the primeval vegetation returned, and abandoned fields and pastures were reforested.
The Black Death encouraged innovation of labour-saving technologies, leading to higher productivity. There was a shift from grain farming to animal husbandry. Grain farming was very labor-intensive, but animal husbandry needed only a shepherd and a few dogs and pastureland.
The plague brought an eventual end to serfdom in Western Europe. The manorial system was already in trouble, but the Black Death assured its demise throughout much of western and central Europe by 1500. Severe depopulation and migration from villages to cities caused an acute shortage of agricultural laborers. Many villages were abandoned. In England, more than 1,300 villages were deserted between 1350 and 1500. Wages of labourers were high, but the rise in nominal wages following the Black Death was swamped by post-plague inflation, so that real wages fell.
Labour was in such short supply that lords were forced to offer better terms of tenure. This resulted in much lower rents in western Europe. By 1500, a new form of tenure called copyhold became prevalent in Europe. In copyhold, both lord and peasant made their best business deal, whereby the peasant got use of the land and the lord got a fixed annual payment, and both possessed a copy of the tenure agreement. Serfdom did not end everywhere: it lingered in parts of Western Europe, and was introduced to Eastern Europe after the Black Death.
There was also change in inheritance law. Before the plague, only sons, and especially the eldest son, inherited the ancestral property. After the plague, all sons as well as daughters began inheriting property.
Renewed religious fervor and fanaticism came in the wake of the Black Death. Some Europeans targeted "groups such as Jews, friars, foreigners, beggars, pilgrims", lepers and Romani, thinking that they were to blame for the crisis.
Differences in cultural and lifestyle practices also led to persecution. As the plague swept across Europe in the mid-14th century, annihilating more than half the population, Jews were taken as scapegoats, in part because isolation in the ghettos meant in some places that Jews were less affected. Accusations spread that Jews had caused the disease by deliberately poisoning wells. European mobs attacked Jewish settlements across Europe; by 1351, 60 major and 150 smaller Jewish communities had been destroyed, and more than 350 separate massacres had occurred.
According to Joseph P. Byrne, women also faced persecution during the Black Death. Muslim women in Cairo became scapegoats when the plague struck. Byrne writes that in 1438, the sultan of Cairo was informed by his religious lawyers that the arrival of the plague was Allah's punishment for the sin of fornication and that in accordance with this theory, a law was set in place stating that women were not allowed to make public appearances as they may tempt men into sin. Byrne describes that this law was only lifted when "the wealthy complained that their female servants could not shop for food."
There was a significant impact on religion, as many believed the plague was God's punishment for sinful ways. The Church lands and buildings were unaffected, but there were too few priests left to maintain the old schedule of services. Over half the parish priests, who gave the final sacraments to the dying, themselves died. The Church moved to recruit replacements, but the process took time. New colleges were opened at established universities, and the training process sped up. The shortage of priests opened new opportunities for lay women to assume more extensive and more important service roles in the local parish.
Flagellants practiced self-flogging (whipping of oneself) to atone for sins. The movement became popular after the Black Death. It may be that the flagellants' later involvement in hedonism was an effort to accelerate or absorb God's wrath, to shorten the time during which others suffered. More likely, the focus of attention and the popularity of their cause contributed to a sense that the world itself was ending and that individual actions were of no consequence.
The Black Death hit the monasteries very hard because of their proximity to the sick, who sought refuge there. This left a severe shortage of clergy after the epidemic cycle. Eventually the losses were replaced by hastily trained and inexperienced clergy members, many of whom knew little of the rigors of their predecessors. Reformers rarely pointed to failures on the part of the Church in dealing with the catastrophe.
The Black Death had a profound impact on art and literature. After 1350, European culture in general turned very morbid. The general mood was one of pessimism, and contemporary art turned dark with representations of death. The widespread image of the "dance of death" showed death (a skeleton) choosing victims at random. Many of the most graphic depictions come from writers such as Boccaccio and Petrarch. Peire Lunel de Montech, writing about 1348 in a lyric style long out of fashion, composed the sorrowful sirventes "Meravilhar no·s devo pas las gens" during the height of the plague in Toulouse. Contemporary chronicles are equally bleak:
They died by the hundreds, both day and night, and all were thrown in ... ditches and covered with earth. And as soon as those ditches were filled, more were dug. And I, Agnolo di Tura ... buried my five children with my own hands ... And so many died that all believed it was the end of the world.— The Plague in Siena: An Italian Chronicle
How many valiant men, how many fair ladies, breakfast with their kinfolk and the same night supped with their ancestors in the next world! The condition of the people was pitiable to behold. They sickened by the thousands daily, and died unattended and without help. Many died in the open street, others dying in their houses, made it known by the stench of their rotting bodies. Consecrated churchyards did not suffice for the burial of the vast multitude of bodies, which were heaped by the hundreds in vast trenches, like goods in a ship's hold, and covered with a little earth.— Giovanni Boccaccio
The practice of alchemy as medicine, previously considered to be normal for most doctors, slowly began to wane as the citizenry began to realise that it seldom affected the progress of the epidemic and that some of the potions and "cures" used by many alchemists only served to worsen the condition of the sick. Distilled spirit, originally made by alchemists, was commonly applied as a remedy for the Black Death, and, as a result, the consumption of spirits in Europe rose dramatically after the plague.
The doctors visited victims to verify whether they had been afflicted or not. Surviving records of contracts drawn up between cities and plague doctors often gave the plague doctor enormous latitude and heavy financial compensation, given the risk of death involved for the plague doctor himself. Most plague doctors were essentially volunteers, as qualified doctors had (usually) already fled, knowing they could do nothing for those affected.
A plague doctor's clothing consisted of:
- A wide-brimmed black hat worn close to the head. At the time, a wide-brimmed black hat would have identified a person as a doctor, much as a distinctive hat may identify chefs, soldiers, or workers today. The wide-brimmed hat may have also been used as partial shielding from infection.
- A primitive gas mask in the shape of a bird's beak. A common belief at the time was that the plague was carried from place to place by birds. There may have been a belief that by dressing in a bird-like mask, the wearer could draw the plague away from the patient and onto the garment the plague doctor wore. The mask also included red glass eyepieces, which were thought to make the wearer impervious to evil. The beak of the mask was often filled with strongly aromatic herbs and spices to overpower the miasmas or "bad air" which was also thought to carry the plague. At the very least, it may have dulled the smell of unburied corpses and sputum from plague victims.
- A long, black overcoat. The overcoat worn by the plague doctor was tucked in behind the beak mask at the neckline to minimize skin exposure. It extended to the feet, and was often coated head to toe in suet or wax. A coating of suet may have been used with the thought that the plague could be drawn away from the flesh of the infected victim and either trapped by the suet or repelled by the wax. The coating of wax likely served as protection against respiratory droplet contamination, although it was not known at the time whether coughing carried the plague; more likely, the overcoat was waxed simply to prevent sputum or other bodily fluids from clinging to it.
- A wooden cane. The cane was used to direct family members and other individuals nearby in moving the patient, and possibly to examine patients without directly touching them.
- Leather breeches. Similar to waders worn by fishermen, leather breeches were worn beneath the cloak to protect the legs and groin from infection. Since the plague often tended to manifest itself first in the lymph nodes, particular attention was paid to protecting the armpits, neck, and groin.
It is likely that while the plague doctor's clothing offered some protection to the wearer, the plague doctors themselves may have contributed more to spreading the disease than to treating it, in that the plague doctor unknowingly served as a vector for infected fleas to move from host to host.
Although the Black Death highlighted the shortcomings of medical science in the medieval era, it also led to positive changes in the field of medicine. As described by David Herlihy in The Black Death and the Transformation of the West, more emphasis was placed on “anatomical investigations” following the Black Death. How individuals studied the human body notably changed, becoming a process that dealt more directly with the human body in varied states of sickness and health. Further, at this time, the importance of surgeons became more evident.
A theory put forth by Stephen O'Brien says the Black Death is likely responsible, through natural selection, for the high frequency of the CCR5-Δ32 genetic defect in people of European descent. The gene affects T cell function and provides protection against HIV, smallpox, and possibly plague, though for plague no explanation of how it would do so exists. This theory is now challenged, however, given that the CCR5-Δ32 mutation has been found to be just as common in Bronze Age tissue samples.
The Black Death also inspired European architecture to move in two different directions: (1) a revival of Greco-Roman styles that, in stone and paint, expressed Petrarch's love of antiquity, and (2) a further elaboration of the Gothic style. Late medieval churches had impressive structures centered on verticality, where one's eye is drawn up towards the high ceiling. The basic Gothic style was revamped with elaborate decoration in the late medieval period. Sculptors in Italian city-states emulated the work of their Roman forefathers while sculptors in northern Europe, no doubt inspired by the devastation they had witnessed, gave way to a heightened expression of emotion and an emphasis on individual differences. A tough realism came forth in architecture as in literature. Images of intense sorrow, decaying corpses, and individuals with faults as well as virtues emerged. North of the Alps, painting reached a pinnacle of precise realism with Early Netherlandish painting by artists such as Jan van Eyck (c. 1390– by 1441). The natural world was reproduced in these works with meticulous detail whose realism was not unlike photography.
- Austin Alchon, Suzanne (2003). A pest in the land: new world epidemics in a global perspective. University of New Mexico Press. p. 21. ISBN 0-8263-2871-7.
- Barbara A. Hanawalt, "Centuries of Transition: England in the Later Middle Ages," in Richard Schlatter, ed., Recent Views on British History: Essays on Historical Writing since 1966 (Rutgers UP, 1984), pp 43–44, 58
- R. H. Hilton, The English Peasantry in the Late Middle Ages (Oxford: Clarendon, 1974)
- Dunham, Will (29 January 2008). "Black death 'discriminated' between victims". Australian Broadcasting Corporation. Retrieved 2008-11-03.
- "De-coding the Black Death". BBC News. 3 October 2001. Retrieved 2008-11-03.
- Philipkoski, Kristen (3 October 2001). "Black Death's Gene Code Cracked". Wired. Retrieved 2008-11-03.
- Philip Daileader, The Late Middle Ages, audio/video course produced by The Teaching Company, 2007. ISBN 978-1-59803-345-8.
- Spengler, Joseph J. (October 1962). "Review (Studies on the Population of China, 1368–1953 by Ping-Ti Ho)". Comparative Studies in Society and History 5 (1): 112–114. doi:10.1017/s0010417500001547. JSTOR 177771.
- Maguire, Michael (22 February 1999). "Re: How many people recovered from Black Death (Bubonic Plague)". MadSci Network. ID: 918741314.Mi. Retrieved 2008-11-03.
- King, Jonathan (2005-01-08). "World's long dance with death". The Sydney Morning Herald. Retrieved 2008-11-03.
- Stéphane Barry and Norbert Gualde, in L'Histoire n° 310, June 2006, pp.45–46, say "between one-third and two-thirds"; Robert Gottfried (1983). "Black Death" in Dictionary of the Middle Ages, volume 2, pp.257–67, says "between 25 and 45 percent".
- Gottfried, Robert S. (1983). The Black Death. New York: The Free Press
- Jean Froissart, Chronicles (trans. Geoffrey Brereton, Penguin, 1968, corrections 1974) p. 111
- Joseph Patrick Byrne (2004). The Black Death. ISBN 0-313-32492-1, p. 64.
- Stéphane Barry and Norbert Gualde, "The Biggest Epidemic of History" (La plus grande épidémie de l'histoire, in L'Histoire n°310, June 2006, pp.45–46
- Harald Aastorp (2004-08-01). "Svartedauden enda verre enn antatt". Forskning.no. Retrieved 2009-01-03.
- Kennedy, Maev (17 August 2011). "Black Death study lets rats off the hook". The Guardian. Retrieved 18 August 2011.
- Barry and Gualde 2006.
- Judith M. Bennett and C. Warren Hollister (2006). Medieval Europe: A Short History. New York: McGraw-Hill. p. 329. ISBN 0-07-295515-5. OCLC 56615921.
- Bennett and Hollister, 329–330.
- Jay O'Brien; William Roseberry (1991). Golden Ages, Dark Ages: Imagining the Past in Anthropology and History. U. of California Press. p. 25.
- Barbara A. Hanawalt, "centuries of Transition: England in the Later Middle Ages," in Richard Schlatter, ed., Recent Views on British History: Essays on Historical Writing since 1966 (Rutgers UP, 1984), pp 43–44, 58
- Munro, John H. A. (5 March 2005). "Before and After the Black Death: Money, Prices, and Wages in Fourteenth-Century England". https://ideas.repec.org/. Retrieved 5 August 2014.
- Penn, Simon A. C.; Dyer, Christopher. "Wages and Earnings in Late Medieval England: Evidence from the Enforcement of the Labour Laws". The Economic History Review 43 (3): 356–357. doi:10.1111/j.1468-0289.1990.tb00535.x.
- Gregory Clark, "The long march of history: Farm wages, population, and economic growth, England 1209–1869," Economic History Review 60.1 (2007): 97–135. online, page 36
- Samuel Cohn, "After the Black Death: Labour Legislation and Attitudes Towards Labour in Late-Medieval Western Europe," Economic History Review (2007) 60#3 pp. 457–485 in JSTOR
- Gottfried, Robert S. (1983). "7". The black death: natural and human disaster in Medieval Europe (1. Free Press paperback ed.). New York: Free Press. ISBN 0-02-912630-4.
- "Plagued by dear labour". London: The Economist. 21 October 2013. Retrieved 5 August 2014.
- David Nirenberg, Communities of Violence, 1998, ISBN 0-691-05889-X.
- R.I. Moore The Formation of a Persecuting Society, Oxford, 1987 ISBN 0-631-17145-2
- Naomi E. Pasachoff, Robert J. Littman A Concise History Of The Jewish People 2005 – Page 154 "However, Jews regularly ritually washed and bathed, and their abodes were slightly cleaner than their Christian neighbors'. Consequently, when the rat and the flea brought the Black Death, Jews, with better hygiene, suffered less severely ..."
- Joseph P Byrne, Encyclopedia of the Black Death Volume 1 2012 – Page 15 "Anti–Semitism and Anti–Jewish Violence before the Black Death .. Their attention to personal hygiene and diet, their forms of worship, and cycles of holidays were off-puttingly different."
- Anna Foa The Jews of Europe After the Black Death 2000 Page 146 "There were several reasons for this, including, it has been suggested, the observance of laws of hygiene tied to ritual practices and a lower incidence of alcoholism and venereal disease"
- Richard S. Levy Antisemitism 2005 Page 763 "Panic emerged again during the scourge of the Black Death in 1348, when widespread terror prompted a revival of the well poisoning charge. In areas where Jews appeared to die of the plague in fewer numbers than Christians, possibly because of better hygiene and greater isolation, lower mortality rates provided evidence of Jewish guilt."
- Joseph P. Byrne, The Black Death (Westport, Conn.: Greenwood Press, 2004), 108.
- Kirk R. MacGregor, A Comparative Study of Adjustments to Social Catastrophes in Christianity and Buddhism. The Black Death in Europe and the Kamakura Takeover in Japan As Causes of Religious Reform (2011)
- Steven A. Epstein, An Economic and Social History of Later Medieval Europe, 1000–1500 (2009) p 182
- Katherine L. French, The Good Women of the Parish: Gender and Religion After the Black Death (U of Pennsylvania Press, 2011)
- Epstein, p 182
- J. M. Bennett and C. W. Hollister, Medieval Europe: A Short History (New York: McGraw-Hill, 2006), p. 372.
- "Plague readings". University of Arizona. Retrieved 3 November 2008.
- Quotes from the Plague
- David Herlihy, The Black Death and the Transformation of the West (Cambridge, Mass.: Harvard University Press, 1997), 72.
- Jefferys, Richard; Anne-christine d'Adesky (March 1999). "Designer Genes". HIV Plus (3). ISSN 1522-3086. Archived from the original on 14 February 2008. Retrieved 2006-12-12.
- Philip W. Hedrick; Brian C. Verrelli (June 2006). "'Ground truth' for selection on CCR5-Δ32". Trends in Genetics 22 (6): 293–6. doi:10.1016/j.tig.2006.04.007. PMID 16678299.
- Bennett and Hollister, p. 374.
- Bennett and Hollister, p. 375.
- Bennett and Hollister, p. 376. | https://en.wikipedia.org/wiki/Consequences_of_the_Black_Death |
4.09375 | What's at the edge of the Universe?
This image, taken by the Hubble Space Telescope, shows some of the farthest objects in the Universe. It looks back about 13 billion years: the galaxies in it appear as they did when they were very young, because it has taken 13 billion years for their light to reach us.
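The phrase "looks back" is just light-travel arithmetic: distance equals the speed of light multiplied by travel time. The short sketch below works that product out; it deliberately ignores the expansion of space, which makes the true present-day distance to these galaxies larger, so treat the answer as a rough illustration.

```python
# How far does light travel in 13 billion years? (rough illustration)
SPEED_OF_LIGHT_KM_S = 299_792.458      # kilometres per second
SECONDS_PER_YEAR = 365.25 * 24 * 3600  # about 3.156e7 seconds
YEARS = 13e9                           # lookback time from the text

distance_km = SPEED_OF_LIGHT_KM_S * SECONDS_PER_YEAR * YEARS
print(f"{distance_km:.2e} km")  # ~1.23e+23 km, i.e. 13 billion light-years
```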
The study of the Universe is called cosmology. Cosmologists study the structure and changes of the present Universe. The Universe contains all of the star systems, galaxies, gas, and dust, plus all the matter and energy that exists now, all that existed in the past, and all that will exist in the future. The Universe includes all of space and time.
Human Understanding of the Universe
What did the ancient Greeks recognize as the Universe? Their Universe had Earth at the center, the Sun, the Moon, five planets, and a sphere to which all the stars were attached. This idea held for many centuries. Galileo and his telescope helped people recognize that Earth is not the center of the Universe. They also found out that there are many more stars than were visible to the naked eye. All of those stars were in the Milky Way Galaxy.
In the early 20th century, an astronomer named Edwin Hubble (Figure below) discovered something amazing. He showed that the Andromeda Nebula was over 2 million light years away. This was many times farther than the farthest distances that had ever been measured. Hubble realized that many of the objects that astronomers called nebulas were not actually clouds of gas. They were collections of millions or billions of stars. We now call these features galaxies.
(a) Edwin Hubble used the 100-inch reflecting telescope at the Mount Wilson Observatory in California to show that some distant specks of light were galaxies. (b) Hubble's namesake space telescope spotted this six-galaxy group.
Hubble showed that the Universe was much larger than our own galaxy. Today, we know that the Universe contains about a hundred billion galaxies. This is about the same number of galaxies as there are stars in the Milky Way Galaxy.
- The Universe contains about a hundred billion galaxies.
- The idea of a Universe has changed through human history.
- Edwin Hubble determined that there was much more to space than our own galaxy.
Use the resources below to answer the questions that follow.
- What is cosmology?
- What is the Universe?
- Why can't we observe everything in the Universe?
- What are we actually looking at when we observe the cosmos? Why?
- What is happening to our Universe?
- What is the Universe filled with? What does that give us clues about?
- What do cosmologists study?
- What was the importance of Edwin Hubble seeing that the Andromeda nebula was actually a galaxy?
- What makes up the Universe? | http://www.ck12.org/book/CK-12-Earth-Science-Concepts-For-Middle-School/section/14.12/ |
4.0625 | Wars of the Three Kingdoms
The Wars of the Three Kingdoms formed an intertwined series of conflicts that took place in England, Ireland and Scotland between 1639 and 1651. The English Civil War has become the best-known of these conflicts and included the execution of the kingdoms' monarch, Charles I, by the English parliament in 1649.
The term "Wars of the Three Kingdoms" is often extended to include the uprisings and conflicts that continued through the 1650s until the English Restoration of the monarchy with Charles II, in 1660, and sometimes until Venner's uprising the following year. The wars were the outcome of tensions over religious and civil issues. Religious disputes centered on whether religion was to be dictated by the monarch or the choice of the individual, with many people feeling that they ought to have freedom of religion. The related civil questions were to what extent the king's rule was constrained by parliaments—in particular his right to raise taxes and armed forces without consent. Furthermore, the wars also had an element of national conflict, as Ireland and Scotland rebelled against England's primacy within the Three Kingdoms. The victory of the English Parliament—ultimately under Oliver Cromwell—over the king, the Irish and the Scots helped to determine the future of Great Britain and Ireland as a constitutional monarchy with political power centered on London. The Wars of the Three Kingdoms also paralleled a number of similar conflicts at the same time in Europe, such as the Fronde in France and the rebellions of the Netherlands and Portugal against Spanish rule.
The wars included the Bishops' Wars of 1639 and 1640, the Scottish Civil War of 1644–45; the Irish Rebellion of 1641, Confederate Ireland, 1642–49 and the Cromwellian conquest of Ireland in 1649 (collectively the Eleven Years War or Irish Confederate Wars); and the First, Second and Third English Civil Wars of 1642–46, 1648–49 and 1650–51.
Although the term is not new, having been used by James Heath in his book A Brief Chronicle of all the Chief Actions so fatally Falling out in the three Kingdoms, first published in 1662, recent publications' tendency to name these linked conflicts the Wars of the Three Kingdoms represents a trend by modern historians aiming to take a unified overview rather than treating some of the conflicts as mere background to the English Civil War. Some, such as Carlton and Gaunt have labelled them the British Civil Wars.
Since 1541, monarchs of England had also styled their Irish territory as a Kingdom (ruled with the assistance of a separate Irish Parliament), while Wales became more closely integrated into the Kingdom of England under Henry VIII. Scotland, the third separate kingdom, was governed by the House of Stuart.
With the English Reformation, King Henry VIII made himself head of the Protestant Church of England and outlawed Catholicism in England and Wales. In the course of the 16th century Protestantism became intimately associated with national identity in England: English folk in general saw Catholicism as the national enemy, especially as embodied in France and Spain. However, Catholicism remained the religion of most people in Ireland and was for many a symbol of native resistance to the Tudor conquest of Ireland in the 16th century.
In the Kingdom of Scotland the Protestant Reformation was a popular movement led by John Knox. The Scottish Parliament legislated for a national Presbyterian church, the Church of Scotland or "Kirk", and the Catholic Mary, Queen of Scots, was forced to abdicate in favour of her son James VI of Scotland. James grew up under a regency disputed between Catholic and Protestant factions, then took power and aspired to be a "universal King" favouring the English Episcopalian system of bishops appointed by the king. In 1584, he introduced bishops, but met vigorous opposition and had to concede that the General Assembly running the church should continue to do so.
The personal union of the three kingdoms under one monarch came about when King James VI of Scotland succeeded Elizabeth to the English throne in 1603. When Charles I succeeded his father, he had three main concerns regarding England and Wales: how to fund his government, how to limit parliament's interference in his rule and how to reform the church. He showed little interest in his other two kingdoms, Scotland and Ireland.
Religious confrontation in Scotland
James VI remained Protestant, taking care to maintain his hopes of succession to the English throne. He duly also became James I of England in 1603 and moved to London. His diplomatic and political skills now concentrated fully in dealing with the English Court and Parliament at the same time as running Scotland by writing to the Privy Council of Scotland and controlling the Parliament of Scotland through the Lords of the Articles. He stopped the Scottish General Assembly from meeting, then increased the number of Scottish bishops, and in 1618, held a General Assembly and pushed through Five Articles of Episcopalian practices which were widely boycotted. In 1625, he was succeeded by his son Charles I who was less skillful or restrained and was crowned in St Giles Cathedral, Edinburgh, in 1633 with full Anglican rites. Opposition to his attempts to enforce Anglican practices reached a flashpoint when he introduced a Book of Common Prayer. Charles' confrontation with the Scots came to a head in 1639, when Charles tried and failed to coerce Scotland by military means.
Charles shared his father's belief in the Divine Right of Kings, and his assertion of this led to a serious breach between the Crown and the English Parliament. While the Church of England remained dominant, a powerful Puritan minority, represented by around one third of the members of Parliament, had much in common with the Presbyterian Scots.
The English Parliament also had repeated disputes with the king over such subjects as taxation, military expenditure and the role of parliament in government. While James I had held the same opinions as his son with regard to royal prerogatives, he had enough charisma to persuade the Parliament to accept his policies. Charles did not have this skill in human management and so, when faced with a crisis in 1639–42, he failed to prevent his Kingdoms from sliding into civil war. When Charles approached the Parliament to pay for a campaign against the Scots, they refused, declared themselves to be permanently in session and put forward a long list of civil and religious grievances that Charles would have to remedy before they approved any new legislation.
Meanwhile, in the Kingdom of Ireland (proclaimed such in 1541 but only fully conquered for the Crown in 1603), tensions had also begun to mount. Charles I's Lord Deputy there, Thomas Wentworth, had antagonised the native Irish Catholics by repeated initiatives to confiscate their lands and grant them to English colonists. He had also angered Roman Catholics by enforcing new taxes but denying them full rights as subjects. This situation became explosive in 1639 when Wentworth offered the Irish Catholics the reforms they had desired in return for them raising and paying for an Irish army to put down the Scottish rebellion. Although plans called for an army with Protestant officers, the idea of an Irish Catholic army enforcing what many saw as tyrannical government horrified both the Scottish and the English Parliaments, who in response threatened to invade Ireland.
Modern historians have emphasised that the civil wars were not inevitable, pointing out that all sides resorted to violence in a situation marked by mutual distrust and paranoia. Charles' initial failure to bring the Bishops' Wars to a quick end also made other discontented groups feel that force could serve to get what they wanted.
Alienated by English Protestant domination and frightened by the rhetoric of the English and Scottish Parliaments, a small group of Irish conspirators launched the Irish Rebellion of 1641, ostensibly in support of the "King's Rights". The rising featured widespread assaults on the Protestant communities in Ireland, sometimes culminating in massacres. Rumours spread in England and Scotland that the killings had the king's sanction and that this foreshadowed their own fate if the king's Irish troops landed in Britain. As a result, the English Parliament refused to pay for a royal army to put down the rebellion in Ireland and instead raised their own armed forces. The king did likewise, rallying those Royalists (some of them members of Parliament) who believed that loyalty to the legitimate king outweighed other important political principles.
The English Civil War broke out in 1642. The Scottish Covenanters (as the Presbyterians called themselves) sided with the English Parliament, joined the war in 1643 and played a major role in the English Parliamentary victory. The king's forces found themselves ground down by the efficiency of Parliament's New Model Army—backed by the financial muscle of the City of London. Charles I surrendered to the Scottish army encamped at Southwell and besieging Newark-on-Trent on 5 May 1646. What remained of the English and Welsh Royalist armies and garrisons surrendered piecemeal over the next few months.
In Ireland, the rebel Irish Catholics formed their own government—Confederate Ireland—with the intention of helping the Royalists in return for religious toleration and political autonomy. Troops from England and Scotland fought in Ireland and Irish Confederate troops mounted an expedition to Scotland in 1644, sparking the Scottish Civil War. In Scotland, the Royalists had a series of victories in 1644–45, but were crushed with the end of the first English Civil War and the return of the main Covenanter armies to Scotland.
Charles I was handed over to the English by the Scots when they returned to Scotland as part of the conditions for the English Parliament paying the Scots a large sum of money to help pay for the cost of their English campaign. From his surrender until the outbreak of the Second Civil War the Scots, the Presbyterians in the English Parliament and the Grandees of the New Model Army all negotiated with Charles and with each other to try to reach an accommodation. The breach between the New Model Army and Parliament widened day by day until finally the Presbyterian party, combined with the Scots and the remaining Royalists, felt itself strong enough to begin a Second English Civil War.
The New Model Army vanquished the English Royalists as well as their Scottish Engager allies. Subsequently, the Grandees and their civilian supporters were unable to reconcile themselves with the king or the Presbyterian majority in Parliament, and used soldiers under the command of Colonel Pride to purge the English Parliament of those who opposed their policies. The Rump of the Long Parliament then passed enabling legislation for the trial of Charles I, who was found guilty of treason against the English commons and was executed on 30 January 1649.
After the execution of King Charles I, the Rump Parliament passed a series of acts making England a republic, with the House of Commons (sitting without the House of Lords) as the legislature and a Council of State as the executive power. In the other two kingdoms, the execution of Charles I caused the warring parties to unite and recognise Charles II as king of Great Britain, France and Ireland.
To deal with the threat that the two kingdoms posed to the English Commonwealth, the Rump Parliament sent a parliamentary army under Cromwell to invade and subdue Ireland. Cromwell and his army proceeded to do this. At the end of May 1650 Cromwell left Ireland (leaving the English army in Ireland to continue the conquest) and returned to England to take command of an English army which shortly afterwards invaded Scotland and defeated a Covenanter army at the Battle of Dunbar on 3 September 1650. His army then proceeded to occupy Edinburgh and the rest of Scotland south of the Forth. Whilst Cromwell advanced with the bulk of his army over the Forth towards Stirling, a Scottish Royalist army under the command of Charles II stole a march on Cromwell and invaded England. Cromwell divided his army, leaving some in Scotland to continue the conquest and led the rest south in pursuit.
The Royalist army failed to gather much support from English Royalists; so, instead of heading straight for London and certain defeat, Charles went to Worcester in the hope that the West of England and Wales would rise up against the Commonwealth. This did not happen, and a year to the day since the Battle of Dunbar, the New Model Army, with support from English militia regiments, won the Battle of Worcester, vanquishing a predominantly Scottish Royalist army. This was the last and most decisive victory in the Wars of the Three Kingdoms.
Following the defeat of all the opponents of the English Parliamentary New Model Army, the Grandees of the army and their civilian supporters dominated the politics of all three nations for the next nine years (see Interregnum (1649–1660)). The Rump Parliament had decreed that England was a Commonwealth, and although Ireland and Scotland were ruled by military governors, representatives of constituencies in Ireland and Scotland sat in the English parliaments of the Protectorate. With the death of the Lord Protector Oliver Cromwell in 1658, the Commonwealth fell into a period of instability. It ended in 1660 when the English army occupying Scotland marched south under the command of General George Monck, seized control of London, and, with the agreement of the English civilian establishment, invited Charles II to return to the Three Kingdoms as king (an event known as the Restoration).
While the Wars of the Three Kingdoms pre-figured many of the changes that would shape modern Britain, in the short term they resolved little. The English Commonwealth did achieve a compromise (though a relatively unstable one) between a monarchy and a republic. In practice, Oliver Cromwell exercised political power because of his control over the Parliament's military forces, but his legal position remained unclear, even when he became Lord Protector. None of the several proposed constitutions ever came into effect. Thus the Commonwealth and the Protectorate established by the victorious Parliamentarians left little behind it in the way of new forms of government. Two important legacies remain from this period:
- after the execution of King Charles I for high treason, no future British monarch could expect that his subjects would tolerate perceived despotism;
- the excesses of the New Model Army, particularly those of the Rule of the Major-Generals, left an abiding mistrust of military rule in England.
English Protestants experienced religious freedom during the Interregnum, but not English Roman Catholics. The new authorities abolished the Church of England and the House of Lords. Cromwell dismissed the Rump Parliament and failed to create an acceptable alternative. Nor did Cromwell and his supporters move in the direction of a popular democracy, as the more radical fringes of the Parliamentarians (such as the Levellers) wanted.
The New Model Army occupied Ireland and Scotland during the Interregnum. In Ireland, the new government confiscated almost all lands belonging to Irish Catholics as punishment for the rebellion of 1641; harsh Penal Laws also restricted this community. Thousands of Parliamentarian soldiers settled in Ireland on confiscated lands. The Commonwealth abolished the Parliaments of Ireland and Scotland. In theory, these countries had representation in the English Parliament, but since this body never received real powers, such representation remained ineffective. When Cromwell died in 1658 the Commonwealth fell apart without major violence, and Charles II returned as King of England, Scotland and Ireland in 1660.
Under the English Restoration, the political system returned to the constitutional position of before the wars. The new régime executed or imprisoned for life those responsible for the regicide of Charles I. Royalists dug up Cromwell's corpse and gave it a posthumous execution. The religious and political radicals who were held responsible for the wars suffered harsh repression. Scotland and Ireland regained their Parliaments, some Irish retrieved confiscated lands, and the New Model Army disbanded. However, the issues that had caused the wars—religion, the power of Parliament and the relationship between the three kingdoms—remained unresolved, only postponed to re-emerge as matters fought over again in the Glorious Revolution of 1688. Only after this point did the features of modern Britain seen in the civil wars emerge permanently: a Protestant constitutional monarchy with England dominant, and a strong standing army.
- "ENGLISH CIVIL WARS". History.com. Retrieved 4 October 2014.
- Second and third English Civil Wars, "While it is notoriously difficult to determine the number of casualties in any war, it has been estimated that the conflict in England and Wales claimed about 85,000 lives in combat, with a further 127,000 noncombat deaths (including some 40,000 civilians)."
- Ian Gentles, citing John Morrill, states, "there is no stable, agreed title for the events.... They have been variously labeled the Great Rebellion, the Puritan Revolution, the English Civil War, the English Revolution and most recently, the Wars of the Three Kingdoms." (Gentles 2007, p. 3)
- Raymond 2005, p. 281.
- Carlton 1994.
- Gaunt 1997.
- Trevor Royle published his 2004 book under different titles. In the UK it was called Civil War: The Wars of the Three Kingdoms while in the US it was called The British Civil War: The Wars of the Three Kingdoms, 1638–1660 (Royle 2004).
- "Charles I and the eleven years’ personal rule in England and Wales", Open University
- Atkinson 1911, pp. 403–417.
- One or more of the preceding sentences incorporates text from a publication now in the public domain: Atkinson 1911, p. 417
- Atkinson 1911, pp. 417–418.
- Atkinson 1911, pp. 418–420.
- Atkinson 1911, pp. 420–421.
- Jane 1905, pp. 376–377.
- "Around the rule of the Major-Generals there has grown a legend of military oppression which obscures the limits both of their impact and of their unpopularity" (Worden 1986, p. 134)
- Atkinson, Charles Francis (1911), "Great Rebellion", in Chisholm, Hugh, Encyclopædia Britannica 12 (11th ed.), Cambridge University Press, pp. 403–421
- Carlton, Charles (1994), Going to the wars: the experience of the British civil wars, 1638–1651, Routledge, ISBN 0-415-10391-6.
- Gentles, Ian (2007), "The English Revolution and the Wars in the Three Kingdoms, 1638–1652", in Scott, H. M.; Collins, B. W., Modern Wars in Perspective, Harlow, UK: Pearson Longman
- Gaunt, Peter (1997), The British Wars 1637–1651, UK: Routledge, ISBN 0-415-12966-4. An 88 page pamphlet.
- Jane, Lionel Cecil (1905), The coming of Parliament; England from 1350 to 1660, New York: G.P. Putnam's Sons, pp. 376–377
- Raymond, Joad (2005), The invention of the newspaper: English newsbooks, 1641–1649, Oxford University Press, p. 281, ISBN 9780199282340
- Royle, Trevor (2004), Civil War: The Wars of the Three Kingdoms, UK: Little Brown, ISBN 0-316-86125-1 alternatively The British Civil War: The Wars of the Three Kingdoms, 1638–1660, USA: Palgrave Macmillan, 2004, ISBN 0-312-29293-7
- Worden, Blair (1986), Stuart England (illustrated ed.), Phaidon
- Bennett, Martyn (1997). The Civil Wars in Britain and Ireland, 1638–1651. Oxford: Blackwell. ISBN 0-631-19154-2.
- Bennett, Martyn (2000). The Civil Wars Experienced: Britain and Ireland, 1638–1661. Oxford: Routledge. ISBN 0-415-15901-6.
- Kenyon, John, and Jane Ohlmeyer (eds.) (1998). The Civil Wars: A Military History of England, Scotland, and Ireland, 1638–1660. Oxford: Oxford University Press. ISBN 0-19-866222-X.
- Russell, Conrad (1991). The Fall of the British Monarchies, 1637–1642. Oxford: Clarendon Press. ISBN 0-19-822754-X.
- Stevenson, David (1981). Scottish Covenanters and Irish Confederates: Scottish-Irish Relations in the Mid-Seventeenth Century. Belfast: Ulster Historical Foundation. ISBN 0-901905-24-0.
- Young, John R. (ed.) (1997). Celtic Dimensions of the British Civil Wars. Edinburgh: John Donald. ISBN 0-85976-452-4.
- Aylmer, G. E. (1986). Rebellion or Revolution?: England, 1640–1660. Oxford: Oxford University Press. ISBN 0-19-219179-9.
- Hill, Christopher (1972). The World Turned Upside Down: Radical Ideas During the English Revolution. London: Temple Smith. ISBN 0-85117-025-0.
- Morrill, John (ed.) (1991). The Impact of the English Civil War. London: Collins & Brown. ISBN 1-85585-042-7.
- Woolrych, Austin (2000). Battles of the English Civil War. London: Phoenix Press. ISBN 1-84212-175-8.
- Worden, Blair (2009). The English Civil Wars: 1640–1660. London: W&N. ISBN 978-0297848882.
- Lenihan, Pádraig (2000). Confederate Catholics at War, 1641–1649. Cork: Cork University Press. ISBN 1-85918-244-5.
- Ó hAnnracháin, Tadhg (2002). Catholic Reformation in Ireland: The Mission of Rinuccini, 1645–1649. Oxford: Oxford University Press. ISBN 0-19-820891-X.
- Ó Siochrú, Micheál (1999). Confederate Ireland, 1642–1649: A Constitutional and Political Analysis. Dublin: Four Courts Press. ISBN 1-85182-400-6.
- Ó Siochrú, Micheál (ed.) (2001). Kingdoms in Crisis: Ireland in the 1640s. Dublin: Four Courts Press. ISBN 1-85182-535-5.
- Perceval-Maxwell, M. (1994). The Outbreak of the Irish Rebellion of 1641. Dublin: Gill & Macmillan. ISBN 0-7171-2173-9.
- Wheeler, James Scott (1999). Cromwell in Ireland. Dublin: Gill & Macmillan. ISBN 0-7171-2884-9.
- Stevenson, David (1973). The Scottish Revolution, 1637–1644: The Triumph of the Covenanters. Newton Abbot: David & Charles. ISBN 0-7153-6302-6.
- Stevenson, David (1980). Alasdair MacColla and the Highland Problem in the Seventeenth Century. Edinburgh: John Donald. ISBN 0-85976-055-3.
- www.british-civil-wars.co.uk Extensive site on the Wars of the Three Kingdoms
- Chronology of The Wars of the Three Kingdoms
- The Wars of the Three Kingdoms Article by Jane Ohlmeyer arguing that the English Civil War was just one of an interlocking set of conflicts that encompassed the British Isles in the mid-17th century
- The English Context of the British Civil Wars (Link inaccessible as of 2008-03-02.) John Adamson argues that historians have exaggerated the importance of the Celtic countries in the events of the 1640s
- Englishcivilwar.org News, comment and discussion about the English Civil War
- The first Scottish Civil War
- The Rebellion of 1641 and the Cromwellian Occupation of Ireland
- Ireland and the War of the Three Kingdoms
- Civil War | https://en.wikipedia.org/wiki/Wars_of_the_Three_Kingdoms |
4.125 | domain name system - Computer Definition
A hierarchical system of naming hosts and placing TCP/IP hosts into categories. The DNS translates between the numerical Internet addresses of machines and the human-readable names by which people know them. For example, the host name rs.internic.net is also known as 22.214.171.124.
Any machine on the Internet has its own address, called the Internet Protocol Address (IP Address). The IP address looks something like this: 126.96.36.199—four numerical segments with a value range between 0 and 255 (one byte) separated by dots. Any computer is reachable through its IP address.
Because users cannot remember these numerical strings of IP addresses, an alternative system was needed. For this reason, IP addresses were translated into more logical text strings for humans to remember, such as cs.yale.edu—which means computer science department at Yale University, a U.S. educational institution.
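As a minimal illustration of this mapping, Python's standard library can ask the system resolver to translate a name into an address (the host name below comes from the example above; any resolvable name works):

```python
import socket

# Forward lookup: translate a human-readable host name into an IP address.
hostname = "cs.yale.edu"  # example host; any resolvable name works
print(hostname, "->", socket.gethostbyname(hostname))

# getaddrinfo is the more general interface: it can return several
# addresses (IPv4 and IPv6) for the same name.
for family, _, _, _, sockaddr in socket.getaddrinfo(hostname, None):
    print(family, sockaddr)
```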
During ARPANET’s development, a single file called hosts.txt existed, and it was here that all IP addresses were listed. At the end of each day, all computers connected to the Internet would get the list from a central server where it was kept. With time, the number of connected hosts increased to such a degree that the size of the host file became huge and the system became inefficient. Thus, the DNS (Domain Name System) was invented—a hierarchical domain-based structure in which the Internet is divided into pieces called “domains.” The pieces are categorized as top-level domains and sub-domains. The top-level domains include generic and country domains.
The generic domains are com (a commercial enterprise), edu (an educational institution), gov (a government agency), int (an international institution), mil (the military institutions), net (a network institution), and org (a nonprofit organization).
The country domains, allocated one per country, look like this: au for Australia, ca for Canada, uk for the United Kingdom, and us for the United States. The details are defined in ISO 3166.
Each top-level domain is divided into several sub-domains, with each domain having control over its own sub-domains. For example, the edu domain covers all of the educational institutions or sub-domains—such as Yale University, Princeton University, Rutgers University, and Harvard University. Moreover, the country domains have sub-domains. For example, the uk (the United Kingdom) and the jp (Japan) domains have two common sub-domains: ac (which stands for academic) and com (which stands for commercial). Each domain has a particular server with a table containing all IP addresses and domain names belonging to its domain.
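The hierarchy can be read directly off a fully qualified name, right to left: each dot-separated label names a sub-domain of the zone to its right. A toy sketch (the function is illustrative, not part of any standard API):

```python
def domain_hierarchy(fqdn):
    """List the chain of zones from the top-level domain downward."""
    labels = fqdn.strip(".").split(".")
    # Reading right to left: 'edu' -> 'yale.edu' -> 'cs.yale.edu'
    return [".".join(labels[i:]) for i in range(len(labels) - 1, -1, -1)]

print(domain_hierarchy("cs.yale.edu"))
# ['edu', 'yale.edu', 'cs.yale.edu']
```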
An organization called the Internic maintains a database having all registered domains for the world. Anyone can query its database by means of whois. Although several organizations maintain whois databases, the Internic has the main database. Any company, institution, or organization wanting to have its own domain name has to register it with the Internic or one of the other registries.
Many whois servers exist around the globe. For example, in Amsterdam, there is the European whois server at RIPE (Réseaux IP Européens).
During the week of March 7, 2005, cyber scam artists manipulated the Internet’s directory service and capitalized on a hole in Symantec Corporation’s Gateway Security Appliance and Enterprise Firewall products to trick Internet users into installing adware and other programs on their computers. These DNS “poisoning attacks” caused Web browsers pointed at Google.com, eBay.com, and Weather.com, for example, to go to malicious Web pages that installed undesirable programs.
In such “poisoning attacks,” malicious crackers use a DNS server they control to transmit erroneous addresses to other DNS servers. Thus, users relying on a poisoned DNS server to manage their requests may discover that entering the URL of a popular Website sends them to some other unexpected and likely malicious Web page. Besides being a nuisance, DNS poisoning could be a tool for conducting online identity theft. Cybercriminals could, in fact, construct phishing Websites identical to popular sites such as Google and eBay to secretly capture online users’ personal data.
Internet Highway, LLC. Internet Terminology: Domain Name System. [Online, 1999.] Internet Highway, LLC Website. http://www.ihwy.com/support/netterms.html; Roberts, P. Scammers Use Symantec, DNS Holes to Push Adware. [Online, March 7, 2005.] Computerworld Inc. Website. http://www.computerworld.com/securitytopics/security/story/ 0,10801,100248,00.html. | http://computer.yourdictionary.com/domain-name-system |
4 | Seismologists are particularly interested in the stretch of the San Andreas Fault shown in the new composite image at right. Just north of this region in 1857, one of the largest earthquakes ever recorded in U.S. history erupted, measuring an estimated 8.0 in magnitude on the Richter scale. The quake shook buildings in Los Angeles, rattled Las Vegas and ruptured the ground's surface along 350 kilometers of the fault, which itself runs for 1,200 kilometers. To better visualize the region, scientists married an enhanced, true-color Landsat satellite image with data from NASA's Shuttle Radar Topography Mission (SRTM), doubling the elevation of prominent features for better perspective. The result looks southeast towards the snowcapped peak of Mt. Pinos. The fault is the linear feature running along the base of mountains in the Temblor Range near Bakersfield. Parts of the agriculturally rich San Joaquin Valley appear to its left, and the Carrizo Plain is on its right. The three-dimensional measurements of the earth's surface came from the space shuttle Endeavour's flight last February. During its 10 days in orbit, SRTM gathered similar data for 80 percent of the earth's landmass, generating the most extensive high-resolution collection of information about our planet's topography yet. | http://www.scientificamerican.com/article/new-look-at-the-san-andre/
4.3125 | What kept Darwin up at night? The Cambrian explosion.
The period on our planet between 540 and 520 million years ago when most modern animal groups appeared is also known as evolution’s Big Bang. Prior to the Cambrian explosion, life was much simpler on Earth—single-celled organisms dominated the landscape.
But how did so many different organisms develop in such a short period of time? “The abrupt appearance of dozens of animal groups during this time is arguably the most important evolutionary event after the origin of life,” says Michael Lee of the University of Adelaide. “Darwin himself famously considered that this was at odds with the normal evolutionary processes.”
Lee and his colleagues decided to look into “Darwin’s dilemma,” focusing on arthropods (insects, crustaceans, arachnids and their relatives), the most diverse animal group in both the Cambrian period and present day.
“It was during this Cambrian period that many of the most familiar traits associated with this group of animals evolved, like a hard exoskeleton, jointed legs, and compound (multi-faceted) eyes that are shared by all arthropods,” explains team member Greg Edgecombe of the Natural History Museum of London. “We even find the first appearance in the fossil record of the antenna that insects, millipedes and lobsters all have, and the earliest biting jaws.”
The team quantified the anatomical and genetic differences between living animals, and established a timeframe over which those differences accumulated with the help of the fossil record and intricate mathematical models.
“In this study we’ve estimated that rates of both morphological and genetic evolution during the Cambrian explosion were five times faster than today—quite rapid, but perfectly consistent with Darwin’s theory of evolution,” Lee says.
ScienceNOW offers the numbers:
The creatures’ genetic codes were changing by about 0.117% every million years—approximately 5.5 times faster than modern estimates.
Unusual, perhaps, but in line with natural selection, the team indicates. The study appears in the recent edition of Current Biology.
Perhaps Darwin can get some rest now.
Image: Michael Lee | http://www.calacademy.org/explore-science/explosion-explained/ |
4 | Nagle’s algorithm is a system used to improve the efficiency of networks, most notably the Internet. It works by preventing data from being sent in needlessly small batches, which would otherwise increase the number of packets traveling the network. While it has its uses, Nagle’s algorithm can interact poorly with other elements of network communications.
Created by a man named John Nagle, Nagle’s algorithm works with networks which use the TCP/IP protocols. These are protocols, or “rules” for how a network transmits data. While the protocols can apply to any network, they are most commonly associated with the Internet.
The algorithm deals with the way data is transmitted in small chunks, or “packets.” Each packet contains some data plus header information, which is the equivalent of the sender and recipient address on a physical envelope. The packet also contains a checksum, a mathematical equivalent to including a packing list so the recipient knows all the contents of the packet have arrived safely.
While this system normally works well, it can be inefficient if the chunks of data are particularly small. In extreme cases, the data in a packet may only be one byte, but the header information will take up 40 bytes regardless of the size of the data. This is roughly equivalent to writing a letter to somebody, but then cutting it up and sending each word in a separate envelope. In fact because messages are sent in binary, it’s even more inefficient than this. As well as the waste of bandwidth, this also increases the number of packets which must be sent, which increases the chance of an error occurring in the transmission process.
The principle of Nagle’s algorithm is that after sending a packet, the transmitting computer will wait for one of two things to happen before sending the next packet. If it receives confirmation that the last packet has been received, it will send the data it has immediately, regardless of its size. Otherwise, it will wait until it has a “full” packet to send. Once this happens it will send the full packet whether or not the previous packet has been received.
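In rough pseudocode, the decision rule just described looks something like the following (a simplified sketch, not the exact logic of any particular TCP stack; `mss` stands for the maximum segment size):

```python
def nagle_decision(buffered, mss, unacked_data_in_flight):
    """Simplified Nagle decision: return bytes to send now, or None to wait."""
    if len(buffered) >= mss:
        # A full segment is ready: send it regardless of outstanding ACKs.
        return buffered[:mss]
    if not unacked_data_in_flight:
        # Everything sent so far was acknowledged: small data goes out now.
        return buffered
    # Otherwise keep buffering until an ACK arrives or a full
    # segment's worth of data accumulates.
    return None
```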
In some situations, Nagle’s algorithm can do more harm than good. One example is that of online video games which are designed with the assumption that data will be sent immediately. If Nagle’s algorithm is being used, some data will be delayed until a full packet is ready. This can have a noticeable effect on how responsive the game feels to a player and will effectively slow their reaction times compared with other players.
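For latency-sensitive programs like these, most socket APIs therefore allow Nagle's algorithm to be switched off per connection via the TCP_NODELAY option. A minimal Python sketch (the destination host and port are placeholders):

```python
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# Disable Nagle's algorithm: small writes are transmitted immediately
# instead of being coalesced into larger segments.
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
sock.connect(("example.com", 80))  # placeholder destination
sock.sendall(b"small, latency-critical message")
sock.close()
```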
| http://www.wisegeek.com/what-is-nagles-algorithm.htm
4.375 | Cells are the basic building blocks of all living things. The human body is composed of trillions of cells. They provide structure for the body, take in nutrients from food, convert those nutrients into energy, and carry out specialized functions. Cells also contain the body’s hereditary material and can make copies of themselves.
Cells have many parts, each with a different function. Some of these parts, called organelles, are specialized structures that perform certain tasks within the cell. Human cells contain the following major parts, listed in alphabetical order:
- Cytoplasm
Within cells, the cytoplasm is made up of a jelly-like fluid (called the cytosol) and other structures that surround the nucleus.
The cytoskeleton is a network of long fibers that make up the cell’s structural framework. The cytoskeleton has several critical functions, including determining cell shape, participating in cell division, and allowing cells to move. It also provides a track-like system that directs the movement of organelles and other substances within cells.
- Endoplasmic reticulum (ER)
This organelle helps process molecules created by the cell. The endoplasmic reticulum also transports these molecules to their specific destinations either inside or outside the cell.
- Golgi apparatus
The Golgi apparatus packages molecules processed by the endoplasmic reticulum to be transported out of the cell.
- Lysosomes and peroxisomes
These organelles are the recycling center of the cell. They digest foreign bacteria that invade the cell, rid the cell of toxic substances, and recycle worn-out cell components.
- Mitochondria
Mitochondria are complex organelles that convert energy from food into a form that the cell can use. They have their own genetic material, separate from the DNA in the nucleus, and can make copies of themselves.
- Nucleus
The nucleus serves as the cell’s command center, sending directions to the cell to grow, mature, divide, or die. It also houses DNA (deoxyribonucleic acid), the cell’s hereditary material. The nucleus is surrounded by a membrane called the nuclear envelope, which protects the DNA and separates the nucleus from the rest of the cell.
- Plasma membrane
The plasma membrane is the outer lining of the cell. It separates the cell from its environment and allows materials to enter and leave the cell.
- Ribosomes
Ribosomes are organelles that process the cell’s genetic instructions to create proteins. These organelles can float freely in the cytoplasm or be connected to the endoplasmic reticulum (see above).
For more information about cells:
The Genetic Science Learning Center at the University of Utah offers an interactive introduction to cells and their many functions.
Arizona State University’s “Ask a Biologist” provides a description and illustration of each of the cell’s organelles.
Queen Mary University of London allows you to explore a 3-D cell and its parts.
Additional information about the cytoskeleton, including an illustration, is available from the Cytoplasm Tutorial. This resource is part of The Biology Project at the University of Arizona.
| http://ghr.nlm.nih.gov/handbook/basics/cell
4.0625 | Infinity (symbol: ∞) is an abstract concept describing something without any bound and is relevant in a number of fields, predominantly mathematics and physics. In mathematics, "infinity" is often treated as if it were a number (i.e., it counts or measures things: "an infinite number of terms") but it is not the same sort of number as natural or real numbers.
Georg Cantor formalized many ideas related to infinity and infinite sets during the late 19th and early 20th centuries. In the theory he developed, there are infinite sets of different sizes (called cardinalities). For example, the set of integers is countably infinite, while the infinite set of real numbers is uncountable.
- 1 History
- 2 Mathematics
- 3 Physics
- 4 Logic
- 5 Computing
- 6 Arts and cognitive sciences
- 7 See also
- 8 Notes
- 9 References
- 10 Further reading
- 11 External links
Ancient cultures had various ideas about the nature of infinity. The ancient Indians and Greeks did not define infinity in precise formalism as does modern mathematics, and instead approached infinity as a philosophical concept.
The earliest recorded idea of infinity comes from Anaximander, a pre-Socratic Greek philosopher who lived in Miletus. He used the word apeiron which means infinite or limitless. However, the earliest attestable accounts of mathematical infinity come from Zeno of Elea (c. 490 BCE? – c. 430 BCE?), a pre-Socratic Greek philosopher of southern Italy and member of the Eleatic School founded by Parmenides. Aristotle called him the inventor of the dialectic. He is best known for his paradoxes, described by Bertrand Russell as "immeasurably subtle and profound".
In accordance with the traditional view of Aristotle, the Hellenistic Greeks generally preferred to distinguish the potential infinity from the actual infinity; for example, instead of saying that there are an infinity of primes, Euclid prefers instead to say that there are more prime numbers than contained in any given collection of prime numbers (Elements, Book IX, Proposition 20).
However, recent readings of the Archimedes Palimpsest have hinted that Archimedes at least had an intuition about actual infinite quantities.
The Indian mathematical text Surya Prajnapti (c. 4th–3rd century BCE) classifies all numbers into three sets: enumerable, innumerable, and infinite. Each of these was further subdivided into three orders:
- Enumerable: lowest, intermediate, and highest
- Innumerable: nearly innumerable, truly innumerable, and innumerably innumerable
- Infinite: nearly infinite, truly infinite, infinitely infinite
In this work, two basic types of infinite numbers are distinguished. On both physical and ontological grounds, a distinction was made between asaṃkhyāta ("countless, innumerable") and ananta ("endless, unlimited"), between rigidly bounded and loosely bounded infinities.
European mathematicians started using infinite numbers in a systematic fashion in the 17th century. John Wallis first used the notation ∞ for such a number, and exploited it in area calculations by dividing the region into infinitesimal strips of width on the order of 1/∞. Euler used the notation i for an infinite number, and exploited it by applying the binomial formula to the i-th power, and infinite products of i factors.
The infinity symbol ∞ (sometimes called the lemniscate) is a mathematical symbol representing the concept of infinity. The symbol is encoded in Unicode at U+221E ∞ INFINITY (HTML &infin;) and in LaTeX as \infty.
Leibniz, one of the co-inventors of infinitesimal calculus, speculated widely about infinite numbers and their use in mathematics. To Leibniz, both infinitesimals and infinite quantities were ideal entities, not of the same nature as appreciable quantities, but enjoying the same properties in accordance with the Law of Continuity.
In real analysis, the symbol ∞, called "infinity", is used to denote an unbounded limit. x → ∞ means that x grows without bound, and x → −∞ means the value of x is decreasing without bound. If f(t) ≥ 0 for every t, then
- ∫_a^b f(t) dt = ∞ means that f(t) does not bound a finite area from a to b.
- ∫_{−∞}^{+∞} f(t) dt = ∞ means that the area under f(t) is infinite.
- ∫_{−∞}^{+∞} f(t) dt = a means that the total area under f(t) is finite, and equals a.
Infinity is also used to describe infinite series (a short numerical sketch follows the list):
- ∑_{n=1}^{∞} a_n = a means that the sum of the infinite series converges to some real value a.
- ∑_{n=1}^{∞} a_n = ∞ means that the sum of the infinite series diverges in the specific sense that the partial sums grow without bound.
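As a concrete check on the two cases, the sketch below compares partial sums of the geometric series 1/2 + 1/4 + 1/8 + ..., which converge to 1, with partial sums of the harmonic series 1 + 1/2 + 1/3 + ..., which grow without bound:

```python
geometric = harmonic = 0.0
for n in range(1, 21):
    geometric += 1.0 / 2**n   # converges: partial sums approach 1
    harmonic += 1.0 / n       # diverges: partial sums keep growing
    if n % 5 == 0:
        print(f"n={n:2d}  geometric={geometric:.6f}  harmonic={harmonic:.6f}")
```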
Infinity can be used not only to define a limit but as a value in the extended real number system. Points labeled +∞ and −∞ can be added to the topological space of the real numbers, producing the two-point compactification of the real numbers. Adding algebraic properties to this gives us the extended real numbers. We can also treat +∞ and −∞ as the same, leading to the one-point compactification of the real numbers, which is the real projective line. Projective geometry also refers to a line at infinity in plane geometry, a plane at infinity in three-dimensional space, and so forth for higher dimensions.
In complex analysis the symbol ∞, called "infinity", denotes an unsigned infinite limit. x → ∞ means that the magnitude of x grows beyond any assigned value. A point labeled ∞ can be added to the complex plane as a topological space giving the one-point compactification of the complex plane. When this is done, the resulting space is a one-dimensional complex manifold, or Riemann surface, called the extended complex plane or the Riemann sphere. Arithmetic operations similar to those given above for the extended real numbers can also be defined, though there is no distinction in the signs (therefore one exception is that infinity cannot be added to itself). On the other hand, this kind of infinity enables division by zero, namely z/0 = ∞ for any nonzero complex number z. In this context it is often useful to consider meromorphic functions as maps into the Riemann sphere taking the value of ∞ at the poles. The domain of a complex-valued function may be extended to include the point at infinity as well. One important example of such functions is the group of Möbius transformations.
The original formulation of infinitesimal calculus by Isaac Newton and Gottfried Leibniz used infinitesimal quantities. In the twentieth century, it was shown that this treatment could be put on a rigorous footing through various logical systems, including smooth infinitesimal analysis and nonstandard analysis. In the latter, infinitesimals are invertible, and their inverses are infinite numbers. The infinities in this sense are part of a hyperreal field; there is no equivalence between them as with the Cantorian transfinites. For example, if H is an infinite number, then H + H = 2H and H + 1 are distinct infinite numbers. This approach to non-standard calculus is fully developed in Keisler (1986).
A different form of "infinity" are the ordinal and cardinal infinities of set theory. Georg Cantor developed a system of transfinite numbers, in which the first transfinite cardinal is aleph-null (ℵ0), the cardinality of the set of natural numbers. This modern mathematical conception of the quantitative infinite developed in the late nineteenth century from work by Cantor, Gottlob Frege, Richard Dedekind and others, using the idea of collections, or sets.
Dedekind's approach was essentially to adopt the idea of one-to-one correspondence as a standard for comparing the size of sets, and to reject the view of Galileo (which derived from Euclid) that the whole cannot be the same size as the part (however, see Galileo's paradox where he concludes that positive integers which are squares and all positive integers are the same size). An infinite set can simply be defined as one having the same size as at least one of its proper parts; this notion of infinity is called Dedekind infinite. The diagram gives an example: viewing lines as infinite sets of points, the left half of the lower blue line can be mapped in a one-to-one manner (green correspondences) to the higher blue line, and, in turn, to the whole lower blue line (red correspondences); therefore the whole lower blue line and its left half have the same cardinality, i.e. "size".
Cantor defined two kinds of infinite numbers: ordinal numbers and cardinal numbers. Ordinal numbers may be identified with well-ordered sets, or counting carried on to any stopping point, including points after an infinite number have already been counted. Generalizing finite and the ordinary infinite sequences which are maps from the positive integers leads to mappings from ordinal numbers, and transfinite sequences. Cardinal numbers define the size of sets, meaning how many members they contain, and can be standardized by choosing the first ordinal number of a certain size to represent the cardinal number of that size. The smallest ordinal infinity is that of the positive integers, and any set which has the cardinality of the integers is countably infinite. If a set is too large to be put in one to one correspondence with the positive integers, it is called uncountable. Cantor's views prevailed and modern mathematics accepts actual infinity. Certain extended number systems, such as the hyperreal numbers, incorporate the ordinary (finite) numbers and infinite numbers of different sizes.
Cardinality of the continuum
One of Cantor's most important results was that the cardinality of the continuum, 𝔠, is greater than that of the natural numbers, ℵ0; that is, there are more real numbers R than natural numbers N. Namely, Cantor showed that 𝔠 = 2^ℵ0 > ℵ0 (see Cantor's diagonal argument or Cantor's first uncountability proof).
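The core of the diagonal argument can be mimicked computationally: given any list of n binary sequences (each of length at least n), flipping the k-th digit of the k-th sequence yields a sequence that differs from every entry of the list. A toy illustration:

```python
def diagonal_complement(sequences):
    """Build a binary string differing from the k-th string at position k."""
    return "".join("1" if seq[k] == "0" else "0"
                   for k, seq in enumerate(sequences))

listed = ["0000", "1111", "0101", "1010"]
d = diagonal_complement(listed)
print(d)            # '1011'
print(d in listed)  # False: d disagrees with every listed sequence
```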
The continuum hypothesis states that there is no cardinal number between the cardinality of the reals and the cardinality of the natural numbers, that is, 𝔠 = ℵ1 (see Beth one). However, this hypothesis can neither be proved nor disproved within the widely accepted Zermelo–Fraenkel set theory, even assuming the Axiom of Choice.
Cardinal arithmetic can be used to show not only that the number of points in a real number line is equal to the number of points in any segment of that line, but that this is equal to the number of points on a plane and, indeed, in any finite-dimensional space.
The first of these results is apparent by considering, for instance, the tangent function, which provides a one-to-one correspondence between the interval (−π/2, π/2) and R (see also Hilbert's paradox of the Grand Hotel). The second result was proved by Cantor in 1878, but only became intuitively apparent in 1890, when Giuseppe Peano introduced the space-filling curves, curved lines that twist and turn enough to fill the whole of any square, or cube, or hypercube, or finite-dimensional space. These curves can be used to define a one-to-one correspondence between the points in the side of a square and those in the square.
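The tangent correspondence is easy to see numerically: as x approaches ±π/2 from inside the interval, tan(x) takes arbitrarily large positive and negative values, sweeping out the whole real line. A small check:

```python
import math

# tan maps the open interval (-pi/2, pi/2) one-to-one onto the real line;
# near the endpoints the values blow up toward minus/plus infinity.
for x in (-1.57, -1.0, 0.0, 1.0, 1.57):
    print(f"tan({x:+.2f}) = {math.tan(x):+10.4f}")
```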
Geometry and topology
Infinite-dimensional spaces are widely used in geometry and topology, particularly as classifying spaces, notably Eilenberg−MacLane spaces. Common examples are the infinite-dimensional complex projective space K(Z,2) and the infinite-dimensional real projective space K(Z/2Z,1).
The structure of a fractal object is reiterated in its magnifications. Fractals can be magnified indefinitely without losing their structure and becoming "smooth"; they have infinite perimeters—some with infinite, and others with finite surface areas. One such fractal curve with an infinite perimeter and finite surface area is the Koch snowflake.
Mathematics without infinity
Leopold Kronecker was skeptical of the notion of infinity and how his fellow mathematicians were using it in the 1870s and 1880s. This skepticism was developed in the philosophy of mathematics called finitism, an extreme form of the philosophical and mathematical schools of constructivism and intuitionism.
In physics, approximations of real numbers are used for continuous measurements and natural numbers are used for discrete measurements (i.e. counting). It is therefore assumed by physicists that no measurable quantity could have an infinite value, for instance by taking an infinite value in an extended real number system or by requiring the counting of an infinite number of events. It is, for example, presumed impossible for any type of body to have infinite mass or infinite energy. Concepts of infinite things such as an infinite plane wave exist, but there are no experimental means to generate them.
Theoretical applications of physical infinity
The practice of refusing infinite values for measurable quantities does not come from a priori or ideological motivations, but rather from more methodological and pragmatic motivations. One of the needs of any physical and scientific theory is to give usable formulas that correspond to or at least approximate reality. As an example, if any object of infinite gravitational mass were to exist, any usage of the formula to calculate the gravitational force would lead to an infinite result, which would be of no benefit since the result would be always the same regardless of the position and the mass of the other object. The formula would be useful neither to compute the force between two objects of finite mass nor to compute their motions. If an infinite mass object were to exist, any object of finite mass would be attracted with infinite force (and hence acceleration) by the infinite mass object, which is not what we can observe in reality. Sometimes an infinite result for a physical quantity may mean that the theory being used to compute the result is approaching the point where it fails. This may help to indicate the limitations of a theory.
This point of view does not mean that infinity cannot be used in physics. For convenience's sake, calculations, equations, theories and approximations often use infinite series, unbounded functions, etc., and may involve infinite quantities. Physicists however require that the end result be physically meaningful. In quantum field theory infinities arise which need to be interpreted in such a way as to lead to a physically meaningful result, a process called renormalization.
However, there are some theoretical circumstances where the end result is infinity. One example is the singularity in the description of black holes. Some solutions of the equations of the general theory of relativity allow for finite mass distributions of zero size, and thus infinite density. This is an example of what is called a mathematical singularity, or a point where a physical theory breaks down. This does not necessarily mean that physical infinities exist; it may mean simply that the theory is incapable of describing the situation properly. Two other examples occur in inverse-square force laws of the gravitational force equation of Newtonian gravity and Coulomb's law of electrostatics. At r=0 these equations evaluate to infinities.
The first published proposal that the universe is infinite came from Thomas Digges in 1576. Eight years later, in 1584, the Italian philosopher and astronomer Giordano Bruno proposed an unbounded universe in On the Infinite Universe and Worlds: "Innumerable suns exist; innumerable earths revolve around these suns in a manner similar to the way the seven planets revolve around our sun. Living beings inhabit these worlds."
Cosmologists have long sought to discover whether infinity exists in our physical universe: Are there an infinite number of stars? Does the universe have infinite volume? Does space "go on forever"? This is an open question of cosmology. Note that the question of being infinite is logically separate from the question of having boundaries. The two-dimensional surface of the Earth, for example, is finite, yet has no edge. By travelling in a straight line with respect to the Earth's curvature one will eventually return to the exact spot one started from. The universe, at least in principle, might have a similar topology. If so, one might eventually return to one's starting point after travelling in a straight line through the universe for long enough.
If, on the other hand, the universe were not curved like a sphere but had a flat topology, it could be both unbounded and infinite. The curvature of the universe can be measured through multipole moments in the spectrum of the cosmic background radiation. To date, analysis of the radiation patterns recorded by the WMAP spacecraft hints that the universe has a flat topology. This would be consistent with an infinite physical universe.
However, the universe could also be finite, even if its curvature is flat. An easy way to understand this is to consider two-dimensional examples, such as video games where items that leave one edge of the screen reappear on the other. The topology of such games is toroidal and the geometry is flat. Many bounded, flat possibilities also exist for three-dimensional space.
In logic an infinite regress argument is "a distinctively philosophical kind of argument purporting to show that a thesis is defective because it generates an infinite series when either (form A) no such series exists or (form B) were it to exist, the thesis would lack the role (e.g., of justification) that it is supposed to play."
The IEEE floating-point standard (IEEE 754) specifies the positive and negative infinity values. These are defined as the result of arithmetic overflow, division by zero, and other exceptional operations.
Some programming languages, such as Java and J, allow the programmer an explicit access to the positive and negative infinity values as language constants. These can be used as greatest and least elements, as they compare (respectively) greater than or less than all other values. They have uses as sentinel values in algorithms involving sorting, searching, or windowing.
In languages that do not have greatest and least elements, but do allow overloading of relational operators, it is possible for a programmer to create the greatest and least elements. In languages that do not provide explicit access to such values from the initial state of the program, but do implement the floating-point data type, the infinity values may still be accessible and usable as the result of certain operations.
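In Python, for example, the IEEE 754 infinities behave exactly this way: they arise from overflow, compare beyond every finite float, and serve as natural sentinel values. A brief sketch:

```python
import math

inf = float("inf")     # IEEE 754 positive infinity (also math.inf)
print(inf > 1e308)     # True: greater than any finite float
print(1e308 * 10)      # inf: arithmetic overflow yields infinity
print(inf - inf)       # nan: some operations on infinities are indeterminate

# Sentinel use: a starting value guaranteed to exceed every real candidate.
values = [3.5, -2.0, 7.1]
best = math.inf
for v in values:
    best = min(best, v)
print(best)            # -2.0
```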
Arts and cognitive sciences
Perspective artwork utilizes the concept of imaginary vanishing points, or points at infinity, located at an infinite distance from the observer. This allows artists to create paintings that realistically render space, distances, and forms. Artist M. C. Escher is specifically known for employing the concept of infinity in his work in this and other ways.
Cognitive scientist George Lakoff considers the concept of infinity in mathematics and the sciences as a metaphor. This perspective is based on the basic metaphor of infinity (BMI), defined as the ever-increasing sequence <1,2,3,...>.
The symbol is often used romantically to represent eternal love. Several types of jewelry are fashioned into the infinity shape for this purpose.
- Aleph number
- Indeterminate form
- Infinite monkey theorem
- Infinite set
- Paradoxes of infinity
- Surreal number
- Gowers, Timothy; Barrow-Green, June; Leader, Imre (2008). The Princeton Companion to Mathematics. Princeton University Press. p. 616. ISBN 0-691-11880-9. Extract of page 616
- Maddox 2002, pp. 113–117
- Wallace 2004, pg. 44
- Scott, Joseph Frederick (1981), The mathematical work of John Wallis, D.D., F.R.S., (1616–1703) (2 ed.), American Mathematical Society, p. 24, ISBN 0-8284-0314-7.
- Martin-Löf, Per (1990), "Mathematics of infinity", COLOG-88 (Tallinn, 1988), Lecture Notes in Computer Science 417, Berlin: Springer, pp. 146–197, doi:10.1007/3-540-52335-9_54, MR 1064143.
- O'Flaherty, Wendy Doniger (1986), Dreams, Illusion, and Other Realities, University of Chicago Press, p. 243, ISBN 9780226618555.
- Toker, Leona (1989), Nabokov: The Mystery of Literary Structures, Cornell University Press, p. 159, ISBN 9780801422119.
- Continuity and Infinitesimals entry by John Lane Bell in the Stanford Encyclopedia of Philosophy
- Jesseph, Douglas Michael (1998). "Leibniz on the Foundations of the Calculus: The Question of the Reality of Infinitesimal Magnitudes". Perspectives on Science 6 (1&2): 6–40. ISSN 1063-6145. OCLC 42413222. Archived from the original on 16 February 2010. Retrieved 16 February 2010.
- Taylor 1955, p. 63
- These uses of infinity for integrals and series can be found in any standard calculus text, such as Swokowski 1983, pp. 468–510
- Aliprantis, Charalambos D.; Burkinshaw, Owen (1998), Principles of Real Analysis (3rd ed.), San Diego, CA: Academic Press, Inc., p. 29, ISBN 0-12-050257-7, MR 1669668.
- Gemignani 1990, p. 177
- Moore, A. W. (1991). The Infinite. Routledge.
- Kline, Morris (1972). Mathematical Thought from Ancient to Modern Times. New York: Oxford University Press. pp. 1197–1198. ISBN 0-19-506135-7.
- Doric Lenses - Application Note - Axicons - 2. Intensity Distribution. Retrieved 7 April 2014.
- John Gribbin (2009), In Search of the Multiverse: Parallel Worlds, Hidden Dimensions, and the Ultimate Quest for the Frontiers of Reality, ISBN 9780470613528. p. 88
- Weeks, Jeffrey (December 12, 2001). The Shape of Space. CRC Press. ISBN 978-0824707095.
- Kaku, M. (2006). Parallel worlds. Knopf Doubleday Publishing Group.
- Cambridge Dictionary of Philosophy, Second Edition, p. 429
- Gosling, James; et al. (27 July 2012). "4.2.3.". The Java™ Language Specification (Java SE 7 ed.). California, U.S.A.: Oracle America, Inc. Retrieved 6 September 2012.
- Stokes, Roger (July 2012). "19.2.1". Learning J. Retrieved 6 September 2012.
- Kline, Morris (1985). Mathematics for the nonmathematician. Courier Dover Publications. p. 229. ISBN 0-486-24823-2., Section 10-7, p. 229
- Gemignani, Michael C. (1990), Elementary Topology (2nd ed.), Dover, ISBN 0-486-66522-4
- Keisler, H. Jerome (1986), Elementary Calculus: An Approach Using Infinitesimals (2nd ed.)
- Maddox, Randall B. (2002), Mathematical Thinking and Writing: A Transition to Abstract Mathematics, Academic Press, ISBN 0-12-464976-9
- Swokowski, Earl W. (1983), Calculus with Analytic Geometry (Alternate ed.), Prindle, Weber & Schmidt, ISBN 0-87150-341-7
- Taylor, Angus E. (1955), Advanced Calculus, Blaisdell Publishing Company
- David Foster Wallace (2004). Everything and More: A Compact History of Infinity. Norton, W. W. & Company, Inc. ISBN 0-393-32629-2.
- Amir D. Aczel (2001). The Mystery of the Aleph: Mathematics, the Kabbalah, and the Search for Infinity. New York: Pocket Books. ISBN 0-7434-2299-6.
- D. P. Agrawal (2000). Ancient Jaina Mathematics: an Introduction, Infinity Foundation.
- Bell, J. L.: Continuity and infinitesimals. Stanford Encyclopedia of philosophy. Revised 2009.
- L. C. Jain (1982). Exact Sciences from Jaina Sources.
- L. C. Jain (1973). "Set theory in the Jaina school of mathematics", Indian Journal of History of Science.
- George G. Joseph (2000). The Crest of the Peacock: Non-European Roots of Mathematics (2nd ed.). Penguin Books. ISBN 0-14-027778-1.
- Eli Maor (1991). To Infinity and Beyond. Princeton University Press. ISBN 0-691-02511-8.
- Rudy Rucker (1995). Infinity and the Mind: The Science and Philosophy of the Infinite. Princeton University Press. ISBN 0-691-00172-3.
- Navjyoti Singh (1988). Jaina Theory of Actual Infinity and Transfinite Numbers. Journal of Asiatic Society 30.
- Look up infinity in Wiktionary, the free dictionary.
- Wikibooks has a book on the topic of: Infinity is not a number.
- Wikimedia Commons has media related to Infinity.
- The Infinite entry in the Internet Encyclopedia of Philosophy
- Infinity on In Our Time at the BBC.
- A Crash Course in the Mathematics of Infinite Sets, by Peter Suber. From the St. John's Review, XLIV, 2 (1998) 1–59. The stand-alone appendix to Infinite Reflections, below. A concise introduction to Cantor's mathematics of infinite sets.
- Infinite Reflections, by Peter Suber. How Cantor's mathematics of the infinite solves a handful of ancient philosophical problems of the infinite. From the St. John's Review, XLIV, 2 (1998) 1–59.
- Grime, James. "Infinity is bigger than you think". Numberphile. Brady Haran.
- Infinity, Principia Cybernetica
- Hotel Infinity
- John J. O'Connor and Edmund F. Robertson (1998). 'Georg Ferdinand Ludwig Philipp Cantor', MacTutor History of Mathematics archive.
- John J. O'Connor and Edmund F. Robertson (2000). 'Jaina mathematics', MacTutor History of Mathematics archive.
- Ian Pearce (2002). 'Jainism', MacTutor History of Mathematics archive.
- Source page on medieval and modern writing on Infinity
- The Mystery Of The Aleph: Mathematics, the Kabbalah, and the Search for Infinity
- Dictionary of the Infinite (compilation of articles about infinity in physics, mathematics, and philosophy) | https://en.wikipedia.org/wiki/Infinity |
4.09375 | The archaeological site of Jiahu in the Yellow River basin of Henan Province, central China, is remarkable for the cultural and artistic remains uncovered there. These remains, such as houses, kilns, pottery, turquoise carvings, tools made from stone and bone—and most remarkably—bone flutes, are evidence of a flourishing and complex society as early as the Neolithic period, when Jiahu was first occupied.
Fragments of thirty flutes were discovered in the burials at Jiahu and six of these represent the earliest examples of playable musical instruments ever found. The flutes were carved from the wing bone of the red-crowned crane, with five to eight holes capable of producing varied sounds in a nearly accurate octave. The intended use of the flutes for the Neolithic musician is unknown, but it is speculated that they functioned in rituals and special ceremonies. Chinese myths known from nearly 6,000 years after the flutes were made tell of the cosmological importance of music and the association of flute playing and cranes. The sound of the flutes is alleged to lure cranes to a waiting hunter. Whether the same association between flutes and cranes existed for the Neolithic inhabitants at Jiahu is not known, but the remains there may provide clues to the underpinnings of later cultural traditions in central China.
Pictograms, signs carved on tortoiseshells, were also uncovered at Jiahu. In later Chinese culture dating to around 3500 B.C., shells were used as a form of divination. They were subjected to intense heat and the cracks that formed were read as omens. The cracks were then carved as permanent marks on the surface of the shell. The evidence of shell pictograms from Jiahu may indicate that this tradition, or a related one, has much deeper roots than previously considered.
Tedesco, Laura Anne. “Jiahu (ca. 7000–5700 B.C.).” In Heilbrunn Timeline of Art History. New York: The Metropolitan Museum of Art, 2000–. http://www.metmuseum.org/toah/hd/jiah/hd_jiah.htm (October 2000)
Chang, Kwang-chih. The Archaeology of Ancient China. 4th ed. New Haven: Yale University Press, 1986.
Keightley, David N., ed. The Origins of Chinese Civilization. Berkeley: University of California Press, 1983. | http://metmuseum.org/toah/hd/jiah/hd_jiah.htm |
4.25 | It's hard to imagine what Mars might've looked like when its valleys were filled with water and its thick atmosphere sported white fluffy clouds. Now try to imagine what ancient Mars might look like with its own biosphere.
Using elevation data from NASA's Mars Reconnaissance Orbiter, software engineer Kevin Gill was inspired to create a virtual version of the red planet with a difference. "I had been doing similar models of Earth and have seen attempts by others of showing life on Mars, so I figured I'd give it a go," Gill told Discovery News. "It was a good way to learn about the planet, be creative and improve the software I was rendering it in."
In the rendering, a huge ocean fills one side of the planet, feeding one of the longest valleys in the solar system, Valles Marineris. Mars' huge volcanoes — Olympus Mons, Pavonis Mons, Ascraeus Mons and Arsia Mons — dominate the Tharsis Bulge, their peaks poking above the atmosphere. Gill imagined that the high-altitude equatorial volcanic region would likely be a desert where little vegetation would grow, whereas lower latitudes would support a wetter climate boosting the presence of greenery.
As we send more missions to Mars, it's becoming clear that the planet was once a wet world with features that were very Earth-like. For example, NASA's recently landed Mars Science Laboratory rover Curiosity touched down on an ancient riverbed inside Gale Crater where water, perhaps two feet deep, used to flow. Evidence of clays near Mars Exploration Rover Opportunity are also evidence that minerals have interacted with surface water some time in the past.
There's also evidence for huge gullies and river deltas that can be seen from orbit. Ancient coastlines have also been spotted, forming the outline of a vast ocean that likely filled the deep Vastitas Borealis basin in the northern hemisphere.
It is also known that the Martian atmosphere was a lot thicker than it is now. Over the eons, the solar wind has been eroding the upper atmosphere — as there's no global magnetic field to deflect the solar wind efficiently. So far, though, there is little evidence that the world used to support any alien flora or fauna, let alone an entire biosphere.
Although Gill is the first to admit that he made several assumptions as to what Mars might look like with an Earth-like biosphere, the image is striking.
"I am a software engineer by trade and certainly not a planetary scientist, so most of my assumptions were based on simply comparing the Mars terrain to similar features here on Earth (e.g. elevation, proximity to bodies of water, physical features, geographical position, etc) and then using the corresponding textures from the Blue Marble images," he added.
For those with a fascination for Mars, this is a wonderful alternative version of a planet that we are so used to seeing in different shades of lifeless red.
Image courtesy of Kevin Gill
This article was originally published at Discovery News. | http://mashable.com/2013/01/03/ancient-mars-blue/
4.125 | High German consonant shift
In historical linguistics, the High German consonant shift or second Germanic consonant shift is a phonological development (sound change) that took place in the southern parts of the West Germanic dialect continuum in several phases. It probably began between the third and fifth centuries and was almost complete before the earliest written records in High German were produced in the ninth century. The resulting language, Old High German, can be neatly contrasted with the other continental West Germanic languages, which for the most part did not experience the shift, and with Old English, which remained completely unaffected.
- 1 General description
- 2 Overview table
- 3 Core group
- 4 Other changes
- 5 Chronology
- 6 Geographical distribution
- 7 Lombardic
- 8 Sample texts
- 9 Irregularities in modern Standard German
- 10 See also
- 11 References
- 12 Sources
The High German consonant shift altered a number of consonants in the Southern German dialects, and thus also in modern Standard German, Yiddish, and Luxembourgish, and so explains why many German words have different consonants from the obviously related words in English, Dutch and the Scandinavian languages. The term is sometimes used to refer to a core group of nine individual consonant modifications. Alternatively, it may encompass other phonological changes that took place in the same period. For the core group, there are three changes, which may be thought of as three successive phases. Each phase affected three consonants, making nine modifications in total:
- The three Germanic voiceless plosives became fricatives in certain phonetic environments (English ship /ʃɪp/, Dutch schip [sxɪp], Norwegian skip [ʃɪp] map to German Schiff [ʃɪf]);
- The same sounds became affricates in other positions (apple /ˈæpəl/, appel [ˈɑpəl], eple [ɛ:ple] : Apfel [ˈʔap͡fəl]); and
- The three voiced plosives became voiceless (door /dɔːr/, deur [døːr], dør [døːr] : Tür [tyːɐ̯]).
Since phases 1 and 2 affect the same voiceless sounds, some scholars find it more convenient to treat them together, thus making for only a two phase process: shifts in voiceless consonants (phases 1–2 of the three phase model) and in voiced consonants (phase 3). The two phase model has advantages for typology, but it does not reflect chronology.
Of the other changes that sometimes are bracketed within the High German consonant shift, the most important (sometimes thought of as the fourth phase) is:
- 4. /θ/ (and its allophone [ð]) became /d/ (this /ðɪs/ : dies [diːs]). However, this also applies to Dutch (this : dit [dɪt]), Norwegian, Danish, and Swedish, but not Icelandic (this : dette [dɛte] / detta [dɛta], but þetta [ˈθɛʰta]).
This phenomenon is known as the "High German" consonant shift because it affects the High German dialects (i.e. those of the mountainous south), principally the Upper German dialects, though in part it also affects the Central German dialects. However, the fourth phase also included Low German and Dutch. It is also known as the "second Germanic" consonant shift to distinguish it from the "(first) Germanic consonant shift" as defined by Grimm's law, and its refinement, Verner's law.
The High German consonant shift did not occur in a single movement, but rather as a series of waves over several centuries. The geographical extent of these waves varies. They all appear in the southernmost dialects, and spread northwards to differing degrees, giving the impression of a series of pulses of varying force emanating from what is now Austria and Switzerland. Whereas some are found only in the southern parts of Alemannic (which includes Swiss German) or Bavarian (which includes Austrian), most are found throughout the Upper German area, and some spread on into the Central German dialects. Indeed, Central German is often defined as the area between the Appel/Apfel and the Schip/Schiff boundaries, thus between complete shift of Germanic /p/ (Upper German) and complete lack thereof (Low German). The shift /θ/ > /d/ was more successful; it spread all the way to the North Sea and affected Dutch as well as German. Most, but not all of these changes have become part of modern Standard German.
The High German consonant shift is a good example of a chain shift, as was its predecessor, the first Germanic consonant shift. For example, phases 1 and 2 left the language without a /t/ phoneme, as this had shifted to /s/ or /t͡s/. Phase 3 filled this gap (/d/ > /t/), but left a new gap at /d/, which phase 4 then filled (/θ/ > /d/).
The effects of the shift are most obvious for the non-specialist when comparing Modern German lexemes containing shifted consonants with their Modern English or Dutch unshifted equivalents. The following overview table is arranged according to the original Proto-Indo-European (PIE) phonemes. Note that the pairs of words used to illustrate sound shifts must be cognates; they need not be semantic equivalents. German Zeit means 'time' but it is cognate with tide, and only the latter is relevant here.
| PIE > Germanic | Phase | High German shift (Germanic > OHG) | Examples (Modern German) | Century | Geographical extent³ | Standard German | Luxembourgish | Dutch |
|---|---|---|---|---|---|---|---|---|
| G: /b/ > /p/¹ | 1 | /p/ > /ff/ | schlafen, Schiff (cf. sleep, ship; Dutch slapen, schip; Norwegian slappe av "relax", skip) | 4/5 | Upper and Central German | Yes | Yes | No |
| G: /b/ > /p/¹ | 2 | /p/ > /p͡f/ | Pflug, Apfel, Pfad, Pfuhl, scharf⁴ (cf. plough, apple, path, pool, sharp; Dutch ploeg, appel, pad, poel, scherp; Norwegian plog, eple, skarp) | – | – | – | – | – |
| G: /d/ > /t/ | 1 | /t/ > /t͡s/ > /ss/ | essen, dass, aus⁵ (cf. eat, that, out; Dutch eten, dat, uit; Norwegian ete, det, ut) | 4/5 | Upper and Central German | Yes | Yes | No |
| G: /d/ > /t/ | 2 | /t/ > /tːs/ > /t͡s/ | Zeit⁶, Zwei⁶, Zehe (cf. tide, two, toe; Dutch tijd, twee, teen; Norwegian tid, to, tå) | 5/6 | Upper and Central German | Yes | Yes | No |
| G: /ɡ/ > /k/ | 1 | /k/ > /x/ | machen, brechen, ich (cf. make, break, I; Dutch maken, breken, ik⁷) | 4/5 | Upper and Central German | Yes | Yes | No |
| G: /ɡ/ > /k/ | 2 | /k/ > /k͡x/ | Bavarian Kchind (cf. German Kind, Dutch kind "child") | – | … and High Alemannic | – | – | – |
| G: /bʱ/ > [β]; V: /p/ > /ɸ/ > [β]² | – | [β] > [b] | geben, Weib (cf. give, wife; Dutch geven, wijf) | 7/8 | Upper German and some varieties of Central German | Yes | No | No |
| G: /dʱ/ > [ð]; V: /t/ > /θ/ > [ð] | – | [ð] > [d] | gut (cf. English good, Dutch goed, Norwegian god, Icelandic góður) | 2–4 | Throughout West Germanic | Yes | Yes | Yes |
| G: /ɡʱ/ > [ɣ]; V: /k/ > /x/ > [ɣ] | – | [ɣ] > [ɡ] | gut (cf. Dutch goed) | 7/8 | Upper German and some varieties of Central German | Yes | Partly | No |
| G: (/bʱ/ >) [β] > [b]; V: (/p/ > /ɸ/ >) [β] > [b] | 3 | /b/ > /p/ | Bavarian perg, pist (cf. German Berg "hill", bist "(you) are") | 8/9 | Parts of Bavarian/Alemannic; other Upper German only for geminates | Partly | Partly | No |
| G: (/dʱ/ >) [ð] > [d]; V: (/t/ > /θ/ >) [ð] > [d] | 3 | /d/ > /t/ | Tag, Mitte, Vater (cf. day, middle; Dutch dag, middel, vader "father"⁸) | – | – | – | – | – |
| G: (/ɡʱ/ >) [ɣ] > [ɡ]; V: (/k/ > /x/ >) [ɣ] > [ɡ] | 3 | /ɡ/ > /k/ | Bavarian Kot (cf. German Gott "God"); German Brücke (cf. English bridge, Dutch brug) | 8/9 | Parts of Bavarian/Alemannic; other Upper German only for geminates | Partly | Partly | No |
| G: /t/ > /θ/ | 4 | /θ/ > [ð] > /d/ | Dorn, Distel, durch, Bruder (cf. thorn, thistle, through, brother) | 9/10 | Throughout continental West Germanic | Yes | Yes | Yes |

1. G: Grimm's law
2. V: Verner's law
3. Approximate; isoglosses may vary.
4. Old High German scarph, Middle High German scharpf.
5. Old High German ezzen, daz, ūz.
6. Note that in modern German ⟨z⟩ is pronounced /t͡s/.
7. Old English ic, "I".
8. Old English fæder, "father"; English has shifted d > th in a few OE words ending in vowel + -der.
The first phase, which affected the whole of the High German area, affected the voiceless plosives /p/, /t/ and /k/ in intervocalic and word-final position. These became geminated (long) fricatives, except in word-final position where they were shortened and merged with the existing single consonants. Geminate plosives in words like *appul "apple" or *katta "cat" were not affected, nor were plosives preceded by another consonant like in *skarp "sharp" or *hert "heart". These remained unshifted until the second phase.
- /p/ > /ff/ (> /f/ finally)
- /t/ > ⟨zz⟩ (> ⟨z⟩ finally)
- /k/ > /xx/ (> /x/ finally)
/p/ presumably went through an intermediate bilabial stage /ɸ/, although no distinction between /ɸ/ and /f/ was made in writing. It can be assumed that the two sounds merged early on.
The letter ⟨z⟩ stands for a voiceless fricative that is distinct somehow from ⟨s⟩. The exact nature of the distinction is unknown; possibly ⟨s⟩ was apical while ⟨z⟩ was laminal. It remained distinct from /s/ throughout Old High German and most of the Middle High German period, and was not affected by the late Old High German voicing of prevocalic /s/ to /z/.
This phase has been dated as early as the 4th century, though this is highly debated. The first certain examples of the shift are from the Edictus Rothari (a. 643, oldest extant manuscript after 650), a Latin text of the Lombards. Lombard personal names show *b > /p/, having pert, perg, prand for bert, berg, brand. According to most scholars, the pre-Old High German runic inscriptions of about a. 600 show no convincing trace of the consonant shift.
In many West Central German dialects, the words dat, wat, et ("that, what, it") did not shift to das, was, es, even though t was shifted in other words. It is not quite clear why these exceptions occurred.
- Old English slǣpan : Old High German slāfan (English sleep /sliːp/, Dutch slapen [ˈslaːpə(n)] : German schlafen [ˈʃlaːfən])
- OE strǣt : OHG strāzza (English street /striːt/, Dutch straat [straːt] : German Straße [ˈʃtʁaːsə])
- OE rīce : OHG rīhhi (English rich /rɪtʃ/, Dutch rijk [rɛi̯k] : German reich [ʁaɪ̯ç])
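Treated as a toy string-rewrite system, this first phase can be sketched in a few lines of Python. The sketch is illustrative only and not from the source: it encodes just the rule that single postvocalic /p t k/ become geminate fricatives (single word-finally), it ignores vowel length and later degemination, and the input forms are schematic pre-shift shapes.

```python
VOWELS = set("aeiouāēīōūæ")
GEMINATE = {"p": "ff", "t": "zz", "k": "hh"}  # medial geminate fricatives (OHG spellings)
FINAL = {"p": "f", "t": "z", "k": "h"}        # shortened in word-final position

def phase1(word: str) -> str:
    """First phase of the shift: postvocalic /p t k/ become fricatives."""
    out = []
    for i, ch in enumerate(word):
        after_vowel = i > 0 and word[i - 1] in VOWELS
        if ch in GEMINATE and after_vowel:
            if i == len(word) - 1:
                out.append(FINAL[ch])          # word-final: single fricative
                continue
            if word[i + 1] in VOWELS:
                out.append(GEMINATE[ch])       # intervocalic: geminate
                continue
        out.append(ch)                         # geminates, clusters etc. untouched
    return "".join(out)

for form in ["etan", "makōn", "ūt", "skarp", "appul"]:
    print(f"{form:6} -> {phase1(form)}")
# etan -> ezzan, makōn -> mahhōn, ūt -> ūz;
# skarp and appul stay, since the plosive follows a consonant / is geminate
```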
In the second phase, which was completed by the 8th century, the same sounds became affricates in three environments: in word-initial position; when geminated; and after a liquid consonant (/l/ or /r/) or nasal consonant (/m/ or /n/).
- /p/ > /p͡f/ (also written ⟨ph⟩ in OHG)
- /t/ > /t͡s/ (written ⟨z⟩ or ⟨tz⟩)
- /k/ > /k͡x/ (written ⟨ch⟩ in OHG).
- OE æppel : OHG apful, afful (English apple, Dutch appel, Low German Appel : German Apfel)
- OE scearp : OHG scarpf, scarf (English sharp, Dutch scherp, Low German scharp : German scharf)
- OE catt : OHG kazza (English cat, Dutch kat, Low German Katt : German Katze)
- OE tam : OHG zam (English tame, Dutch tam, Low German tamm : German zahm)
- OE liccian : OHG leckōn (English lick, Dutch likken, Low German licken, German lecken : High Alemannic lekchen, schlecke/schläcke /ˈʃlɛkxə, ˈʃlækxə/)
- OE weorc : OHG werc, werah (English work, Dutch werk, Low German Wark, German Werk : High Alemannic Werch/Wärch)
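The phase 2 environments can be encoded the same way. Again this is a simplified sketch under stated assumptions, not from the source: the inputs are schematic West Germanic shapes, the OHG spellings ⟨pf, z, ch⟩ stand in for the affricates, and single letters stand in for segments.

```python
AFFRICATES = {"p": "pf", "t": "z", "k": "ch"}  # OHG spellings of /p͡f t͡s k͡x/
SONORANTS = set("lrmn")
FRICATIVES = set("sfh")                        # block the shift: sp, st, sk, ft, ht

def phase2(word: str) -> str:
    """Second phase: /p t k/ > affricates word-initially, when geminated,
    or after a sonorant; blocked after a fricative and in /tr/.
    (/k/ > /k͡x/ in fact applied only in the southernmost dialects.)"""
    out, i = [], 0
    while i < len(word):
        ch = word[i]
        if ch in AFFRICATES:
            geminate = word[i:i + 2] == ch * 2
            env_ok = i == 0 or geminate or word[i - 1] in SONORANTS
            blocked = (i > 0 and word[i - 1] in FRICATIVES) \
                or (ch == "t" and word[i + 1:i + 2] == "r")
            if env_ok and not blocked:
                out.append(AFFRICATES[ch])
                i += 2 if geminate else 1
                continue
        out.append(ch)
        i += 1
    return "".join(out)

for form in ["appul", "tam", "skarp", "spar", "treu"]:
    print(f"{form:6} -> {phase2(form)}")
# appul -> apful, tam -> zam, skarp -> skarpf (p after /r/ shifts, sk- does not);
# spar and treu stay (blocked after a fricative / in the cluster /tr/)
```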
The shift did not take place where the plosive was preceded by a fricative, i.e. in the combinations /sp, st, sk, ft, ht/. /t/ also remained unshifted in the combination /tr/.
- OE spearwa : OHG sparo (English sparrow, Dutch spreeuw, German Sperling)
- OE mæst : OHG mast (English mast, Dutch mast, Low German Mast, German Mast(baum))
- OE niht : OHG naht (English night, Dutch nacht, Low German Nacht, German Nacht)
- OE trēowe : OHG (gi)triuwi (English true, Dutch (ge)trouw, Low German trü, German treu; the cognates mean "trustworthy","faithful", not "correct","truthful".)
Following /r/ also prevented the shift of /t/ in words which end in -ter in modern Standard German, e.g. bitter, Winter. These stems had /tr/ in OHG inflected forms (bittr-, wintr-).
For the subsequent change of /sk/ > /ʃ/, written ⟨sch⟩, see below.
These affricates (especially /p͡f/) have simplified into fricatives in some dialects. /p͡f/ was simplified to /f/ in a number of circumstances. In Yiddish and some German dialects this occurred in initial positions, e.g., Dutch paard: German Pferd : Yiddish ferd 'horse'. In modern standard German, the pronunciation /f/ for word-initial ⟨pf⟩ is also a very common feature of northern and Central German accents (i.e. in regions where /p͡f/ does not occur in the native dialects; compare German phonology).
There was an even stronger tendency to simplify /p͡f/ after /r/ and /l/. This simplification is also reflected in modern standard German, e.g. werfen 'to throw' ← OHG werfan ← *werpfan, helfen 'to help' ← OHG helfan ← *helpfan. Only one standard word with /rp͡f/ remains: Karpfen 'carp' ← OHG karpfo.
- The shift of /t/ > /t͡s/ occurs throughout the High German area, and is reflected in Modern Standard German.
- The shift of /p/ > /p͡f/ occurs throughout Upper German, but there is wide variation in Central German dialects. Most West Central German dialects are unaffected by the shift (cf. Luxembourgish Päerd ~ Standard German Pferd). In the Rhine Franconian dialects, the further north the dialect, the fewer environments show shifted consonants. In East Central German, the clusters -pp- and -mp- remained untouched. The shift /p/ > /p͡f/ is reflected in standard German, but there are many exceptions to it, i.e. forms adopted with Central or Low German consonantism (Krüppel, Pacht, Schuppen, Tümpel etc.). Moreover, this affricate is infrequent in word-initial position: fewer than 40 word stems with pf- are used in contemporary standard German, mostly early borrowings from Latin. This rareness is partly due to the fact that word-initial p- was virtually absent in Proto-Germanic. Note, however, that the Upper German dialects have many more such words and that they have used pf- productively, which is not the case in standard German.
- The shift of /k/ > /k͡x/ is today geographically highly restricted and seen only in the southernmost Upper German dialects. In mediaeval times, it was much more widespread (almost throughout Upper German), but was later "undone" from the north southward. Tyrolese, the Southern Austro-Bavarian dialect of Tyrol, is the only dialect in which the affricate /k͡x/ has been preserved in all positions, e.g. Cimbrian khòan [ˈk͡xoːən] 'not any' (cf. Germ kein). In High Alemannic, only the geminate is preserved as an affricate, whereas in the other positions, /kx/ has been simplified to /x/, e.g. HAlem chleubä 'to adhere, stick' (cf. Germ kleben). Initial /k͡x/ does occur to a certain extent in modern High Alemannic in place of any k in loanwords, e.g. [k͡xariˈb̥ik͡x] 'Caribbean' (?), and /k͡x/ occurs where ge- + [x], e.g. Gchnorz [k͡xno(ː)rt͡s] 'laborious work', from the verb chnorze.
The third phase, which had the most limited geographical range, saw the voiced plosives become voiceless.
- b > p
- d > t
- g > k
Of these, only the dental shift d > t universally finds its way into standard German (though with relatively many exceptions, partly due to Low and Central German influence). The other two occur in standard German only in original geminates, e.g. Rippe, Brücke vs. Dutch rib, brug "rib, bridge". For single consonants, b > p and g > k are restricted to High Alemannic German in Switzerland, and south Bavarian dialects in Austria. This shift probably began in the 8th or 9th century, after the first and second phases ceased to be productive, otherwise the resulting voiceless plosives would have shifted further to fricatives and affricates.
In those words in which an Indo-European voiceless plosive became voiced as a result of Verner's Law, phase three of the High German shift returns this to its original value (*t > d > t):
- PIE *meh₂tḗr
- > early Proto-Germanic *māþḗr (t > /θ/ by the First Germanic Consonant Shift)
- > late Proto-Germanic *mōđēr (/θ/ > /ð/ by Verner's Law)
- > West-Germanic *mōdar (/ð/ > d by West Germanic sound change)
- > Old High German muotar (d > t by the Second Germanic Consonant Shift)
- OE dōn : OHG tuon (English do, Dutch doen, Low German doon : German tun)
- OE mōdor : OHG muotar (English mother, Dutch moeder, Low German Modder, Mudder : German Mutter)
- OE rēad : OHG rōt (English red, Dutch rood, Low German root : German rot)
- OE biddan : OHG bitten or pitten (English bid, Dutch bidden, Low German bidden : German bitten, Bavarian pitten)
The combination -nd- was shifted to -nt- only in some varieties of OHG. Written OHG normally has shifted -nt- (e.g. bintan "to bind"), but in Middle High German and modern standard German the unshifted pronunciation /nd/ prevails (cf. binden). (Although in OHG both fintan and findan "to find" are encountered, these represent earlier forms *findan and *finþan, respectively; note the corresponding alternation in Old Saxon findan and fīþan. In this case, *finþan corresponds to original Proto-Germanic *finþaną while *findan is a later, specifically West Germanic, form, created by analogy with the Verner's Law alternant *fund-, as in Proto-Germanic *fundun "they found", *fundanaz "found".)
Noteworthy exceptions are modern hinter, munter and unter, for which however Middle High German preferred hinder, munder, under. (As all of these three words end in -nter, the modern unvoiced pronunciation might be caused by analogy with Winter, whose -t- stems from original Germanic /t/ unshifted before /r/.) In other cases, modern -nt- is due to the later loss of a vowel (e.g. Ente from OHG enita) or borrowing (e.g. Kante from Low German).
Other consonant changes on the way from West Germanic to Old High German are included under the heading "High German consonant shift" by some scholars who see the term as a description of the whole context, but are excluded by others who reserve it for the neat threefold chain shift. Although it might be possible to see /ð/ > /d/, /ɣ/ > /ɡ/ and /v/ > /b/ as a similar group of three, both the chronology and the differing phonetic conditions under which these changes occur speak against such a grouping.
/θ/ > /d/ (Phase 4)
What is sometimes known as the fourth phase shifted the dental fricatives to plosives. This shift occurred late enough that unshifted forms are to be found in the earliest Old High German texts, and thus it can be dated to the 9th or 10th century. This shift spread much further north than the others, eventually reaching all continental West Germanic languages (hence excluding only English). It is therefore not "High German" although it is often grouped together with the other shifts as it did spread from the same area. The shift took several centuries to spread north, appearing in Dutch only during the 12th century, and in Frisian and Low German not for another century or two after that.
In early Old High German, as in Old Dutch and Old Saxon, the voiceless and voiced dental fricatives [θ] and [ð] stood in allophonic relationship (as did /f/, /v/ and /s/, /z/), with [θ] in final position and [ð] used initially and medially. The sound [ð] then became /d/, while [θ] became /t/. In Old Frisian, the voiceless fricatives were only voiced medially, and remained voiceless initially except in some pronouns and determiners, much as in Old and Modern English. Thus, modern Frisian varieties have /t/ word-initially in most words, and /d/ medially.
- early OHG thaz > classical OHG daz (English that, Icelandic það : Dutch dat, German das, West Frisian dat)
- early OHG thenken > classical OHG denken (English think : Dutch denken, German denken, West Frisian tinke)
- early OHG thegan > classical OHG degan (English thane : Dutch degen, German Degen "warrior", West Frisian teie)
- early OHG thurstag > classical OHG durstag (English thirsty : Dutch dorstig, German durstig, West Frisian toarstig, Swedish törstig)
- early OHG bruothar/bruodhar > classical OHG bruodar (English brother, Icelandic bróðir : Dutch broeder, German Bruder, West Frisian broer)
- early OHG munth > classical OHG mund (English mouth, Old Norse múðr : Dutch mond, German Mund)
- early OHG thou/thu > classical OHG dū, du (English thou, Icelandic þú : Low German dü, German du, West Frisian do)
In dialects affected by phase 4 but not by the dental variety of phase 3, that is, Low German, Central German, and Dutch, two Germanic phonemes merged: þ becomes d, but original Germanic d remains unchanged:
| | German | Low German | English |
|---|---|---|---|
| original /θ/ (> /d/ in German and Low German) | Tod | dod | death |
| original /d/ (> /t/ in German) | tot | dod | dead |
A peculiar development took place in stems which had the onsets dw- and tw- in OHG. They merged in MHG tw- and were later joined to the large group of stems with initial zw-. Those stems therefore appear to have undergone the High German consonant shift several times, e.g. modern zwingen ("to force") < MHG twingen < OHG dwingan < Germanic *þwengan.
In 1955, Otto Höfler suggested that a change analogous to the fourth phase of the High German consonant shift may have taken place in Gothic (East Germanic) as early as the 3rd century AD, and he hypothesised that it may have spread from Gothic to High German as a result of the Visigothic migrations westward (c. 375–500 AD). This has not found wide acceptance; the modern consensus is that Höfler misinterpreted some sound substitutions of Romanic languages as Germanic, and that East Germanic shows no sign of the second consonant shift.
Most dialects of Norwegian and Swedish show a shift much like the one in Frisian, with /ð/ > /d/ and /θ/ > /t/. This shift reached Swedish only around the 16th century, as the Gustav Vasa Bible of 1541 still shows the dental fricatives (spelled ⟨th⟩). It may be part of the same development as in the West Germanic languages, or it may have occurred independently. Danish, which lies geographically between West Germanic and Swedish/Norwegian, would have had to experience the shift first before it could spread further north. However, Danish does not form a dialect continuum with the West Germanic languages, and the shift occurred only word-initially in Danish, which retains /ð/ medially. On the other hand, Danish exhibits widespread lenition, including word-medial shifts from plosives to fricatives and further to approximants, so it is conceivable that these changes counteracted an earlier hardening of the dental fricatives that had reached Danish from the south (initially /ð/ > /d/, followed by lenition /d/ > /ð/), but only after the hardening had propagated further north to the remaining Scandinavian dialects.
/β/ > /b/
West Germanic *ƀ (presumably pronounced [β]), which was an allophone of /b/ used in medial position, shifted to (Upper German) Old High German /b/ between two vowels, and also after /l/. Unshifted languages retained a fricative, which became /v/ between vowels and /f/ in coda position.
- OE lēof : OHG liob, liup (obs. English †lief, Dutch lief, Low German leev : German lieb)
- OE hæfen : MHG habe(ne) (English haven, Dutch haven, Low German Haven; for German Hafen see below)
- OE half : OHG halb (English half, Dutch half, Low German halv : German halb)
- OE lifer : OHG libara, lebra (English liver, Dutch lever, Low German Läver : German Leber)
- OE selfa : OHG selbo (English self, Dutch zelf, Low German sülve : German selbe)
- OE sealf : OHG salba (English salve, Dutch zalf, Low German Salv : German Salbe)
In strong verbs such as German heben 'heave' and geben 'give', the shift contributed to eliminating the [β] forms in German, but a full account of these verbs is complicated by the effects of grammatischer Wechsel by which [β] and [b] appear in alternation in different parts of the same verb in the early forms of the languages. In the case of weak verbs such as haben 'have' (cf. Dutch hebben) and leben 'live' (cf. Dutch leven), the consonant differences have an unrelated origin, being a result of the West Germanic gemination and a subsequent process of levelling.
This shift, too, is only partly complete in Central German, with Ripuarian and Moselle Franconian retaining a fricative pronunciation: for example, Colognian hä läv, Luxembourgish hie lieft, "he lives".
/ð/ > /d/
The Proto-Germanic voiced dental fricative [ð], which was an allophone of /d/ in certain positions, became a plosive [d] in all positions throughout the West Germanic languages. Thus, it affected High German, Low German, Dutch, Frisian and Old English alike. It did not spread to Old Norse, which retained the original fricative. Because of its much wider spread, it must have occurred very early, during Northwest Germanic times, perhaps around the 2nd century.
English has partially reversed this shift through the change /dər/ > /ðər/, for example in father, mother, gather and together.
In phase 3 of the High German consonant shift, this /d/ was shifted to /t/, as described above.
/ɣ/ > /ɡ/
The West Germanic voiced velar fricative [ɣ] shifted to [ɡ] in Upper German dialects of Old High German in all positions. This change is believed to be early and complete by the 8th century at the latest. Since the existence of a /ɡ/ was necessary for the south German shift /ɡ/ > /k/, this must at least predate phase 3 of the core High German consonant shift.
The same change occurred independently in Anglo-Frisian (c. 10th century for Old English, as suggested by changing patterns of alliteration), except when preceding or following a front vowel, where the sound had earlier undergone Anglo-Frisian palatalisation and ended up as /j/. Dutch has retained the original /ɣ/, despite the fact that it is spelled with ⟨g⟩, which renders it indistinguishable in writing from its counterparts in other languages.
- Dutch goed /ɣut/ : German gut /ɡuːt/, English good /ɡʊd/
- Dutch gisteren /ˈɣɪstərə(n)/ : German gestern /ˈɡɛstɐn/ : English yesterday /ˈjɛstədeɪ/, West Frisian juster /ˈjɵstər/
The shift is only partly complete in Central German. Most Central German dialects have fricative pronunciation for ⟨g⟩ between vowels (/ʒ/, /ʝ/, /j/, /ʁ/) and in coda position (/ʃ/, /ç/, /x/). Ripuarian has /j/ word-initially, e.g. Colognian jood /joːt/ "good".
In standard German, fricative ⟨g⟩ is found in coda position in unstressed -ig (selig /ˈzeːlɪç/ "blessed" but feminine selige /ˈzeːlɪɡǝ/). One will still very frequently hear fricative ⟨g⟩ in coda position in other cases as well in standard German as pronounced by people from northern and central Germany. For example, Tag and Weg are often pronounced /tax/ and /veːç/. Compare German phonology. This pronunciation reaches as far south as Franconia, thus into Upper German areas.
/s/ > /ʃ/
High German experienced the shift /sk/ > /ʃ/ in all positions, and /s/ > /ʃ/ before another consonant in initial position (original /s/ may in fact have been apical [s̺], as OHG and MHG distinguish it from the reflex /t/ > /s/, spelled ⟨z⟩ or ⟨ȥ⟩ and presumed to be laminal [s̻]):
- German Schrift, script
- German Flasche, flask
- German spinnen (/ʃp/), spin
- German Straße (/ʃt/), street
- German Schlaf, sleep
- German Schmied, smith
- German Schnee, snow
- German Schwan, swan
Additionally, /rs/ became /rʃ/, except before /t/.
The /sk/ > /ʃ/ shift occurred in most West Germanic dialects but notably not in Dutch, which instead had /sk/ > /sx/. The two other changes did not reach any further than Limburgish and some southern dialects of Low German:
- Limburgian sjpinne /ˈʃpɪnə/, sjtraot /ʃtʁɔːt/, sjrif /ʃʁɪf/
- Dutch spinnen /ˈspɪnə(n)/, straat /straːt/, schrift /sxrɪft/ (although note that Dutch /s/ is usually apical).
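These rules are mechanical enough to state as a small phoneme-level sketch. It is an illustration only: ʃ stands for the shifted sibilant, the schematic inputs are assumptions, and the kirsa/Kirsche pair is an added example not taken from the source.

```python
import re

def s_shift(word: str) -> str:
    """Sketch of the three rules: /sk/ > /ʃ/ everywhere; word-initial /s/ > /ʃ/
    before a consonant; /rs/ > /rʃ/ except before /t/."""
    word = word.replace("sk", "ʃ")
    word = re.sub(r"^s(?=[^aeiou])", "ʃ", word)   # initial s + consonant
    word = re.sub(r"rs(?!t)", "rʃ", word)         # rs, unless t follows
    return word

for form in ["skrift", "slaf", "spinnen", "strat", "kirsa", "erst"]:
    print(f"{form:7} -> {s_shift(form)}")
# skrift -> ʃrift, slaf -> ʃlaf, spinnen -> ʃpinnen, strat -> ʃtrat,
# kirsa -> kirʃa (cf. Kirsche 'cherry'), erst stays (rs before t)
```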
Other changes include a general tendency towards terminal devoicing in German and Dutch, and to a far more limited extent in English. Thus, in German and Dutch, /b/, /d/ and /ɡ/ (German), /ɣ/ (Dutch) at the end of a word are pronounced identically to /p/, /t/ and /k/ (German), /x/ (Dutch). The ⟨g⟩ in German Tag [taːk] (day) is pronounced as ck in English tack, not as ⟨g⟩ in English tag. However, this change is not High German in origin but is generally thought to have originated in Frankish, as the earliest evidence for the change appears in Old Dutch texts at a time when there was still no sign of devoicing at all in Old High German or Old Saxon.
Nevertheless, the original voiced consonants are usually represented in modern German and Dutch spelling. This is because related inflected forms, such as the plural Tage [ˈtaːɡə], have the voiced form, since here the plosive is not terminal. As a result of these inflected forms, native speakers remain aware of the underlying voiced phoneme, and spell accordingly. However, in Middle High German, these sounds were spelled differently: singular tac, plural tage.
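A minimal sketch of this devoicing rule, using orthographic strings to stand in for phonemes (illustration only, under the assumption that only the plosives b, d, g are modelled):

```python
DEVOICE = {"b": "p", "d": "t", "g": "k"}

def final_devoice(word: str) -> str:
    """Final-obstruent devoicing: a voiced plosive is realized
    voiceless at the end of a word; the spelling is left unchanged."""
    if word and word[-1] in DEVOICE:
        return word[:-1] + DEVOICE[word[-1]]
    return word

print(final_devoice("tag"))   # -> 'tak'  (cf. Tag [taːk])
print(final_devoice("tage"))  # -> 'tage' (plural: plosive not final, stays voiced)
```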
Since, apart from þ > d, the High German consonant shift took place before the beginning of writing of Old High German in the 9th century, the dating of the various phases is an uncertain business. The estimates quoted here are mostly taken from the dtv-Atlas zur deutschen Sprache (p. 63). Different estimates appear elsewhere, for example Waterman, who asserts that the first three phases occurred fairly close together and were complete in Alemannic territory by 600, taking another two or three centuries to spread north.
Sometimes historical constellations help us; for example, the fact that Attila is called Etzel in German proves that the second phase must have been productive after the Hunnish invasion of the 5th century. The fact that many Latin loan-words are shifted in German (e.g., Latin strata > German Straße), while others are not (e.g., Latin poena > German Pein) allows us to date the sound changes before or after the likely period of borrowing. However the most useful source of chronological data is German words cited in Latin texts of the late classical and early medieval period.
Precise dating would in any case be difficult, since each shift may have begun with one word or a group of words in the speech of one locality, and gradually extended by lexical diffusion to all words with the same phonological pattern, and then over a longer period of time spread to wider geographical areas.
However, relative chronology for phases 2, 3, and 4 can easily be established by the observation that t > tz must precede d > t, which in turn must precede þ > d; otherwise words with an original þ could have undergone all three shifts and ended up as tz. By contrast, as the form kepan for "give" is attested in Old Bavarian, showing both /ɣ/ > /ɡ/ > /k/ and /β/ > /b/ > /p/, it follows that /ɣ/ > /ɡ/ and /β/ > /b/ must predate phase 3.
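This ordering argument can be made concrete by applying the three rewrites in historical order and in reverse. The following toy sketch is an assumption-laden illustration (each phase reduced to a single initial-consonant substitution, with ⟨z⟩ standing for /t͡s/), not a statement of the actual mechanism:

```python
PHASES = [("t", "z"), ("d", "t"), ("þ", "d")]  # historical order: phases 2, 3, 4

def apply_shifts(rules, onset):
    for source, target in rules:
        if onset == source:
            onset = target
    return onset

for c in "tdþ":
    print(c, "->", apply_shifts(PHASES, c))
# t -> z (/t͡s/), d -> t, þ -> d: three distinct outcomes, as attested

print("þ ->", apply_shifts(list(reversed(PHASES)), "þ"))
# þ -> z: in the reverse order, original þ would have ended up as ⟨z⟩ /t͡s/,
# which did not happen; hence the relative chronology.
```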
Alternative chronologies have been proposed. According to a theory by the controversial German linguist Theo Vennemann, the consonant shift occurred much earlier and was already completed in the early 1st century BC. On this basis, he subdivides the Germanic languages into High Germanic and Low Germanic. Apart from Vennemann, few other linguists share this view.
Dialects and isoglosses of the Rhenish Fan, arranged from north to south (isoglosses in bold, dialects in italics):

| Isogloss / dialect | Northern form | Southern form |
|---|---|---|
| **Unity plural line** | wi maakt | wi maken |
| **Uerdingen line** (Uerdingen) | ik | ich |
| **Benrath line** (Boundary: Low German — Central German) | maken | machen |
| *Ripuarian Franconian* (Cologne, Bonn, Aachen) | | |
| **Bad Honnef line** (State border NRW-RP) (Eifel-Schranke) | | |
| *West Mosel Franconian* (Luxemburgish, Trier) | | |
| **Linz line** (Linz am Rhein) | tussen | zwischen |
| **Bad Hönningen line** | op | auf |
| *East Mosel Franconian* (Koblenz, Saarland) | | |
| **Boppard line** (Boppard) | Korf | Korb |
| **Sankt Goar line** (Sankt Goar) | | |
| *Rhenish Franconian* (Pfälzisch, Frankfurt) | | |
| **Speyer line** (River Main line) (Boundary: Central German — Upper German) | Appel | Apfel |
Roughly, the changes resulting from phase 1 affected Upper and Central German, as did the dental element of phase 2 (t- > z-). The other elements of phase 2 and all of phase 3 impacted only Upper German, while those changes from phase 4 affected the entire German and Dutch-speaking region (i.e. the so-called West Germanic dialect continuum). The generally accepted boundary between Central and Low German, the maken-machen line, is sometimes called the Benrath line, as it passes through the Düsseldorf suburb of Benrath, while the main boundary between Central and Upper German, the Appel-Apfel line can be called the Speyer line, as it passes near the town of Speyer, some 200 kilometers further south.
However, a precise description of the geographical extent of the changes is far more complex. Not only do the individual sound shifts within a phase vary in their distribution (phase 3, for example, partly affects the whole of Upper German and partly only the southernmost dialects within Upper German), but there are even slight variations from word to word in the distribution of the same consonant shift. For example, the ik-ich line lies further north than the maken-machen line in western Germany, coincides with it in central Germany, and lies further south at its eastern end, although both demonstrate the same shift /k/ > /x/.
The subdivision of West Central German into a series of dialects, according to the differing extent of the phase 1 shifts, is particularly pronounced. It is known as the Rhenish fan (German: Rheinischer Fächer, Dutch: Rijnlandse waaier) because on the map of dialect boundaries, the lines form a fan shape. Here, no fewer than eight isoglosses run roughly west to east and partially merge into a simpler system of boundaries in East Central German. The table above lists the isoglosses (bold) and the main resulting dialects (italics), arranged from north to south.
Some of the consonant shifts resulting from the second and third phases appear also to be observable in Lombardic, the early mediaeval Germanic language of northern Italy, which is preserved in runic fragments of the late 6th and early 7th centuries. However, the Lombardic records are not sufficient to allow a complete taxonomy of the language. It is therefore uncertain whether the language experienced the full shift or merely sporadic reflexes, but b > p is clearly attested. This may mean that the shift began in Italy, or that it spread southwards as well as northwards. Ernst Schwarz and others have suggested that the shift occurred in German as a result of contacts with Lombardic. If, in fact, there is a relationship here, the evidence of Lombardic would force us to conclude that the third phase must have begun by the late 6th century, rather earlier than most estimates, but this would not necessarily require that it had spread to German so early.
If, as some scholars believe, Lombardic was an East Germanic language and not part of the German language dialect continuum, it is possible that parallel shifts took place independently in German and Lombardic. However, the extant words in Lombardic show clear relations to Bavarian. Therefore, Werner Betz and others prefer to treat Lombardic as an Old High German dialect. There were close connections between Lombards and Proto-Bavarians: the Lombards settled in the 'Tullner Feld' (about 50 km west of Vienna) until 568, but it is evident that not all Lombards went to Italy after that time; the rest seem to have become part of the then newly formed Bavarian groups.
When Columban came to the Alamanni at Lake Constance shortly after 600, he made barrels, called cupa (cf. English cup, German Kufe), burst, according to Jonas of Bobbio (writing before 650) in Lombardy. This shows that in Columban's time the shift from p to f had occurred in neither Alemannic nor Lombardic. But the Edictus Rothari (643; surviving manuscript after 650) attests the forms grapworf ('throwing a corpse out of the grave'; cf. German Wurf and Grab), marhworf ('a horse', OHG marh, 'throws off the rider'), and many similar shifted examples. So it is best to see the consonant shift as a common Lombardic-Bavarian-Alemannic shift between 620 and 640, when these tribes had plenty of contact.
As an example of the effects of the shift one may compare the following texts from the later Middle Ages, on the left a Middle Low German citation from the Sachsenspiegel (1220), which does not show the shift, and on the right the same text from the Middle High German Deutschenspiegel (1274), which shows the shifted consonants; both are standard legal texts of the period.
| Sachsenspiegel (II,45,3) | Deutschenspiegel (Landrecht 283) |
|---|---|
| De man is ok vormunde sines wives, / to hant alse se eme getruwet is. / Dat wif is ok des mannes notinne / to hant alse se in sin bedde trit, / na des mannes dode is se ledich van des mannes rechte. | Der man ist auch vormunt sînes wîbes / zehant als si im getriuwet ist. / Daz wîp ist auch des mannes genôzinne / zehant als si an sîn bette trit / nâch des mannes rechte. |
- Sachsenspiegel: "The man is also guardian of his wife / as soon as she is married to him. / The wife is also the man's companion / as soon as she goes to his bed / After the man's death she is free of the man's rights."
- Deutschenspiegel: "The man is also guardian of his wife / as soon as she is married to him. / The wife is also the man's companion / as soon as she goes to his bed / according to a man's rights."
Irregularities in modern Standard German
The High German consonant shift, at least as far as the core group of changes is concerned, is an example of an exceptionless sound change and was frequently cited as such by the Neogrammarians. Modern standard German is a compromise form between East Central German and northern Upper German, mainly based on the former but with the consonant pattern of the latter. However, individual words from all German dialects and varieties have found their way into the standard. When a German word contains unshifted consonants, it is usually considered to be a loanword from either Low German or, less often, Central German. Either the shifted form has become obsolete, as in:
- Hafen "harbor", from Low German (15th century), replacing Middle High German habe(ne);
- Pacht "lease", from West Central German, replacing Middle High German pfaht;
or the two forms are retained as doublets, as in:
- Wappen "coat of arms", from Low German, alongside native Waffe "weapon";
- sich kloppen "to fight", from either Low German or Central German, alongside native klopfen "to knock".
Many unshifted words are borrowed from Low German:
- Hafer "oat" (vs. Swiss, Austrian Haber); Lippe "lip" (vs. Lefze "animal lip"); Pegel "water level"; Pickel "pimple"
However, the majority of unshifted words in German are loaned from Latin, Romance, English or Slavic:
- Paar "pair, couple" (← Medieval Latin pār), Peitsche "whip" (← Old Sorbian/Czech bič).
Other ostensible irregularities in the sound shift, which we may notice in modern Standard German, are usually clarified by checking the etymology of an individual word. Possible reasons include the following:
- Onomatopoeia (cf. German babbeln ~ English to babble, which were probably formed independently from each other);
- Later developments after the High German sound shift, especially the elimination of some unstressed vowels. For example, Dutch kerk and German Kirche ("church") seem to indicate an irregular shift -rk- > -rch- (compare regular German Mark, stark, Werk). However, Kirche stems from OHG kirihha (Greek kyrikē) with a vowel after /r/ (which makes the shift perfectly regular). Similarly, the shifted form Milch ("milk") was miluh or milih in OHG, but the unshifted melken ("to milk") never had a vowel after /l/.
- Certain irregular variations between voiced and unvoiced consonants, especially [d] and [t], in Middle High German (active several centuries after the shift). Thereby OHG dūsunt became modern tausend ("thousand"), as if it had been shifted twice. Conversely, and more often, the shift was apparently undone in some words: PG *dunstaz > OHG tunst > back again to modern Dunst ("haze, vapour"; cognate with English dust). In this latter case, it is sometimes difficult to determine whether re-voicing was a native Middle High German development or due to Low German influence. (Often, both factors have collaborated to establish the voiced variant.)
- Glottalic theory
- Low Dietsch dialects
- The Tuscan gorgia, a similar evolution differentiating the Tuscan dialects from Standard Italian.
- See also Fausto Cercignani, The Consonants of German: Synchrony and Diachrony, Milano, Cisalpino, 1979.
- Scholars who restrict the term "High German Consonant Shift" to the core group include Braune/Reiffenstein, Chambers & Wilkie, von Kienle, Wright (1907), and Voyles (1992). Those who include other changes as part of the shift or who treat them as connected with it include Penzl (1975), dtv-Atlas, Keller, Moser/Wellmann/Wolf, and Wells.
- Scholars who make a two-fold analysis include Bach, Braune/Reiffenstein, Eggers, Gerh. Wolff, Keller, Moser/Wellmann/Wolf, Penzl (1971 & 1975), Russ, Sonderegger (1979), von Kienle, Voyles (1992), and Wright (1907). Scholars who distinguish three phases include Chambers & Wilkie, dtv-Atlas, Waterman, and Wells.
- See the definition of "high" in the Oxford English Dictionary (Concise Edition): "... situated far above ground, sealevel, etc; upper, inland, as ... High German".
- Recent work suggests that future scholars may analyse German dialects in new ways, which will have consequences also for the understanding of the shift. Schwerdt (2000) has argued that the name 'High German consonant shift' is misleading and perhaps even inappropriate, as it does not adequately reflect the areal discrepancies of the individual changes undergone by the affected West Germanic dialects.
- Concise Oxford Dictionary of English Etymology, TF Hoad (Ed)
- As a general rule, Low German, Dutch, and German have all undergone final-obstruent devoicing so that the modern reflexes are all pronounced with final /t/ regardless of spelling.
- Manlio & Michele Cortelazzo, L'etimologico minore 2003, p. 929f.
- Otto Höfler, Die zweite Lautverschiebung bei Ostgermanen und Westgermanen, Beiträge zur Geschichte der deutschen Sprache und Literatur 77 (Tübingen 1955)
- B. Mees, "The Bergakker inscription and the beginnings of Dutch", in: Amsterdamer Beiträge zur älteren Germanistik, Band 56, 2002, edited by Erika Langbroek, Annelies Roeleveld, Paula Vermeyden, Arend Quak. Rodopi, 2002, ISBN 90-420-1579-9, ISBN 978-90-420-1579-1
- Vennemann, Theo (1994): "Dating the division between High and Low Germanic. A summary of arguments". In: Mørck, E./Swan, T./Jansen, O.J. (eds.): Language change and language structure. Older Germanic languages in a comparative perspective. Berlin/New York: 271–303.
- The table of isoglosses is adapted from Rheinischer Fächer on the German Wikipedia.
- Rheinischer Fächer – Karte des Landschaftsverband Rheinland Archived February 15, 2009 at the Wayback Machine
- The sample texts have been copied over from Lautverschiebung on the German Wikipedia.
- Dates of sound shifts are taken from the dtv-Atlas zur deutschen Sprache (p. 63).
- Waterman, John C. (1991). A History of the German Language (revised 1976 edition). Long Grove, IL: Waveland Press (by arrangement with University of Washington Press). p. 284. ISBN 0-88133-590-8.
- Friedrich Kluge (revised Elmar Seebold), Etymologisches Wörterbuch der deutschen Sprache (The Etymological Dictionary of the German Language), 24th edition, 2002.
- Paul/Wiehl/Grosse, Mittelhochdeutsche Grammatik (Middle-High German Grammar), 23rd ed, Tübingen 1989, 114–22.
- Fausto Cercignani, The Consonants of German: Synchrony and Diachrony, Milano, Cisalpino, 1979.
- Philippe Marcq & Thérèse Robin, Linguistique historique de l'allemand, Paris, 1997.
- Robert S. P. Beekes, Vergelijkende taalwetenschap, Utrecht, 1990.
- Schwerdt, Judith (2000). Die 2. Lautverschiebung: Wege zu ihrer Erforschung. Heidelberg: Carl Winter. ISBN 3-8253-1018-3. | https://en.wikipedia.org/wiki/High_German_consonant_shift |
4.125 |
The coelom is the body cavity that runs the length of the trunk in some organisms. It originates by the splitting of the mesoderm (the middle layer of three-layered organisms, the triploblasts) during early embryonic development, and afterwards lies internal to it. Filled with coelomic fluid, it separates the gut from the body wall and serves mainly to absorb shock, improve circulation and provide rigidity. Organisms that have a coelom are complex in structure and higher in taxonomic order; they are known as coelomates. Organisms that lack a coelom are usually primitive in origin and are called acoelomates. In certain species a coelom existed in the ancestor but was lost at some stage in the course of evolution, or persists only as a false coelom (pseudocoelom). Human beings are eucoelomates, which means they have a true coelom. Lying internal to the mesodermal wall, the coelom surrounds the body tract of humans and is divided into three parts: where it surrounds the heart it is called the pericardial cavity, where it surrounds the lungs it is the pleural cavity, and where it surrounds the digestive organs it is called the peritoneal cavity.
Also, describe two adaptive benefits of having a coelom.
| http://www.enotes.com/homework-help/what-coelom-where-coelom-human-418223 |
4.125 |
The cardinal virtues are a set of four virtues recognized in the writings of Classical Antiquity and, along with the theological virtues, in Christian tradition. They consist of the following qualities:
- Prudence (φρόνησις, phronēsis; Latin: prudentia): also described as wisdom, the ability to judge the appropriate action to take at a given time
- Justice (δικαιοσύνη, dikaiosynē; Latin: iustitia): also considered as fairness, the most extensive and most important virtue
- Temperance (σωφροσύνη, sōphrosynē; Latin: temperantia): also known as restraint, the practice of self-control, abstention, and moderation tempering the appetition
- Courage (ἀνδρεία, andreia; Latin: fortitudo): also termed fortitude, forbearance, strength, endurance, and the ability to confront fear, uncertainty, and intimidation
These virtues derive initially from Plato's scheme, discussed in Republic Book IV, 426-435 (and see Protagoras 330b, which also includes piety (hosiotes)). Cicero expanded on them, and Saint Ambrose, Augustine of Hippo, and Thomas Aquinas adapted them.
The term "cardinal" comes from the Latin cardo (hinge); the cardinal virtues are so called because they are regarded as the basic virtues required for a virtuous life. They also relate to the Quadrivium.
In Classical Antiquity
The four cardinal virtues appear as a group (sometimes included in larger lists) long before they are later given this title.
Plato identified the four cardinal virtues with the classes of the city described in The Republic, and with the faculties of man. Plato narrates a discussion of the character of a good city where the following is agreed upon. “Clearly, then, it will be wise, brave, temperate [literally: healthy-minded], and just.” (427e; see also 435b) Temperance was common to all classes, but primarily associated with the producing classes, the farmers and craftsmen, and with the animal appetites, to whom no special virtue was assigned; fortitude was assigned to the warrior class and to the spirited element in man; prudence to the rulers and to reason. Justice stands outside the class system and divisions of man, and rules the proper relationship among the three of them.
In Aristotle's Rhetoric we read: “The forms of Virtue are justice, courage, temperance, magnificence, magnanimity, liberality, gentleness, prudence, wisdom.” (Rhetoric 1366b1)
The Roman philosopher and statesman Cicero (106-43 BC), like Plato, limits the list to four virtues:
“Virtue may be defined as a habit of mind (animi) in harmony with reason and the order of nature. It has four parts: wisdom (prudentiam), justice, courage, temperance.” (De Inventione, II, LIII )
Cicero discusses these further in De Officiis (I, V and following).
The cardinal virtues are listed in the Bible. The deuterocanonical book Wisdom of Solomon 8:7 reads, "She [Wisdom] teaches temperance, and prudence, and justice, and fortitude, which are such things as men can have nothing more profitable in life."
They are also found in the Biblical apocrypha. 4 Maccabees 1:18-19 relates: “Now the kinds of wisdom are right judgment, justice, courage, and self-control. Right judgment is supreme over all of these since by means of it reason rules over the emotions.”
Catholic moral philosophy drew from all of these sources when developing its reflections on the virtues.
In Christian tradition
St. Ambrose (330s-397 AD) was the first to use the expression "cardinal virtues": "And we know that there are four cardinal virtues: temperance, justice, prudence, fortitude." (Commentary on Luke, V, 62)
St. Augustine, discussing the morals of the church, described them:
"For these four virtues (would that all felt their influence in their minds as they have their names in their mouths!), I should have no hesitation in defining them: that temperance is love giving itself entirely to that which is loved; fortitude is love readily bearing all things for the sake of the loved object; justice is love serving only the loved object, and therefore ruling rightly; prudence is love distinguishing with sagacity between what hinders it and what helps it." (De moribus eccl., Chap. xv)
Relationship to the theological virtues
The "cardinal" virtues are not the same as the three theological virtues: faith, hope, and charity / love (see 1 Corinthians 13). Together, they comprise what is known as the seven virtues, also known as the theological virtues. While history suggests that the first four date back to Greek philosophers and were applicable to all people seeking to live moral lives, the theological virtues appear to be specific to Christians as written by Paul in The New Testament.
In the Book of Genesis (28:10-22) Jacob describes his vision of a ladder or stairway leading to heaven. In oral tradition, the three principal rungs on the ladder were denominated Faith, Hope and Love. (The King James Version of the Bible uses "charity," but "charity" was derived from caritas, or "love.") These three are mentioned in 1 Corinthians 13: And now these three remain: faith, hope and love. But the greatest of these is love. Because of this reference, the seven attributes are sometimes grouped as four cardinal virtues (prudence, temperance, fortitude, justice) and three heavenly graces (faith, hope, charity).
Efforts to relate the cardinal and theological virtues differ. St. Augustine sees faith as coming under justice. Beginning with a wry comment about the moral mischief of pagan deities, he writes:
"They [the pagans] have made Virtue also a goddess, which, indeed, if it could be a goddess, had been preferable to many. And now, because it is not a goddess, but a gift of God, let it be obtained by prayer from Him, by whom alone it can be given, and the whole crowd of false gods vanishes. For as much as they have thought proper to distribute virtue into four divisions--prudence, justice, fortitude, and temperance--and as each of these divisions has its own virtues, faith is among the parts of justice, and has the chief place with as many of us as know what that saying means, ‘The just shall live by faith.’" (City of God, IV, 20)
Replacement of the virtues
Jesuit scholars Daniel Harrington and James Keenan find the four cardinal virtues in need of replacement by new ones for the Church. The reasons they give are:
- "contemporary writers repeatedly express dissatisfaction with the insufficiency of justice".
- "the modern era insists that moral dilemmas are not based on the simple opposition of good and evil but, more frequently, on the clash of goods – thus a constellation of heuristic guides that already resolves the priority of one virtue over another by which a preconceived hierarchical structure preempts realism"
- "the primary identity of being human is not as an individual with powers needing perfection, but as a relational rational being whose modes of relationality need to be made virtuous or to be rightly realized"
In their place, they suggest the three Bible virtues of Faith, Hope, and Love. Additionally, they add: be humble, be hospitable, be merciful, be faithful, reconcile, be vigilant, and be reliable. This makes ten new virtues to replace the four cardinal virtues.
Depictions of the virtues
The Cardinal Virtues are often depicted as female allegorical figures and were a popular subject for funerary sculpture. The attributes and names of these figures may vary according to local tradition.
In many churches and artwork the Cardinal Virtues are depicted with symbolic items:
- Justice – sword, balance and scales, and a crown
- Temperance – wheel, bridle and reins, vegetables and fish, cup, water and wine in two jugs
- Fortitude – armor, club, with a lion, palm, tower, yoke, broken column
- Prudence – book, scroll, mirror (occasionally attacked by a serpent)
Iustitia (justice), Fortitudo (fortitude), Prudentia (prudence), Temperantia (temperance)
Allegories of the virtues on the facade of the Gesuati church in Venice (1737)
Allegories of the virtues on the facade of La Rochelle city hall
- "Cardinal Virtues of Plato, Augustine and Confucius". theplatonist.com.
- Summa Theologica II(I).61
- Harper, Douglas. "cardinal". Online Etymology Dictionary.
- "Cicero: de Inventione II". thelatinlibrary.com.
- Harrington, Daniel; Keenan, James (2010). Paul and Virtue Ethics. Lanham, MD: Rowman and Littlefield Publishers. p. 9.
- Harrington, Daniel; Keenan, James (2010). Paul and Virtue Ethics. Lanham, MD: Rowman and Littlefield Publishers. pp. 125–126.
- St. Ambrose, "On the Duties of the Clergy" Book 1, chapter 24 (paragraph 115) and following
- St. Augustine, "Of the Morals of the Catholic Church"
Wikisource has the text of the 1911 Encyclopædia Britannica article Cardinal Virtues.
- John Rickaby (1913). "Cardinal Virtues". Catholic Encyclopedia. New York: Robert Appleton Company.
- Seven Virtues (atheism.com)
- Cardinal Virtues according to Aquinas (New Advent) | https://en.wikipedia.org/wiki/Cardinal_virtues |
4.125 | Japanese History/World War II
- 1 Japanese Preparations for War
- 2 Declaration of War
- 3 Initial Japanese Successes
- 4 The Tide Turns
- 5 End of World War II
- 6 Japan's Home Front
Japanese Preparations for War
In an effort to discourage Japanese militarism, Western powers including Australia, the United States, Britain, and the Dutch government in exile, which controlled the petroleum-rich Netherlands East Indies, stopped selling iron ore, steel and oil to Japan, denying it the raw materials needed to continue its aggressive activities in China and French Indochina. In Japan, the government and nationalists viewed these embargoes as acts of aggression; imported oil made up about 80% of domestic consumption, without which Japan's economy, let alone its military, would grind to a halt. The Japanese media, influenced by military propagandists, began to refer to the embargoes as the "ABCD" ("American-British-Chinese-Dutch") encirclement, or "ABCD line".
Faced with a choice between economic collapse and withdrawal from its recent conquests (with its attendant loss of face), the Japanese Imperial General Headquarters began planning for a war with the western powers in April or May 1941.
The key objective was for the Southern Expeditionary Army Group to seize economic resources under the control of the United Kingdom and the Netherlands, most notably those in Malaya and the Netherlands East Indies, in what was known as the "Southern Plan". Because of the close relationship between the UK and the United States, and the (mistaken) belief that the US would inevitably become involved, it was decided that Japan would also require an "eastern plan".
The eastern plan required
- initial attacks on the US Pacific Fleet at Pearl Harbor, Hawaii, with carrier-based aircraft of the Combined Fleet, and
- following this attack with
- seizure of the Philippines, and
- cutting the U.S. lines of communication by seizing Guam and Wake Island.
The southern plans called for:
- attacking Malaya and Hong Kong, and
- following with attacks against
- the Bismarck Archipelago,
- Java, and
- isolating Australia and New Zealand
Following completion of these objectives, the strategy would turn defensive, primarily holding their newly acquired territory while hoping for a negotiated peace.
By November these plans were essentially complete, and were modified only slightly over the next month. Japanese military planners' expectation of success rested on the United Kingdom and the Soviet Union being unable to respond effectively to a Japanese attack because of the threat posed to each by Germany; the Soviet Union was even seen as unlikely to commence hostilities. There is no evidence that the Japanese planned to defeat the United States; the intention was rather to negotiate a peace after their initial victories. In fact, the Imperial GHQ noted that, should acceptable negotiations be reached with the Americans, the attacks were to be canceled, even if the order to attack had already been given.
They also planned, should the U.S. transfer its Pacific Fleet to the Philippines, to intercept and attack this fleet en route with the Combined Fleet, in keeping with all Japanese Navy prewar planning and doctrine.
Should the United States or Britain attack first, the plans further stipulated the military were to hold their positions and wait for orders from GHQ. The planners noted attacking the Philippines and Malaya still had possibilities of success, even in the worst case of a combined preemptive attack including Soviet forces.
Declaration of War
On 22 June 1941 the Wehrmacht of Nazi Germany invaded Soviet Russia in Operation Barbarossa. Japan had signed a neutrality pact with the Soviet Union, and there was little chance that the Soviets would attack Japan (although the Soviets themselves later broke the pact). This freed the Japanese armed forces to expand southwards.
On 7 December 1941 the Japanese attacked Pearl Harbor, their mission being to destroy the American Pacific Fleet. The idea was to emulate the Japanese attack on Port Arthur. They succeeded in sinking many American battleships, but crucially they failed to destroy the repair facilities and the carriers of the American Pacific Fleet. This meant much of the American fleet was back in action relatively quickly. The carriers proved much the more decisive weapon in the war as a whole, eventually rendering battleships obsolete. This can be seen with the Japanese super-battleships: these ships had massive guns, yet they were destroyed by aircraft, mainly launched from carriers. Finally, many of the battleships at Pearl Harbor had been due to be scuttled, and the Japanese attack may just have saved them from the scrapyard.
Initial Japanese Successes
British, Australian and Dutch forces, already drained of personnel and material by two years of war with Germany, and heavily committed in the Middle East, North Africa and elsewhere, were unable to provide much more than token resistance to the battle-hardened Japanese. The Allies suffered many disastrous defeats in the first six months of the war. Two major British warships, HMS Repulse and HMS Prince of Wales were sunk by a Japanese air attack off Malaya on 10 December 1941.
Thailand, with its territory already serving as a springboard for the Malayan campaign, surrendered within 24 hours of the Japanese invasion. The government of Thailand formally allied itself with Japan on 21 December.
Hong Kong was attacked on 8 December and fell on 25 December 1941, with Canadian forces and the Royal Hong Kong Volunteers playing an important part in the defense. U.S. bases on Guam and Wake Island were lost at around the same time.
Following the 1 January 1942 Declaration by United Nations (the first official use of the term United Nations), the Allied governments appointed the British General Sir Archibald Wavell to the American-British-Dutch-Australian Command (ABDACOM), a supreme command for Allied forces in South East Asia. This gave Wavell nominal control of a huge force, albeit thinly-spread over an area from Burma to the Philippines to northern Australia. Other areas, including India, Hawaii and the rest of Australia remained under separate local commands. On 15 January Wavell moved to Bandung in Java to assume control of ABDA Command (ABDACOM). In January, Japan invaded Burma, the Dutch East Indies, New Guinea, the Solomon Islands and captured Manila, Kuala Lumpur and Rabaul. After being driven out of Malaya, Allied forces in Singapore attempted to resist the Japanese during the Battle of Singapore but surrendered to the Japanese on 15 February 1942; about 130,000 Indian, British, Australian and Dutch personnel became prisoners of war. The pace of Japanese conquest was rapid: Bali and Timor also fell in February. The rapid collapse of Allied resistance had left the "ABDA area" split in two. Wavell resigned from ABDACOM on 25 February, handing control of the ABDA Area to local commanders and returning to the post of Commander-in-Chief, India.
Meanwhile, Japanese aircraft had all but eliminated Allied air power in South-East Asia and were making attacks on northern Australia, beginning with a psychologically devastating (but militarily insignificant) attack on the city of Darwin on 19 February, which killed at least 243 people.
At the Battle of the Java Sea in late February and early March, the Japanese Navy inflicted a resounding defeat on the main ABDA naval force, under Admiral Karel Doorman. The Netherlands East Indies campaign subsequently ended with the surrender of Allied forces on Java.
In March and April, a raid into the Indian Ocean by a powerful Imperial Japanese Navy aircraft carrier force resulted in a wave of major air raids against Ceylon and the sinking of a British aircraft carrier, HMS Hermes as well as other Allied ships and driving the British fleet out of the Indian Ocean. This paved the way for a Japanese assault on Burma and India.
The British, under intense pressure, made a fighting retreat from Rangoon to the Indo-Burmese border. This cut the Burma Road which was the western Allies' supply line to the Chinese Nationalists. Cooperation between the Chinese Nationalists and the Communists had waned from its zenith at the Battle of Wuhan, and the relationship between the two had gone sour as both attempted to expand their area of operations in occupied territories. Most of the Nationalist guerrilla areas were eventually overtaken by the Communists. On the other hand, some Nationalist units were deployed to blockade the Communists and not the Japanese. Furthermore, many of the forces of the Chinese Nationalists were warlords allied to Chiang Kai-Shek, but not directly under his command. "Of the 1,200,000 troops under Chiang's control, only 650,000 were directly controlled by his generals, and another 550,000 controlled by warlords who claimed loyalty to his government; the strongest force was the Szechuan army of 320,000 men. The defeat of this army would do much to end Chiang's power." The Japanese exploited this lack of unity to press ahead in their offensives.
Filipino and U.S. forces resisted in the Philippines until 8 May 1942, when more than 80,000 soldiers were ordered to surrender. By this time, General Douglas MacArthur, who had been appointed Supreme Allied Commander South West Pacific, had retreated to the safer confines of Australia. The U.S. Navy, under Admiral Chester Nimitz, had responsibility for the rest of the Pacific Ocean. This divided command had unfortunate consequences for the commerce war, and consequently, the war itself.
The Tide Turns
The Battle of the Coral Sea
By mid-1942, the Japanese Combined Fleet found itself holding a vast area, even though it lacked the aircraft carriers, aircraft, and aircrew to defend it, and the freighters, tankers, and destroyers necessary to sustain it. Moreover, Fleet doctrine was inadequate to execute the proposed "barrier" defence. Instead, they decided on additional attacks in both the south and central Pacific. While Yamamoto had used the element of surprise at Pearl Harbor, Allied codebreakers now turned the tables. They discovered an attack against Port Moresby, New Guinea, was imminent, with the intent to invade and conquer all of New Guinea. If Port Moresby fell, it would give Japan control of the seas to the immediate north of Australia. Nimitz rushed the carrier USS Lexington, under Admiral Fletcher, to join USS Yorktown and an American-Australian task force, with orders to contest the Japanese advance. The resulting Battle of the Coral Sea was the first naval battle in which the ships involved never sighted each other and aircraft alone were used to attack the opposing forces. Although Lexington was sunk and Yorktown seriously damaged, the Japanese lost the aircraft carrier Shōhō, suffered extensive damage to the aircraft carrier Shōkaku and heavy losses to the air wing of the aircraft carrier Zuikaku (both missed the operation against Midway the following month), and saw the Moresby invasion force turn back. Even though Allied losses were heavier than Japanese, the attack on Port Moresby was thwarted and the invasion forces turned back, yielding a strategic victory for the Allies. Moreover, Japan lacked the capacity to replace its losses in ships, planes and trained pilots. Destruction of the U.S. carriers was Yamamoto's main objective, and he planned an operation to lure them to battle. After Coral Sea he had four fleet carriers operational (Sōryū, Kaga, Akagi and Hiryū) and believed Nimitz had a maximum of two (USS Enterprise and USS Hornet). USS Saratoga (CV-3) was out of action, undergoing repair after a torpedo attack, while Yorktown sailed after three days' work to repair her flight deck and make essential repairs, with civilian work crews still aboard.
The Battle of Midway
A large Japanese force was sent north to attack the Aleutian Islands, off Alaska. The next stage of Yamamoto's plan called for the capture of Midway Atoll, which would give him an opportunity to destroy Nimitz's remaining carriers; afterward, the atoll would be turned into a major airbase, giving Japan control of the central Pacific. In May, Allied codebreakers discovered his intentions. Nagumo was again in tactical command but was focused on the invasion of Midway; Yamamoto's complex plan had no provision for intervention by Nimitz before the Japanese expected him. Planned surveillance of the U.S. fleet by long-range seaplanes did not happen (as a result of an identical operation having been aborted in March), so the U.S. carriers were able to proceed to a flanking position on the approaching Japanese fleet without being detected. Nagumo had 272 planes operating from his four carriers; the U.S. had 348 (of which 115 were land-based).
As anticipated by U.S. commanders, the Japanese fleet arrived off Midway on 4 June and was spotted by PBY patrol aircraft. Nagumo executed a first strike against Midway, while Fletcher launched his aircraft, bound for Nagumo's carriers. At 09:20 the first U.S. carrier aircraft arrived, TBD Devastator torpedo bombers from Hornet, but their attacks were poorly coordinated and ineffectual; they failed to score a single hit, and Zero fighters shot down all 15. At 09:35, 15 TBDs from Enterprise skimmed in over the water; 14 were shot down by Zeros. Fletcher's attacks had been disorganized, yet they succeeded in distracting Nagumo's defensive fighters. When U.S. dive bombers arrived, the Zeros could not offer any protection. In addition, Nagumo's four carriers had drifted out of formation, reducing the concentration of their anti-aircraft fire. His most-criticized error was twice changing his arming orders: he first held aircraft armed for shipping attack as a hedge against the discovery of U.S. carriers, changed this based on reports that an additional strike was needed against Midway, then changed it again after sighting Yorktown, wasting time and leaving his hangar decks crowded with refueling and rearming aircraft, and ordnance stowed outside the magazines. Yamamoto's dispositions, which left Nagumo with inadequate reconnaissance to detect (and therefore attack) Fletcher before he launched, are often ignored.
When SBD Dauntlesses from Enterprise and Yorktown appeared at an altitude of 10,000 ft, the Zeros at sea level were unable to respond before the bombers pushed over. They scored a number of significant hits; Sōryū, Kaga, and Akagi all caught fire. Hiryū survived this wave of attacks and launched a counterattack against the American carriers, causing severe damage to Yorktown (which was later finished off by a Japanese submarine). A second attack from the U.S. carriers a few hours later found and destroyed Hiryū. Yamamoto had four additional small carriers, assigned to his scattered surface forces, all too slow to keep up with the Kido Butai and therefore never in action. Yamamoto's enormous superiority in gun power was irrelevant, as the U.S. had air superiority at Midway and could refuse a surface gunfight (which, by remarkable good fortune, Spruance moved to avoid, based on a faulty submarine report); Yamamoto's flawed dispositions had made closing to engage after dark on 4 June impossible. Midway was a decisive victory for the U.S. Navy and marked the high point of Japanese expansion in the Pacific. The Japanese had reached that point after merely a year of war, but the process of recapturing the land they had so quickly conquered would be no easy task.
New Guinea and the Solomons
Japanese land forces continued to advance in the Solomon Islands and New Guinea. From July 1942, a few Australian reserve battalions, many of them very young and untrained, fought a stubborn rearguard action in New Guinea, against a Japanese advance along the Kokoda Track, towards Port Moresby, over the rugged Owen Stanley Ranges. The militia, worn out and severely depleted by casualties, were relieved in late August by regular troops from the Second Australian Imperial Force, returning from action in the Mediterranean theater.
[Photo caption: US Marines rest in the field during the Guadalcanal campaign in November 1942.]
In early September 1942, Japanese marines attacked a strategic Royal Australian Air Force base at Milne Bay, near the eastern tip of New Guinea. They were beaten back by the Australian Army, which inflicted the first outright defeat on Japanese land forces since 1939.
At the same time as major battles raged in New Guinea, Allied forces identified a Japanese airfield under construction at Guadalcanal. In August, 16,000 Allied infantry—primarily US Marines—made an amphibious landing to capture the airfield. Japanese and Allied forces occupied various parts of the island. Over the following six months, both sides fed resources into an escalating battle of attrition on the island, at sea, and in the sky. Most of the Japanese aircraft in the South Pacific were drawn into the defence of Guadalcanal, facing Allied air forces based at Henderson Field. Japanese ground forces launched attacks on US positions around Henderson Field, suffering high casualties. These offensives were resupplied by Japanese convoys, known to the Allies as the "Tokyo Express", which often faced night battles with the Allied navies and expended destroyers the IJN could ill afford to lose. Later fleet battles involving heavier ships, and even daytime carrier battles, resulted in a stretch of water near Guadalcanal becoming known as "Ironbottom Sound", from the severe losses to both sides. However, only the US Navy could quickly replace and repair its losses. The Allies were victorious on Guadalcanal in February 1943.
End of World War II
Emperor Hirohito announced Japan's surrender on national radio six days after the atomic bombing of Nagasaki. The two atomic bombs and the Soviet declaration of war caused the surrender, or at least contributed to it.
Fearing that acceptance of unconditional surrender would mean the deposition of the monarchy, Japan had been attempting to get the USSR to mediate a peace deal between the USA and Japan. However, the USSR delayed any decision, having been discussing with the other Allies a possible Soviet declaration of war against Japan. After the defeat of Germany, many Soviet troops were moved to East Asia, where they invaded Manchuria. This may have been a large reason for the surrender of Japan.
Many people wanted to keep fighting the war; however, the Shōwa emperor broadcast an order to surrender. Even so, holdouts were found for decades afterwards, some as late as 1974, reflecting the then-widespread Japanese ideal of never surrendering.
The war officially ended with the Treaty of San Francisco, which was signed by 48 nations in 1951.
Japan's Home Front
Strict rationing was imposed, as Japan depended on imports for much of its food.
Bombing of Japan
The United States' strategic bombing of Japan took place between 1942 and 1945. In the last seven months of the campaign, a change to firebombing tactics resulted in the widespread destruction of 67 Japanese cities, as many as 500,000 Japanese deaths, and some 5 million more people made homeless. Emperor Hirohito's viewing of the destroyed areas of Tokyo in March 1945 is said to have been the beginning of his personal involvement in the peace process, culminating in Japan's surrender five months later.
Women were not utilised in industry and were not drafted; rather, they were expected to raise good families.
Plan for a Last Defence
A lack of weapons meant that people were issued with bamboo spears. | https://en.m.wikibooks.org/wiki/Japanese_History/World_War_II
4.28125 |
The Babylonian captivity or Babylonian exile is the period in Jewish history during which a number of Judahites of the ancient Kingdom of Judah were captives in Babylonia. After the Battle of Carchemish in 605 BCE, Nebuchadnezzar, the king of Babylon, besieged Jerusalem, resulting in tribute being paid by King Jehoiakim. Jehoiakim refused to pay tribute in Nebuchadnezzar's fourth year, which led to another siege in Nebuchadnezzar's seventh year, culminating with the death of Jehoiakim and the exile of King Jeconiah, his court and many others; Jeconiah's successor Zedekiah and others were exiled in Nebuchadnezzar's eighteenth year; a later deportation occurred in Nebuchadnezzar's twenty-third year. The dates, numbers of deportations, and numbers of deportees given in the biblical accounts vary. These deportations are dated to 597 BCE for the first, with others dated at 587/586 BCE, and 582/581 BCE respectively.
After the fall of Babylon to the Persian king Cyrus the Great in 539 BCE, exiled Jews began to return to the land of Judah. According to the biblical book of Ezra, construction of a second temple in Jerusalem began at this time. All these events are considered significant in Jewish history and culture, and had a far-reaching impact on the development of Judaism.
Archaeological studies have revealed that not all of the population of Judah was deported, and that, although Jerusalem was utterly destroyed, other parts of Judah continued to be inhabited during the period of the exile. The return of the exiles was a gradual process rather than a single event, and many of the deportees or their descendants did not return.
Biblical accounts of the exile
In the late 7th century BCE, the kingdom of Judah was a client state of the Assyrian empire. In the last decades of the century, Assyria was overthrown by Babylon, an Assyrian province. Egypt, fearing the sudden rise of the Neo-Babylonian empire, seized control of Assyrian territory up to the Euphrates river in Syria, but Babylon counter-attacked. In the process, Josiah, the king of Judah, was killed by the Egyptians at the Battle of Megiddo (609 BCE).
After the defeat of Pharaoh Necho's army by the Babylonians at Carchemish in 605 BCE, Jehoiakim began paying tribute to Nebuchadnezzar II of Babylon. Some of the young nobility of Judah (such as Daniel, Shadrach, Meshach, and Abednego) were taken to Babylon.
In the following years, the court of Jerusalem was divided into two parties, one in support of Egypt and the other of Babylon. After Nebuchadnezzar was defeated in battle by Egypt in 601 BCE, Judah revolted against Babylon, culminating in a three-month siege of Jerusalem beginning in late 598 BCE. Jehoiakim, the king of Judah, died during the siege and was succeeded by his son Jehoiachin (also called Jeconiah) at the age of eighteen. The city fell on 2 Adar (March 16) 597 BCE, and Nebuchadnezzar pillaged Jerusalem and its Temple and took Jeconiah, his court and other prominent citizens (including the prophet Ezekiel) back to Babylon. Jehoiakim's uncle Zedekiah was appointed king in his place, but the exiles in Babylon continued to consider Jeconiah as their Exilarch, or rightful ruler.
Despite warnings by Jeremiah and others of the pro-Babylonian party, Zedekiah revolted against Babylon and entered into an alliance with Pharaoh Hophra. Nebuchadnezzar returned, defeated the Egyptians, and again besieged Jerusalem, resulting in the city's destruction in 587 BCE. Nebuchadnezzar destroyed the city wall and the Temple, together with the houses of the most important citizens. Zedekiah and his sons were captured; the sons were executed in front of Zedekiah, who was then blinded and taken to Babylon with many others (Jer 52:10–11). Judah became a Babylonian province, called Yehud Medinata (Judah Province), putting an end to the independent Kingdom of Judah. (Because of the missing years in the Jewish calendar, rabbinic sources place the date of the destruction of the First Temple at 3338 HC (423 BCE) or 3358 HC (403 BCE).)
The first governor appointed by Babylon was Gedaliah, a native Judahite; he encouraged the many Jews who had fled to surrounding countries such as Moab, Ammon and Edom to return, and took steps to return the country to prosperity. Some time later, a surviving member of the royal family assassinated Gedaliah and his Babylonian advisors, prompting many refugees to seek safety in Egypt. By the end of the second decade of the 6th century, in addition to those who remained in Judah, there were significant Jewish communities in Babylon and in Egypt; this was the beginning of the later numerous Jewish communities living permanently outside Judah in the Jewish Diaspora.
According to the book of Ezra, the Persian Cyrus the Great ended the exile in 538 BCE, the year after he captured Babylon. The exile ended with the return under Zerubbabel the Prince (so-called because he was a descendant of the royal line of David) and Joshua the Priest (a descendant of the line of the former High Priests of the Temple) and their construction of the Second Temple in the period 521–516 BCE.
Archaeological and other non-Biblical evidence
Nebuchadnezzar's siege of Jerusalem, his capture of King Jeconiah, his appointment of Zedekiah in his place, and the plundering of the city in 597 BCE, as described in 2 Kings in the Bible, are corroborated by a passage in the Babylonian Chronicles:
"In the seventh year, in the month of Kislev, the king of Akkad mustered his troops, marched to the Hatti-land, and encamped against the City of Judah and on the ninth day of the month of Adar he seized the city and captured the king. He appointed there a king of his own choice and taking heavy tribute brought it back to Babylon."
Jehoiachin's Rations Tablets, describing ration orders for a captive King of Judah, identified with King Jeconiah, have been discovered during excavations in Babylon, in the royal archives of Nebuchadnezzar. One of the tablets refers to food rations for "Ya’u-kīnu, king of the land of Yahudu" and five royal princes, his sons.
Nebuchadnezzar and the Babylonian forces returned in 588/586 BCE and rampaged through Judah, leaving clear archaeological evidence of destruction in many towns and settlements there. Clay ostraca from this period, referred to as the Lachish letters, were discovered during excavations; one, which was probably written to the commander at Lachish from an outlying base, describes how the signal fires from nearby towns are disappearing: "And may (my lord) be apprised that we are watching for the fire signals of Lachish according to all the signs which my lord has given, because we cannot see Azeqah." This correlates with the book of Jeremiah, which states that Jerusalem, Lachish, and Azekah were the last cities to fall to the Babylonians. Archaeological finds from Jerusalem testify that virtually the whole city within the walls was burnt to rubble in 587 BCE and utterly destroyed.
The biblical books of 2 Kings and Jeremiah give varying numbers of exiles forcibly deported to Babylon, and at one time it was widely believed that virtually the entire population was taken into captivity there. Archaeological excavations and surveys, however, have enabled the population of Judah before the Babylonian destruction to be calculated with a high degree of confidence to have been approximately 75,000. Taking the different biblical numbers of exiles at their highest, 20,000, this would mean that at most 25% of the population had been deported to Babylon, with the remaining 75% staying in Judah. Although Jerusalem was destroyed and depopulated, with large parts of the city remaining in ruins for 150 years, numerous other settlements in Judah continued to be inhabited, with no signs of disruption visible in archaeological studies.
The biblical book of Ezra includes two texts said to be decrees of Cyrus the Great, conqueror of the Neo-Babylonian Empire, allowing the deported Jews to return to their homeland after decades and ordering the Temple rebuilt. The differences in content and tone of the two decrees, one in Hebrew and one in Aramaic, have caused some scholars to question their authenticity. The Cyrus Cylinder, an ancient tablet on which is written a declaration in the name of Cyrus referring to restoration of temples and repatriation of exiled peoples, has often been taken as corroboration of the authenticity of the biblical decrees attributed to Cyrus, but other scholars point out that the cylinder's text is specific to Babylon and Mesopotamia and makes no mention of Judah or Jerusalem. Professor Lester L. Grabbe asserted that the "alleged decree of Cyrus" regarding Judah "cannot be considered authentic", but that there was a "general policy of allowing deportees to return and to re-establish cult sites". He also stated that archaeology suggests that the return was a "trickle" taking place over decades, rather than a single event.
As part of the Persian Empire, the former Kingdom of Judah became the province of Judah (Yehûd medîntā') with different borders, covering a smaller territory. The population of the province was greatly reduced from that of the kingdom, archaeological surveys showing a population of around 30,000 people in the 5th to 4th centuries BCE.
An exhibition in Jerusalem has on display over 100 cuneiform tablets that detail trade in fruits and other commodities, taxes, debts, and credits accumulated by Jews who were driven from, or convinced to move from, Jerusalem by King Nebuchadnezzar around 600 BCE. They include details on one exiled Judean family over four generations, all with Biblical Hebrew names, many still in use today.
Exilic literature and post-exilic revisions of the Torah/Pentateuch
The exilic period was a rich one for Hebrew literature. Biblical depictions of the exile include Book of Jeremiah 39–43 (which saw the exile as a lost opportunity); the final section of 2 Kings (which portrays it as the temporary end of history); 2 Chronicles (in which the exile is the "Sabbath of the land"); and the opening chapters of Ezra, which records its end. Other works from or about the exile include the stories in Daniel 1–6, Susanna, Bel and the Dragon, the "Story of the Three Youths" (1 Esdras 3:1–5:6), and the books of Tobit and Book of Judith.
The Priestly source, one of the four main sources of the Torah/Pentateuch in the Bible, is primarily a product of the post-exilic period, when the former Kingdom of Judah had become the Persian province of Yehud. Also during this Persian period, the final redaction of the Pentateuch took place.
Significance in Jewish history
In the Hebrew Bible, the captivity in Babylon is presented as a punishment for idolatry and disobedience to Yahweh in a similar way to the presentation of Israelite slavery in Egypt followed by deliverance. The Babylonian Captivity had a number of serious effects on Judaism and Jewish culture. For example, the current Hebrew alphabet was adopted during this period, replacing the Paleo-Hebrew alphabet.
This period saw the last high-point of biblical prophecy in the person of Ezekiel, followed by the emergence of the central role of the Torah in Jewish life. According to many historical-critical scholars, the Torah was altered during this time, and began to be regarded as the authoritative text for Jews. This period saw their transformation into an ethno-religious group who could survive without a central Temple.
This process coincided with the emergence of scribes and sages as Jewish leaders (see Ezra). Prior to exile, the people of Israel had been organized according to tribe. Afterwards, they were organized by smaller family groups. Only the tribe of Levi continued in its temple role after the return. After this time, there were always sizable numbers of Jews living outside Eretz Israel; thus, it also marks the beginning of the "Jewish diaspora", unless this is considered to have begun with the Assyrian Captivity of Israel.
In Rabbinic literature, Babylon was one of a number of metaphors for the Jewish diaspora. Most frequently the term "Babylon" meant the diaspora prior to the destruction of the Second Temple. The post-destruction term for the Jewish Diaspora was "Rome", or "Edom".
The following table is based on Rainer Albertz's work on Israel in exile. (Alternative dates are possible.)
| Year | Event |
| --- | --- |
| 609 BCE | Death of Josiah |
| 609–598 BCE | Reign of Jehoiakim (succeeded Jehoahaz, who replaced Josiah but reigned only 3 months). Began giving tribute to Nebuchadnezzar in 605 BCE. First deportation, including Daniel. |
| 598/7 BCE | Reign of Jehoiachin (reigned 3 months). Siege and fall of Jerusalem. Second deportation, 16 March 597 BCE. |
| 597 BCE | Zedekiah made king of Judah by Nebuchadnezzar II of Babylon. |
| 594 BCE | Anti-Babylonian conspiracy. |
| 588 BCE | Siege and fall of Jerusalem. Solomon's Temple destroyed. Third deportation, July/August 587 BCE. |
| 583 BCE | Gedaliah, the Babylonian-appointed governor of Yehud Province, assassinated. Many Jews flee to Egypt; a possible fourth deportation to Babylon. |
| 562 BCE | Release of Jehoiachin after 37 years in a Babylonian prison; he remains in Babylon. |
| 539 BCE | Persians conquer Babylon (October). |
| 538 BCE | Decree of Cyrus allows Jews to return to Jerusalem. |
| 520–515 BCE | Return by many Jews to Yehud under Zerubbabel and Joshua the High Priest. Foundations of Second Temple laid. |
- Avignon Papacy, sometimes called the "Babylonian Captivity of the Papacy".
- Coogan, Michael (2009). A Brief Introduction to the Old Testament. Oxford: Oxford University Press.
- Moore, Megan Bishop; Kelle, Brad E. (2011). Biblical History and Israel's Past: The Changing Study of the Bible and History. Wm. B. Eerdmans Publishing. pp. 357–358. ISBN 0802862608. Retrieved 11 June 2015.
Overall, the difficulty in calculation arises because the biblical texts provide varying numbers for the different deportations. The HB/OT’s conflicting figures for the dates, number, and victims of the Babylonian deportations become even more of a problem for historical reconstruction because, other than the brief reference to the first capture of Jerusalem (597) in the Babylonian Chronicle, historians have only the biblical sources with which to work.
- Dunn, James G.; Rogerston, John William (2003). Eerdmans Commentary on the Bible. Wm. B. Eerdmans Publishing. p. 545. ISBN 978-0-8028-3711-0.
- Ephraim Stern (November–December 2000). "The Babylonian Gap". Biblical Archaeology Review 26 (6).
From 604 BCE to 538 BCE—there is a complete gap in evidence suggesting occupation. ... I do not mean to imply that the country was uninhabited during the period between the Babylonian destruction and the Persian period. There were undoubtedly some settlements, but the population was very small. Many towns and villages were either completely or partly destroyed. The rest were barely functioning. International trade virtually ceased. Only two regions appear to have been spared this fate—the northern part of Judah (the region of Benjamin) and probably the land of Ammon, although the latter region awaits further investigation.
- Geoffrey Wigoder, The Illustrated Dictionary & Concordance of the Bible Pub. by Sterling Publishing Company, Inc. (2006)
- Dan Cohn-Sherbok, The Hebrew Bible, Continuum International, 1996, page x. ISBN 0-304-33703-X
- 2Kings 24:6–8
- Philip J. King, Jeremiah: An Archaeological Companion (Westminster John Knox Press, 1993), page 23.
- The Oxford History of the Biblical World, ed. by Michael D Coogan. Pub. by Oxford University Press, 1999. pg 350
- Yehud being the Babylonian equivalent of the Hebrew Yehuda, or "Judah", and "medinata" the word for province
- Rashi to Talmud Bavli, Avodah Zarah p. 9a. Josephus, Seder Hadoroth year 3338
- Malbim to Ezekiel 24:1, Abarbanel et al.
- "Second Temple Period (538 BCE. to 70 CE) Persian Rule". Biu.ac.il. Retrieved 2014-03-15.
- Harper's Bible Dictionary, ed. by Achtemeier, etc., Harper & Row, San Francisco, 1985, p.103
- Finkelstein, Israel; Silberman, Neil Asher (2001). The Bible Unearthed: Archaeology's New Vision of Ancient Israel and the Origin of Its Sacred Texts. Simon and Schuster. ISBN 978-0-684-86912-4.
- Thomas, David Winton (1958). Documents from Old Testament Times (1961 ed.). Edinburgh and London: Thomas Nelson. p. 84.
- Cf. 2Kings 24:12, 24:15–24:16, 25:27–25:30; 2Chronicles 36:9–36:10; Jeremiah 22:24–22:6, 29:2, 52:31–52:34; Ezekiel 17:12.
- "Babylonian Ration List: King Jehoiakhin in Exile, 592/1 BCE". COJS.org. The Center for Online Judaic Studies. Retrieved 23 August 2013.
Ya’u-kīnu, king of the land of Yahudu
- Translation from Aḥituv, Shmuel. Echoes from the Past. Jerusalem: CARTA Jerusalem, 2008, pg. 70.
- Jeremiah 34:7
- Bedford, Peter Ross (2001). Temple Restoration in Early Achaemenid Judah. Leiden: Brill. p. 112 (Cyrus edict section pp. 111–131). ISBN 9789004115095.
- Becking, Bob (2006). ""We All Returned as One!": Critical Notes on the Myth of the Mass Return". In Lipschitz, Oded; Oeming, Manfred. Judah and the Judeans in the Persian Period. Winona Lake, IN: Eisenbrauns. p. 8. ISBN 978-1-57506-104-7.
- Grabbe, Lester L. (2004). A History of the Jews and Judaism in the Second Temple Period: Yehud - A History of the Persian Province of Judah v. 1. T & T Clark. p. 355. ISBN 978-0567089984.
- Rainer Albertz, Israel in exile: the history and literature of the sixth century BCE (page 15 link) Society for Biblical Literature, 2003, pp.4–38
- Blum, Erhard (1998). "Issues and Problems in the Contemporary Debate Regarding the Priestly Writings". In Sarah Shectman, Joel S. Baden. The strata of the priestly writings: contemporary debate and future directions. Theologischer Verlag. pp. 32–33.
- A Concise History of the Jewish People | Naomi E. Pasachoff, Robert J. Littma | Rowman & Littlefield, 2005 | pg 43
- Rainer Albertz, Israel in exile: the history and literature of the sixth century BCE, p.xxi.
- 2 Kings 25:27
- Yehud Medinata map, CET – Center For Educational technology
- Yehud Medinata Border map, CET – Center For Educational technology
- Peter R. Ackroyd, "Exile and Restoration: A Study of Hebrew Thought of the Sixth Century B.C." (SCM Press, 1968)
- Rainer Albertz, Bob Becking, "Yahwism after the Exile" Van Gorcum, 2003)
- Blenkinsopp, Joseph, "Judaism, the first phase: the place of Ezra and Nehemiah in the origins of Judaism" (Eerdmans, 2009)
- Nodet, Étienne, "A search for the origins of Judaism: from Joshua to the Mishnah" (Sheffield Academic Press, 1999, original edition Editions du Cerf, 1997)
- Becking, Bob, and Korpel, Marjo Christina Annette (eds), "The Crisis of Israelite Religion: Transformation of Religious Tradition in Exilic & Post-Exilic Times" (Brill, 1999)
- Bedford, Peter Ross, "Temple restoration in early Achaemenid Judah" (Brill, 2001)
- Berquist, Jon L., "Approaching Yehud: new approaches to the study of the Persian period" (Society of Biblical Literature, 2007)
- Grabbe, Lester L., "A history of the Jews and Judaism in the Second Temple Period", vol.1 (T&T Clark International, 2004)
- Levine, Lee I., "Jerusalem: portrait of the city in the second Temple period (538 B.C.E.-70 C.E.)" (Jewish Publication Society, 2002)
- Lipschitz, Oded, "The Fall and Rise of Jerusalem" (Eisenbrauns, 2005)
- Lipschitz, Oded, and Oeming, Manfred (eds), "Judah and the Judeans in the Persian period" (Eisenbrauns, 2006)
- Lipschitz, Oded, and Oeming, Manfred (eds), "Judah and the Judeans in the fourth century B.C.E." (Eisenbrauns, 2006)
- Middlemas, Jill Anne, "The troubles of templeless Judah" (Oxford University Press, 2005)
- Stackert, Jeffrey, "Rewriting the Torah: literary revision in Deuteronomy and the holiness code" (Mohr Siebeck, 2007)
- Vanderkam, James, "An introduction to early Judaism" (Eerdmans, 2001)
- "Babylonian Captivity". Encyclopædia Britannica (11th ed.). 1911.
- "Babylonish Captivity". New International Encyclopedia. 1905. | https://en.wikipedia.org/wiki/Edict_of_Restoration |
4.25 | When you're dealing with quadratic equations, it can be really helpful to identify a, b, and c. These values are used to find the axis of symmetry, the discriminant, and even the roots using the quadratic formula. There's no question that it's important to know how to identify these values in a quadratic equation. This tutorial shows you how!
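For a quick illustration of how those values get used (a minimal sketch of the idea, not part of the tutorial itself): once a, b, and c are read off the standard form ax^2 + bx + c = 0, the discriminant and the quadratic formula follow directly.

```python
import math

def solve_quadratic(a, b, c):
    """Real roots of a*x**2 + b*x + c = 0 via the quadratic formula."""
    if a == 0:
        raise ValueError("a must be nonzero for the equation to be quadratic")
    discriminant = b**2 - 4*a*c      # its sign tells you how many real roots exist
    if discriminant < 0:
        return []                    # two complex roots, no real ones
    root = math.sqrt(discriminant)
    return [(-b + root) / (2*a), (-b - root) / (2*a)]

# For x^2 - 5x + 6 = 0: a = 1, b = -5, c = 6, giving roots 3.0 and 2.0
print(solve_quadratic(1, -5, 6))
```

The axis of symmetry mentioned above comes from the same values, as x = -b/(2a).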
Constants are parts of algebraic expressions that don't change. Check out this tutorial to see exactly what a constant looks like and why it doesn't change.
There is a bunch of vocabulary that you just need to know when it comes to algebra, and coefficient is one of the key words that you have to feel 100% comfortable with. Check out the tutorial and let us know if you want to learn more about coefficients!
Polynomials are those expressions that have variables raised to all sorts of powers and multiplied by all types of numbers. When you work with polynomials you need to know a bit of vocabulary, and one of the words you need to feel comfortable with is 'term'. So check out this tutorial, where you'll learn exactly what a 'term' in a polynomial is all about.
Got a quadratic polynomial? Want to put it in standard form? Watch this tutorial to learn the steps it takes to make sure a quadratic polynomial is in standard form!
The axis of symmetry is the vertical line that goes through the vertex of a quadratic equation. There's even a formula to help find it! In this tutorial, you'll see how to find the axis of symmetry for a given quadratic equation. | http://www.virtualnerd.com/algebra-1/quadratic-equations-functions/graphing/graph-basics/coefficients-a-b-c |
4 | Recently, I had been reading Visual Supports for People with Autism: A Guide for Parents and Professionals (Topics in Autism) and knew that I would need help teaching Lira how to multiply. I needed a way for her to visualize the multiplication problems, not just so she would get the correct product but so that she could understand what the purpose was. I began searching for worksheets with visual supports but all I could find were basic facts sheets for drills. Maybe I looked in the wrong places or maybe I needed to create them myself.
I built the worksheets so that a child could see the correspondence between the factors in the equation. For instance, one popsicle has two sticks. Therefore, one times two equals two. Or, each mushroom has eight spots so if you have two mushrooms, two times eight is sixteen. I also encourage skip counting so a child can make the connections between patterns and multiplication.
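For anyone who wants to generate those skip-counting patterns programmatically, here is a minimal Python sketch (an illustration of the idea only, not one of the printables):

```python
def skip_count(step, times=12):
    """Skip-counting sequence for one times table, e.g. step=2 -> 2, 4, 6, ..."""
    return [step * i for i in range(1, times + 1)]

# The n-th entry of the "step" sequence is exactly step x n,
# which is the connection the worksheets encourage children to notice.
for step in (2, 8):
    print(f"Counting by {step}s:", skip_count(step))
```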
Finally, the child can refer to the visual cues to complete the multiplication problems.
Lira loved it. She danced because she understood what multiplying was and how to do it. (Please note that Lira is only in the second grade and I aimed these printables at that age group. However, I believe they can be used for all ages, especially for those who are struggling to make the connections necessary to understand how multiplying works.)
I wanted to also make multiplication flash cards for remembering the facts but included the visual prompts from the worksheets as a reminder. The product is left blank so that these can be used easily for review.
For our family’s use of the cards, I laminated them and then punched a hole to keep the cards together with a binder ring.
If you wanted your child to do these as a self-guided activity, I recommend writing the answers on the back.
Free Multiplication Worksheets ~
- Multiplication Worksheets (all in a zipped file)
- Multiplying by 1 Worksheet
- Multiplying by 2 Worksheet
- Multiplying by 3 Worksheet
- Multiplying by 4 Worksheet
- Multiplying by 5 Worksheet
- Multiplying by 6 Worksheet
- Multiplying by 7 Worksheet
- Multiplying by 8 Worksheet
- Multiplying by 9 Worksheet
- Multiplying by 10 Worksheet
- Multiplying by 11 Worksheet
- Multiplying by 12 Worksheet
- Multiplication Facts (all in a zipped file)
- Multiplication Facts 1x 2x & All Covers
- Multiplication Facts 3x 4x 5x
- Multiplication Facts 6x 7x 8x
- Multiplication Facts 9x 10x
- Multiplication Facts ~ extensions to include 11x 12x
- Multiplication Facts 11x 12x
| http://www.meetpenny.com/2012/03/free-multiplication-worksheets-fact-cards-with-visual-cues/
4.28125 | A Novel Way of Doing Chemistry
Researchers have shown that they can use mechanical force to make a molecule more reactive.
By tugging on two sides of a specially designed molecule, chemists have been able to change its shape so that it becomes much more reactive. The researchers were able to control the reactivity of the molecule by applying a mechanical force on its chemical bonds. The energy for such a chemical transformation typically comes from light, heat, or electricity. “The key thing is that force will trigger the molecule to become reactive … and that [reactive state] would go on to do useful and productive chemistry,” says Jeffrey Moore, a chemistry and materials-science professor at the University of Illinois at Urbana-Champaign who published the work in Nature.
The finding could lead to self-healing materials, in which molecules under stress would change shape and react to make the material stronger. Another use would be polymers that react and light up right when they are damaged. “Say you’re a parachutist and you want to know whether the cords are still adequate; you could just get a visual read on them,” Moore says. The development could also lead to a new way of doing chemistry by using force instead of heat, light, or catalysts.
The idea of using force to make a molecule reactive has existed in organic chemistry since the 1940s, but until now it essentially boiled down to breaking long plastic polymers into smaller pieces. The process has not been very useful because no one has been able to control where the polymer breaks.
Moore and his colleagues were able to control the structure of a ring-shaped organic molecule, not just break it apart. The researchers attach polymer chains to the two sides of the molecule. Then they apply ultrasound to a solution containing the molecules; the ultrasound creates a mechanical force along the polymer chains, pulling them in opposite directions. The chains tug at the molecule and break a chemical bond in the ring, which triggers a more extensive rearrangement of the molecule and makes it reactive.
Moreover, the researchers found that force rearranges the molecule in a way that is different than if the molecule was exposed to other triggers, like heat and light. “One of the really beautiful, unexpected things about this work is that the reaction that occurs … is not the reaction that you would expect to happen,” says Stephen Craig, a chemistry professor at Duke University. “They pull on one molecule and make it rearrange into something that normally there is no way to rearrange into with heat or light.”
Once the molecule is in this new arrangement, it reacts with another molecule that can be detected with ultraviolet light. Moore says that this concept could be used to make polymers that would give some kind of visual cue when they are just about to break. Or, he says, researchers could further develop the principle and make materials in which, if researchers apply pressure at a certain point, the molecules in that area become reactive and stick together–a novel strategy for creating self-healing materials.
Craig contrasts the new work with the old-fashioned use of force in chemistry, which was, essentially, to break polymers into smaller fragments. What makes the new work exciting, he says, is that it shows that researchers can use force to make molecules come together and create bigger molecules. “As a long-term vision, one can imagine materials that get stronger, rather than weaker, as they experience greater and greater forces,” Craig says.
Others think that this might be the start of a new way of doing chemistry. “Conceptually, what it demonstrates is that stress can become a new tool to do organic chemistry,” says Virgil Percec, a chemistry professor at the University of Pennsylvania. “It’s a very elegant and beautiful demonstration.” By controlling chemical reactions using small mechanical forces on molecules, “we might be able to eliminate the need of environmentally unfriendly catalysts to do various reactions,” he says.
Next, says Moore, his team plans to find out whether the concept holds in solids and not just in polymer solutions. | https://www.technologyreview.com/s/407565/a-novel-way-of-doing-chemistry/ |
4.34375 | The definition of the unit system and common prefixes.
How to use and convert scientific units.
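As a concrete sketch of the idea (not taken from the tutorials themselves), converting between prefixed units is just multiplying by the ratio of the two prefixes' powers of ten:

```python
# Only a handful of common SI prefixes are included here for illustration.
PREFIXES = {
    "kilo": 1e3, "hecto": 1e2, "deka": 1e1,
    "": 1.0,
    "deci": 1e-1, "centi": 1e-2, "milli": 1e-3,
}

def convert(value, from_prefix, to_prefix):
    """Convert a value between two prefixed forms of the same base unit."""
    return value * PREFIXES[from_prefix] / PREFIXES[to_prefix]

# 2.5 kilometres expressed in centimetres: 2.5 * 1e3 / 1e-2 = 250000.0
print(convert(2.5, "kilo", "centi"))
```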
How to express a vector algebraically in terms of the unit vectors i and j.
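For instance, that decomposition looks like this (a worked example of the idea, not taken from the tutorial):

$$\vec{v} = \langle v_x,\, v_y \rangle = v_x\,\mathbf{i} + v_y\,\mathbf{j}, \qquad \text{e.g. } \langle 3,\,-2 \rangle = 3\,\mathbf{i} - 2\,\mathbf{j}.$$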
How to describe the unit circle. How to draw the unit circle and label its parts. How to strategize about finding coordinates on the unit circle.
Definitions and notation of ratios, and when we use them.
How to use the unit circle to derive the tangent identity and the Pythagorean identity.
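For reference, both identities follow from the fact that a point on the unit circle at angle $\theta$ has coordinates $(x, y) = (\cos\theta, \sin\theta)$, so the circle equation $x^2 + y^2 = 1$ yields (a standard derivation, sketched here rather than quoted from the tutorial):

$$\cos^2\theta + \sin^2\theta = 1, \qquad \tan\theta = \frac{y}{x} = \frac{\sin\theta}{\cos\theta} \quad (\cos\theta \neq 0).$$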
How to use the unit circle to derive identities that are useful in graphing the reciprocal trigonometric functions.
How to graph y = tan(theta) for 0 <= theta < pi/2.
How we define sine and cosine for all angle measures using the unit circle.
How to compute the cross product of two 3D vectors.
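A minimal sketch of the standard component formula (an illustration of the idea, not taken from the tutorial):

```python
def cross(u, v):
    """Cross product of two 3D vectors given as (x, y, z) tuples."""
    return (
        u[1] * v[2] - u[2] * v[1],  # x component
        u[2] * v[0] - u[0] * v[2],  # y component
        u[0] * v[1] - u[1] * v[0],  # z component
    )

# i x j = k, so (1, 0, 0) x (0, 1, 0) -> (0, 0, 1)
print(cross((1, 0, 0), (0, 1, 0)))
```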
How to define the tangent function for all angles.
How to define the reciprocal trigonometric functions, the reciprocal identities, and the Pythagorean identities. | https://www.brightstorm.com/tag/unit-prefix-deka/ |
4.21875 |
Stomach cancer, also known as gastric cancer, is cancer developing from the lining of the stomach. Early symptoms may include heartburn, upper abdominal pain, nausea and loss of appetite. Later signs and symptoms may include weight loss, yellow skin and whites of the eyes, vomiting, difficulty swallowing, and blood in the stool among others. The cancer may spread from the stomach to other parts of the body, particularly the liver, lungs, bones, lining of the abdomen and lymph nodes.
The most common cause is infection by the bacterium Helicobacter pylori, which accounts for more than 60% of cases. Certain types of H. pylori have greater risks than others. Other common causes include eating pickled vegetables and smoking. About 10% of cases run in families and between 1% and 3% of cases are due to genetic syndromes inherited from a person's parents such as hereditary diffuse gastric cancer. Most cases of stomach cancers are gastric carcinomas. This type can be divided into a number of subtypes. Lymphomas and mesenchymal tumors may also develop within the stomach. Most of the time, stomach cancer develops through a number of stages over a number of years. Diagnosis is usually by biopsy done during endoscopy. This is then followed by medical imaging to determine if the disease has spread to other parts of the body. Japan and South Korea, two countries that have high rates of disease, screen for stomach cancer.
A Mediterranean diet lowers the risk of stomach cancer, as does stopping smoking. There is tentative evidence that treating H. pylori decreases the future risk. If cancer is treated early, many cases can be cured. Treatments may include some combination of surgery, chemotherapy, radiation therapy, and targeted therapy. If treated late, palliative care may be advised. Outcomes are often poor, with a less than 10% five-year survival rate globally. This is largely because most people with the condition present with advanced disease. In the United States the five-year survival is 28%, while in South Korea it is over 65%, partly due to screening efforts.
Globally stomach cancer is the fifth leading cause of cancer and the third leading cause of death from cancer making up 7% of cases and 9% of deaths. In 2012 it occurred in 950,000 people and caused 723,000 deaths. Before the 1930s in much of the world, including most Western developed countries, it was the most common cause of death from cancer. Rates of death have been decreasing in many areas of the world since then. This is believed to be due to the eating of less salted and pickled foods as a result of the development of refrigeration as a method of keeping food fresh. Stomach cancer occurs most commonly in East Asia and Eastern Europe and it occurs twice as often in males as in females.
Signs and symptoms
Stomach cancer is often either asymptomatic (producing no noticeable symptoms) or it may cause only nonspecific symptoms (symptoms that are specific not only to stomach cancer, but also to other related or unrelated disorders) in its early stages. By the time symptoms occur, the cancer has often reached an advanced stage (see below) and may have also metastasized (spread to other, perhaps distant, parts of the body), which is one of the main reasons for its relatively poor prognosis. Stomach cancer can cause the following signs and symptoms:
Early cancers may be associated with indigestion or a burning sensation (heartburn). However, less than 1 in every 50 people referred for endoscopy due to indigestion has cancer. Abdominal discomfort and loss of appetite, especially for meat, can occur.
Gastric cancers that have enlarged and invaded normal tissue can cause weakness, fatigue, bloating of the stomach after meals, abdominal pain in the upper abdomen, nausea and occasional vomiting, diarrhea or constipation. Further enlargement may cause weight loss or bleeding with vomiting blood or having blood in the stool, the latter apparent as black discolouration (melena) and sometimes leading to anemia. Dysphagia suggests a tumour in the cardia or extension of the gastric tumour into the esophagus.
Gastric cancer is a multifactorial disease. Helicobacter pylori infection is an essential risk factor in 65–80% of gastric cancers, but only 2% of people with Helicobacter infections develop stomach cancer. The mechanism by which H. pylori induces stomach cancer potentially involves chronic inflammation, or the action of H. pylori virulence factors such as CagA. It was estimated that Epstein–Barr virus is responsible for 84,000 cases per year.
Smoking increases the risk of developing gastric cancer significantly, from 40% increased risk for current smokers to 82% increase for heavy smokers. Gastric cancers due to smoking mostly occur in the upper part of the stomach near the esophagus. Some studies show increased risk with alcohol consumption as well.
Dietary factors are not proven causes, but some foods including smoked foods, salt and salt-rich foods, red meat, processed meat, pickled vegetables, and bracken are associated with a higher risk of stomach cancer. Nitrates and nitrites in cured meats can be converted by certain bacteria, including H. pylori, into compounds that have been found to cause stomach cancer in animals. On the other hand, fresh fruit and vegetable intake, citrus fruit intake, and antioxidant intake are associated with a lower risk of stomach cancer. A Mediterranean diet is also associated with lower rates of stomach cancer, as is regular aspirin use.
Approximately 10% of cases show a genetic component.
People may possess certain risk factors, such as those that are physical or genetic, that can alter their susceptibility to gastric cancer. Obesity is one such physical risk factor that has been found to increase the risk of gastric adenocarcinoma by contributing to the development of gastroesophageal reflux disease (GERD). The exact mechanism by which obesity causes GERD is not completely known. Studies hypothesize that excess adipose tissue increases pressure on the stomach and the lower esophageal sphincter, which could play a role, yet no statistically significant data have been collected. However, the risk of gastric cardia adenocarcinoma, with GERD present, has been found to increase more than twofold for an obese person.
A genetic risk factor for gastric cancer is a genetic defect of the CDH1 gene known as hereditary diffuse gastric cancer (HDGC). The CDH1 gene, which codes for E-cadherin, lies on the 16th chromosome. When the gene experiences a particular mutation, gastric cancer develops through a mechanism that is not fully understood. This mutation is considered autosomal dominant, meaning that half of a carrier's children will likely inherit the same mutation. Diagnosis of hereditary diffuse gastric cancer usually takes place when at least two cases involving a family member, such as a parent or grandparent, are diagnosed, with at least one diagnosed before the age of 50. The diagnosis can also be made if there are at least three cases in the family, in which case age is not considered.
The International Cancer Genome Consortium is leading efforts to identify genomic changes involved in stomach cancer. A very small percentage of diffuse-type gastric cancers (see Histopathology below) arise from an inherited abnormal CDH1 gene. Genetic testing and treatment options are available for families at risk.
Other factors associated with increased risk are AIDS, diabetes, pernicious anemia, chronic atrophic gastritis, Menetrier's disease (hyperplastic, hypersecretory gastropathy), and intestinal metaplasia.
To find the cause of symptoms, the doctor asks about the patient's medical history, does a physical exam, and may order laboratory studies. The patient may also have one or all of the following exams:
- Gastroscopic exam is the diagnostic method of choice. This involves insertion of a fibre optic camera into the stomach to visualise it.
- Upper GI series (may be called barium roentgenogram).
- Computed tomography or CT scanning of the abdomen may reveal gastric cancer, but is more useful to determine invasion into adjacent tissues, or the presence of spread to local lymph nodes. Wall thickening of more than 1 cm that is focal, eccentric and enhancing favours malignancy.
In 2013, Chinese and Israeli scientists reported a successful pilot study of a breathalyzer-style breath test intended to diagnose stomach cancer by analyzing exhaled chemicals without the need for an intrusive endoscopy. A larger-scale clinical trial of this technology was completed in 2014.
Abnormal tissue seen in a gastroscope examination will be biopsied by the surgeon or gastroenterologist. This tissue is then sent to a pathologist for histological examination under a microscope to check for the presence of cancerous cells. A biopsy, with subsequent histological analysis, is the only sure way to confirm the presence of cancer cells.
Various gastroscopic modalities have been developed to increase the yield of detecting dysplasia, including staining the mucosa with a dye that accentuates the cell structure and can identify areas of dysplasia. Endocytoscopy involves ultra-high magnification to visualise cellular structure to better determine areas of dysplasia. Other gastroscopic modalities such as optical coherence tomography are also being tested investigationally for similar applications.
A number of cutaneous conditions are associated with gastric cancer. A condition of darkened hyperplasia of the skin, frequently of the axilla and groin, known as acanthosis nigricans, is associated with intra-abdominal cancers such as gastric cancer. Other cutaneous manifestations of gastric cancer include tripe palms (a similar darkening hyperplasia of the skin of the palms) and the Leser-Trelat sign, which is the rapid development of skin lesions known as seborrheic keratoses.
- Gastric adenocarcinoma is a malignant epithelial tumour, originating from glandular epithelium of the gastric mucosa. Stomach cancers are overwhelmingly adenocarcinomas (90%). Histologically, there are two major types of gastric adenocarcinoma (Lauren classification): intestinal type and diffuse type. Adenocarcinomas tend to aggressively invade the gastric wall, infiltrating the muscularis mucosae, the submucosa and thence the muscularis propria. Intestinal-type adenocarcinoma tumour cells form irregular tubular structures, harbouring pluristratification, multiple lumens and reduced stroma (a "back to back" aspect). Often, it is associated with intestinal metaplasia in the neighbouring mucosa. Depending on glandular architecture, cellular pleomorphism and mucosecretion, adenocarcinoma may present three degrees of differentiation: well, moderately and poorly differentiated. In diffuse-type adenocarcinoma (mucinous, colloid, linitis plastica, leather-bottle stomach), tumour cells are discohesive and secrete mucus, which is released into the interstitium, producing large pools of mucus/colloid (optically "empty" spaces). It is poorly differentiated. If the mucus remains inside the tumour cell, it pushes the nucleus to the periphery, producing the "signet-ring cell" appearance.
- Around 5% of gastric malignancies are lymphomas (MALTomas, or MALT lymphoma).
- Carcinoid and stromal tumors may also occur.
If cancer cells are found in the tissue sample, the next step is to stage, or find out the extent of the disease. Various tests determine whether the cancer has spread and, if so, what parts of the body are affected. Because stomach cancer can spread to the liver, the pancreas, and other organs near the stomach as well as to the lungs, the doctor may order a CT scan, a PET scan, an endoscopic ultrasound exam, or other tests to check these areas. Blood tests for tumor markers, such as carcinoembryonic antigen (CEA) and carbohydrate antigen (CA) may be ordered, as their levels correlate to extent of metastasis, especially to the liver, and the cure rate.
Staging may not be complete until after surgery. The surgeon removes nearby lymph nodes and possibly samples of tissue from other areas in the abdomen for examination by a pathologist.
- Stage 0. Limited to the inner lining of the stomach. Treatable by endoscopic mucosal resection when found very early (in routine screenings); otherwise by gastrectomy and lymphadenectomy without need for chemotherapy or radiation.
- Stage I. Penetration to the second or third layers of the stomach (Stage 1A) or to the second layer and nearby lymph nodes (Stage 1B). Stage 1A is treated by surgery, including removal of the omentum. Stage 1B may be treated with chemotherapy (5-fluorouracil) and radiation therapy.
- Stage II. Penetration to the second layer and more distant lymph nodes, or the third layer and only nearby lymph nodes, or all four layers but not the lymph nodes. Treated as for Stage I, sometimes with additional neoadjuvant chemotherapy.
- Stage III. Penetration to the third layer and more distant lymph nodes, or penetration to the fourth layer and either nearby tissues or nearby or more distant lymph nodes. Treated as for Stage II; a cure is still possible in some cases.
- Stage IV. Cancer has spread to nearby tissues and more distant lymph nodes, or has metastasized to other organs. A cure is very rarely possible at this stage. Some other techniques to prolong life or improve symptoms are used, including laser treatment, surgery, and/or stents to keep the digestive tract open, and chemotherapy by drugs such as 5-fluorouracil, cisplatin, epirubicin, etoposide, docetaxel, oxaliplatin, capecitabine or irinotecan.
In a study of open-access endoscopy in Scotland, 7% of patients were diagnosed in Stage I, 17% in Stage II, and 28% in Stage III. In a Minnesota population, 10% were diagnosed in Stage I, 13% in Stage II, and 18% in Stage III. However, in a high-risk population in the Valdivia Province of southern Chile, only 5% of patients were diagnosed in the first two stages and 10% in Stage III.
Getting rid of H. pylori in those who are infected decreases the risk of stomach cancer, at least in those who are Asian. A 2014 meta-analysis of observational studies found that a diet high in fruits, mushrooms, garlic, soybeans, and green onions was associated with a lower risk of stomach cancer in the Korean population. Low doses of vitamins, especially from a healthy diet, decrease the risk of stomach cancer. A previous review of antioxidant supplementation found no supporting evidence and possibly worse outcomes.
Cancer of the stomach is difficult to cure unless it is found at an early stage (before it has begun to spread). Unfortunately, because early stomach cancer causes few symptoms, the disease is usually advanced when the diagnosis is made. Treatment for stomach cancer may include surgery, chemotherapy, and/or radiation therapy. New treatment approaches such as biological therapy and improved ways of using current methods are being studied in clinical trials.
Surgery remains the only curative therapy for stomach cancer. Of the different surgical techniques, endoscopic mucosal resection (EMR) is a treatment for early gastric cancer (tumor only involves the mucosa) that was pioneered in Japan but is also available in the United States at some centers. In this procedure, the tumor, together with the inner lining of the stomach (mucosa), is removed from the wall of the stomach using an electrical wire loop through the endoscope. The advantage is that it is a much smaller operation than removing the stomach. Endoscopic submucosal dissection (ESD) is a similar technique pioneered in Japan, used to resect a large area of mucosa in one piece. If the pathologic examination of the resected specimen shows incomplete resection or deep invasion by the tumor, the patient would need a formal stomach resection.
Those with metastatic disease at the time of presentation may receive palliative surgery. While this remains controversial, due to the possibility of complications from the surgery itself and the fact that it may delay chemotherapy, the data so far are mostly positive, with improved survival rates being seen in those treated with this approach.
The use of chemotherapy to treat stomach cancer has no firmly established standard of care. Unfortunately, stomach cancer has not been particularly sensitive to these drugs, and chemotherapy, if used, has usually served to palliatively reduce the size of the tumor, relieve symptoms of the disease and increase survival time. Some drugs used in stomach cancer treatment have included: 5-FU (fluorouracil) or its analog capecitabine, BCNU (carmustine), methyl-CCNU (semustine) and doxorubicin (Adriamycin), as well as mitomycin C, and more recently cisplatin and taxotere, often using drugs in various combinations. The relative benefits of these different drugs, alone and in combination, are unclear. Clinical researchers have explored the benefits of giving chemotherapy before surgery to shrink the tumor, or as adjuvant therapy after surgery to destroy remaining cancer cells.
Recently, treatment with human epidermal growth factor receptor 2 (HER2) inhibitor, trastuzumab, has been demonstrated to increase overall survival in inoperable locally advanced or metastatic gastric carcinoma over-expressing the HER2/neu gene. In particular, HER2 is overexpressed in 13-22% of patients with gastric cancer. Of note, HER2 overexpression in gastric neoplasia is heterogeneous and comprises a minority of tumor cells (less than 10% of gastric cancers overexpress HER2 in more than 5% of tumor cells). Hence, this heterogeneous expression should be taken into account for HER2 testing, particularly in small samples such as biopsies, requiring the evaluation of more than one bioptic sample.
The prognosis of stomach cancer is generally poor, because the tumour has often metastasised by the time of discovery and because most people with the condition are elderly (the median age at presentation is between 70 and 75 years). The five-year survival rate for stomach cancer is reported to be less than 10 percent.
Worldwide, stomach cancer is the fifth most common cancer, with 952,000 cases diagnosed in 2012. It is more common in men and in developing countries. In 2012, it represented 8.5% of cancer cases in men, making it the fourth most common cancer in men. Also in 2012, the number of deaths was 700,000, having decreased slightly from 774,000 in 1990, making it the third leading cause of cancer death after lung cancer and liver cancer.
Less than 5% of stomach cancers occur in people under 40 years of age, with 81.1% of that 5% in the age group of 30 to 39 and 18.9% in the age group of 20 to 29.
In 2014, stomach cancer accounted for 0.61% of deaths (13,303 cases) in the United States. In China, stomach cancer accounted for 3.56% of all deaths (324,439 cases). The highest rate of stomach cancer was in Mongolia, at 28 cases per 100,000 people.
In the United Kingdom, stomach cancer is the fifteenth most common cancer (around 7,100 people were diagnosed with stomach cancer in 2011), and it is the tenth most common cause of cancer death (around 4,800 people died in 2012).
Incidence and mortality rates of gastric cancer vary greatly across Africa. The GLOBOCAN system is currently the most widely used method for comparing these rates between countries, but African incidence and mortality estimates differ among countries, possibly because not all countries have universal access to a registry system. Variation as drastic as estimated rates from 0.3 per 100,000 in Botswana to 20.3 per 100,000 in Mali has been observed. In Uganda, the incidence of gastric cancer has increased from 0.8 per 100,000 in the 1960s to 5.6 per 100,000. Gastric cancer, though present, is relatively uncommon compared with high-incidence countries such as Japan or China. One suspected cause of the variation within Africa and between other countries is differences among strains of the Helicobacter pylori bacterium. The common trend is that H. pylori infection increases the risk of gastric cancer; however, this is not the case in Africa, a phenomenon known as the "African enigma." Although the bacterium is found in Africa, evidence supports the idea that different strains, with mutations in the bacterial genotype, may contribute to the difference in cancer development between African countries and others outside the continent. However, increasing access to health care and treatment has also been associated with the rising recorded incidence, particularly in Uganda.
The stomach is a muscular organ of the gastrointestinal tract that holds food and begins the digestive process by secreting gastric juice. The most common cancers of the stomach are adenocarcinomas, but other histological types have been reported. Signs vary but may include vomiting (especially if blood is present), weight loss, anemia, and lack of appetite. Bowel movements may be dark and tarry. To determine whether cancer is present in the stomach, special X-rays and/or abdominal ultrasound may be performed. Gastroscopy, a test using an instrument called an endoscope to examine the stomach, is a useful diagnostic tool that can also take samples of the suspected mass for histopathological analysis to confirm or rule out cancer. The most definitive method of cancer diagnosis is through open surgical biopsy. Most stomach tumors are malignant, with evidence of spread to lymph nodes or liver, making treatment difficult. Except for lymphoma, surgery is the most frequent treatment option for stomach cancers, but it is associated with significant risks.
- "Stomach (Gastric) Cancer". NCI. Retrieved 1 July 2014.
- "Gastric Cancer Treatment (PDQ®)". NCI. 2014-04-17. Retrieved 1 July 2014.
- Ruddon, Raymond W. (2007). Cancer biology (4th ed.). Oxford: Oxford University Press. p. 223. ISBN 9780195175431.
- Sim, edited by Fiona; McKee, Martin (2011). Issues in public health (2nd ed.). Maidenhead: Open University Press. p. 74. ISBN 9780335244225.
- World Cancer Report 2014. World Health Organization. 2014. pp. Chapter 5.4. ISBN 9283204298.
- Chang, A. H.; Parsonnet, J. (2010). "Role of Bacteria in Oncogenesis". Clinical Microbiology Reviews 23 (4): 837–857. doi:10.1128/CMR.00012-10. ISSN 0893-8512. PMC 2952975. PMID 20930075.
- "Stomach (Gastric) Cancer Prevention (PDQ®)". NCI. 2014-02-27. Retrieved 1 July 2014.
- Orditura, M; Galizia, G; Sforza, V; Gambardella, V; Fabozzi, A; Laterza, MM; Andreozzi, F; Ventriglia, J; Savastano, B; Mabilia, A; Lieto, E; Ciardiello, F; De Vita, F (February 2014). "Treatment of gastric cancer." (PDF). World Journal of Gastroenterology 20 (7): 1635–49. doi:10.3748/wjg.v20.i7.1635. PMC 3930964. PMID 24587643. Cite uses deprecated parameter
- "SEER Stat Fact Sheets: Stomach Cancer". NCI. Retrieved 18 June 2014.
- "Chapter 1.1". World Cancer Report 2014. World Health Organization. 2014. ISBN 9283204298.
- Hochhauser, Jeffrey Tobias, Daniel (2010). Cancer and its management (6th ed.). Chichester, West Sussex, UK: Wiley-Blackwell. p. 259. ISBN 9781444306378.
- Khleif, Edited by Roland T. Skeel, Samir N. (2011). Handbook of cancer chemotherapy (8th ed.). Philadelphia: Wolter Kluwer. p. 127. ISBN 9781608317820.
- Joseph A Knight (2010). Human Longevity: The Major Determining Factors. Author House. p. 339. ISBN 9781452067223.
- Moore, edited by Rhonda J.; Spiegel, David (2004). Cancer, culture, and communication. New York: Kluwer Academic. p. 139. ISBN 9780306478857.
- "Statistics and outlook for stomach cancer". Cancer Research UK. Retrieved 19 February 2014.
- "Guidance on Commissioning Cancer Services Improving Outcomes in Upper Gastro-intestinal Cancers" (PDF). NHS. Jan 2001.
- Lee YY, Derakhshan MH (Jun 2013). "Environmental and lifestyle risk factors of gastric cancer". Arch. Iran Med. 16 (6): 358–65. PMID 23725070.
- "Proceedings of the fourth Global Vaccine Research Forum" (PDF). Initiative for Vaccine Research team of the Department of Immunization, Vaccines and Biologicals. WHO. April 2004. Retrieved 2009-05-11.
Epidemiology of Helicobacter pylori and gastric cancer…
- González CA, Sala N, Rokkas T; Sala; Rokkas (2013). "Gastric cancer: epidemiologic aspects". Helicobacter 18 (Supplement 1): 34–38. doi:10.1111/hel.12082. PMID 24011243.
- Hatakeyama, M. & Higashi, H; Higashi (2005). "Helicobacter pylori CagA: a new paradigm for bacterial carcinogenesis". Cancer Science 96 (12): 835–843. doi:10.1111/j.1349-7006.2005.00130.x. PMID 16367902.
- "What Are The Risk Factors For Stomach Cancer(Website)". American Cancer Society. Retrieved 2010-03-31.
- Nomura A, Grove JS, Stemmermann GN, Severson RK; Grove; Stemmermann; Severson (1990). "Cigarette smoking and stomach cancer". Cancer Research 50 (21): 7084. PMID 2208177.
- Trédaniel J, Boffetta P, Buiatti E, Saracci R, Hirsch A; Boffetta; Buiatti; Saracci; Hirsch (August 1997). "Tobacco smoking and gastric cancer: Review and meta-analysis". International Journal of Cancer 72 (4): 565–73. doi:10.1002/(SICI)1097-0215(19970807)72:4<565::AID-IJC3>3.0.CO;2-O. PMID 9259392.
- Thrumurthy SG, Chaudry MA, Hochhauser D, Ferrier K, Mughal M; Chaudry; Hochhauser; Mughal (2013). "The diagnosis and management of gastric cancer". British Medical Journal 347 (16): 1695–6. doi:10.1136/bmj.f6367. PMID 24291271.
- Tumors of the GI Tract at Merck Manual of Diagnosis and Therapy Professional Edition
- Jakszyn P, González CA; Gonzalez (2006). "Nitrosamine and related food intake and gastric and oesophageal cancer risk: A systematic review of the epidemiological evidence" (PDF). World J Gastroenterol 12 (27): 4296–4303. PMC 4087738. PMID 16865769.
- Alonso-Amelot ME, Avendaño M; Avendaño (March 2002). "Human carcinogenesis and bracken fern: a review of the evidence". Current Medicinal Chemistry 9 (6): 675–86. doi:10.2174/0929867023370743. PMID 11945131.
- Buckland G, Agudo A, Lujan L, Jakszyn P, Bueno-De-Mesquita HB, Palli D, Boeing H, Carneiro F, Krogh V; Agudo; Luján; Jakszyn; Bueno-De-Mesquita; Palli; Boeing; Carneiro; Krogh; Sacerdote; Tumino; Panico; Nesi; Manjer; Regnér; Johansson; Stenling; Sanchez; Dorronsoro; Barricarte; Navarro; Quirós; Allen; Key; Bingham; Kaaks; Overvad; Jensen; Olsen; et al. (2009). "Adherence to a Mediterranean diet and risk of gastric adenocarcinoma within the European Prospective Investigation into Cancer and Nutrition (EPIC) cohort study". American Journal of Clinical Nutrition 91 (2): 381–90. doi:10.3945/ajcn.2009.28209. PMID 20007304.
- Josefssson, M.; Ekblad, E. (2009). "22. Sodium Iodide Symporter (NIS) in Gastric Mucosa: Gastric Iodide Secretion". In Preedy, Victor R.; Burrow, Gerard N.; Watson, Ronald. Comprehensive Handbook of Iodine: Nutritional, Biochemical, Pathological and Therapeutic Aspects. Elsevier. pp. 215–220. ISBN 978-0-12-374135-6.
- Venturi, Sebastiano (2011). "Evolutionary Significance of Iodine". Current Chemical Biology- 5 (3): 155–162. doi:10.2174/187231311796765012. ISSN 1872-3136.
- Venturi II, S.; Donati, F.M.; Venturi, A.; Venturi, M.;; Venturi, A; Venturi, M; Grossi, L; Guidi, A (2000). "Role of iodine in evolution and carcinogenesis of thyroid, breast and stomach.". Adv Clin Path 4 (1): 11–17. PMID 10936894.
- Venturi, S.; Donati, F.M.; Venturi, A.; Venturi, M. (2000). "Environmental Iodine Deficiency: A Challenge to the Evolution of Terrestrial Life?". Thyroid 10 (8): 727–9. doi:10.1089/10507250050137851. PMID 11014322.
- Chandanos, Evangelos (December 2007). Estrogen in the development of esophageal and gastric adenocarcinoma (PDF) (Doctoral thesis). Karolinska Institutet. ISBN 978-91-7357-370-2.
- Chandanos E, Lagergren J. Oestrogen and the enigmatic male predominance of gastric cancer. Eur J Cancer. 2008 Nov;44(16):2397-403.
- Qin J, Liu M, Ding Q, Ji X, Hao Y, Wu X, Xiong J. The direct effect of estrogen on cell viability and apoptosis in human gastric cancer cells. Mol Cell Biochem. 2014 Oct;395(1-2):99-107.
- Lee H-J, Yang H-K, Ahn Y-O; Yang; Ahn (2002). "Gastric cancer in Korea". Gastric Cancer 5 (3): 177–82. doi:10.1007/s101200200031. PMID 12378346.
- Crew K, Neugut A (January 2006). "Epidemiology of gastric cancer". World Journal of Gastroenterology 12 (3): 354–62. doi:10.3748/wjg.v12.i3.354. PMC 4066052. PMID 16489633.
- Hampel Howard, Abraham Neena S., and El-Serag Hashem B. (August 2005). "Meta-Analysis: obesity and the risk for gastroesophageal reflux disease and its complications". Annal of Internal Medicine 143 (3): 199–211. doi:10.7326/0003-4819-143-3-200508020-00006.
- "Hereditary Diffuse Cancer". No Stomach for Cancer. Retrieved 21 Oct 2014.
- "Gastric Cancer — Adenocarcinoma". International Cancer Genome Consortium. Retrieved 24 February 2014.
- "Gastric Cancer — Intestinal- and diffuse-type". International Cancer Genome Consortium. Retrieved 24 February 2014.
- Brooks-Wilson AR, Kaurah P, Suriano G, Leach S, Senz J, Grehan N, Butterfield YS, Jeyes J, Schinas J; Kaurah; Suriano; Leach; Senz; Grehan; Butterfield; Jeyes; Schinas; Bacani; Kelsey; Ferreira; MacGillivray; MacLeod; Micek; Ford; Foulkes; Australie; Greenberg; Lapointe; Gilpin; Nikkel; Gilchrist; Hughes; Jackson; Monaghan; Oliveira; Seruca; Gallinger; et al. (2004). "Germline E-cadherin mutations in hereditary diffuse gastric cancer: assessment of 42 new families and review of genetic screening criteria". Journal of Medical Genetics 41 (7): 508–17. doi:10.1136/jmg.2004.018275. PMC 1735838. PMID 15235021.
- Tseng C-H, Tseng F-H; Tseng (2014). "Diabetes and gastric cancer: The potential links". World J Gastroenterol 20 (7): 1701–11. doi:10.3748/wjg.v20.i7.1701. PMC 3930970. PMID 24587649.
- Crosby DA, Donohoe CL, Fitzgerald L, Muldoon C, Hayes B, O’Toole D, Reynolds JV; Donohoe; Fitzgerald; Muldoon; Hayes; O'Toole; Reynolds (2004). "Gastric Neuroendocrine Tumours". Digestive Surgery 29 (4): 331–348. doi:10.1159/000342988. PMID 23075625.
- Kim J, Cheong JH, Chen J, Hyung WJ, Choi SH, Noh SH; Cheong; Chen; Hyung; Choi; Noh (2004). "Menetrier's Disease in Korea: Report of Two Cases and Review of Cases in a Gastric Cancer Prevalent Region" (PDF). Yonsei Medical Journal 45 (3): 555–560. doi:10.3349/ymj.2004.45.3.555. PMID 15227748.
- Tsukamoto T, Mizoshita T, Tatematsu M; Mizoshita; Tatematsu (2006). "Gastric-and-intestinal mixed-type intestinal metaplasia: aberrant expression of transcription factors and stem cell intestinalization". Gastric Cancer 9 (3): 156–166. doi:10.1007/s10120-006-0375-6. PMID 16952033.
- Virmani, V; Khandelwal, A; Sethi, V; Fraser-Hill, M; Fasih, N; Kielar, A (2012). "Neoplastic stomach lesions and their mimickers: Spectrum of imaging manifestations". Cancer Imaging 12: 269–78. doi:10.1102/1470-7330.2012.0031. PMC 3458788. PMID 22935192.
- Xu ZQ, Broza YY, Ionsecu R; et al. (March 2013). "A nanomaterial-based breath test for distinguishing gastric cancer from benign gastric conditions". Br. J. Cancer 108 (4): 941–50. doi:10.1038/bjc.2013.44. PMC 3590679. PMID 23462808. [Breath Test Could Detect And Diagnose Stomach Cancer Lay summary] Check
value (help) – Medical News Today (6 March 2013).
- "Detection of precancerous gastric lesions and gastric cancer through exhaled breath". Gut. 13 April 2015. doi:10.1136/gutjnl-2014-308536.
- Inoue H, Kudo S-, Shiokawa A; Kudo; Shiokawa (2005). "Technology Insight: laser-scanning confocal microscopy and endocytoscopy for cellular observation of the gastrointestinal tract". Nature Clinical Practice Gastroenterology & Hepatology 2 (1): 31–7. doi:10.1038/ncpgasthep0072. PMID 16265098.
- Pentenero M, Carrozzo M, Pagano M, Gandolfo S; Carrozzo; Pagano; Gandolfo (2004). "Oral acanthosis nigricans, tripe palms and sign of leser-trelat in a patient with gastric adenocarcinoma". International Journal of Dermatology 43 (7): 530–2. doi:10.1111/j.1365-4632.2004.02159.x. PMID 15230897.
- Kumar; et al. (2010). Pathologic Basis of Disease (8th ed.). Saunders Elsevier. p. 784. ISBN 978-1-4160-3121-5.
- Kumar 2010, p. 786
- Lim JS, Yun MJ, Kim MJ, Hyung WJ, Park MS, Choi JY, Kim TS, Lee JD, Noh SH, Kim KW; Yun; Kim; Hyung; Park; Choi; Kim; Lee; Noh; Kim (2006). "CT and PET in stomach cancer: preoperative staging and monitoring of response to therapy". Radiographics 26 (1): 143–156. doi:10.1148/rg.261055078. PMID 16418249.
- "Detailed Guide: Stomach Cancer Treatment Choices by Type and Stage of Stomach Cancer". American Cancer Society. 2009-11-03.
- Guy Slowik (October 2009). "What Are The Stages Of Stomach Cancer?". ehealthmd.com.
- "Detailed Guide: Stomach Cancer: How Is Stomach Cancer Staged?". American Cancer Society.
- Paterson HM, McCole D, Auld CD; McCole; Auld (2006). "Impact of open-access endoscopy on detection of early oesophageal and gastric cancer 1994–2003: population-based study". Endoscopy 38 (5): 503–7. doi:10.1055/s-2006-925124. PMID 16767587.
- Crane SJ, Locke GR, Harmsen WS, Zinsmeister AR, Romero Y, Talley NJ; Locke Gr; Harmsen; Zinsmeister; Romero; Talley (2008). "Survival Trends in Patients With Gastric and Esophageal Adenocarcinomas: A Population-Based Study". Mayo Clinic Proceedings 83 (10): 1087–94. doi:10.4065/83.10.1087. PMC 2597541. PMID 18828967.
- Heise K, Bertran E, Andia ME, Ferreccio C; Bertran; Andia; Ferreccio (2009). "Incidence and survival of stomach cancer in a high-risk population of Chile". World Journal of Gastroenterology 15 (15): 1854–62. doi:10.3748/wjg.15.1854. PMC 2670413. PMID 19370783.
- Ford, AC; Forman, D; Hunt, RH; Yuan, Y; Moayyedi, P (20 May 2014). "Helicobacter pylori eradication therapy to prevent gastric cancer in healthy asymptomatic infected individuals: systematic review and meta-analysis of randomised controlled trials.". BMJ 348: g3174. doi:10.1136/bmj.g3174. PMC 4027797. PMID 24846275.
- Woo HD, Park S, Oh K, Kim HJ, Shin HR, Moon HK, Kim J (2014). "Diet and cancer risk in the Korean population: a meta- analysis" (PDF). Asian Pacific Journal of Cancer Prevention 15 (19): 8509–19. doi:10.7314/apjcp.2014.15.19.8509. PMID 25339056.
- Kong, P; Cai, Q; Geng, Q; Wang, J; Lan, Y; Zhan, Y; Xu, D (2014). "Vitamin intake reduce the risk of gastric cancer: meta-analysis and systematic review of randomized and observational studies.". PLOS ONE 9 (12): e116060. doi:10.1371/journal.pone.0116060. PMID 25549091.
- Bjelakovic, G; Nikolova, D; Simonetti, RG; Gluud, C (16 July 2008). "Antioxidant supplements for preventing gastrointestinal cancers.". Cochrane Database of Systematic Reviews (3): CD004183. doi:10.1002/14651858.CD004183.pub3. PMID 18677777.
- Bjelakovic, G; Nikolova, D; Gluud, LL; Simonetti, RG; Gluud, C (14 March 2012). "Antioxidant supplements for prevention of mortality in healthy participants and patients with various diseases.". Cochrane Database of Systematic Reviews 3: CD007176. doi:10.1002/14651858.CD007176.pub2. PMID 22419320.
- Roopma Wadhwa, Takashi Taketa, Kazuki Sudo, Mariela A. Blum, Jaffer A. Ajani; Taketa; Sudo; Blum; Ajani (2013). "Modern Oncological Approaches to Gastric Adenocarcinoma". Gastroenterology Clinics of North America 42 (2): 359–369. doi:10.1016/j.gtc.2013.01.011. PMID 23639645.
- Ke Chen, Xiao-Wu Xu, Ren-Chao Zhang, Yu Pan, Di Wu, Yi-Ping Mou; Xu; Zhang; Pan; Wu; Mou (2013). "Systematic review and meta-analysis of laparoscopy-assisted and open total gastrectomy for gastric cancer". World J Gastroenterol 19 (32): 5365–76. doi:10.3748/wjg.v19.i32.5365. PMC 3752573. PMID 23983442.
- Jennifer L. Pretz, Jennifer Y. Wo, Harvey J. Mamon, Lisa A. Kachnic, Theodore S. Hong; Wo; Mamon; Kachnic; Hong (2011). "Chemoradiation Therapy: Localized Esophageal, Gastric, and Pancreatic Cancer". Surgical Oncology Clinics of North America 22 (3): 511–524. doi:10.1016/j.soc.2013.02.005. PMID 23622077.
- Judith Meza-Junco, Heather-Jane Au, Michael B Sawyer; Au; Sawyer (2011). "Critical appraisal of trastuzumab in treatment of advanced stomach cancer". Cancer Management and Research 2011 (3): 57–64. doi:10.2147/CMAR.S12698. PMC 3085240. PMID 21556317.
- Sun, J; Song, Y; Wang, Z; Chen, X; Gao, P; Xu, Y; Zhou, B; Xu, H (December 2013). "Clinical significance of palliative gastrectomy on the survival of patients with incurable advanced gastric cancer: a systematic review and meta-analysis." (PDF). BMC Cancer 13 (1): 577. doi:10.1186/1471-2407-13-577. PMID 24304886.
- Scartozzi M, Galizia E, Verdecchia L, Berardi R, Antognoli S, Chiorrini S, Cascinu S; Galizia; Verdecchia; Berardi; Antognoli; Chiorrini; Cascinu (2007). "Chemotherapy for advanced gastric cancer: across the years for a standard of care". Expert Opinion on Pharmacotherapy 8 (6): 797–808. doi:10.1517/14656522.214.171.1247. PMID 17425475.
- Fusco N, Rocco EG, Del Conte C, Pellegrini C, Bulfamante G, Di Nuovo F, Romagnoli S, Bosari S (Jun 2013). "HER2 in gastric cancer: a digital image analysis in pre-neoplastic, primary and metastatic lesions". Mod Pathol 26 (6): 816–24. doi:10.1038/modpathol.2012.228. PMID 23348899.
- Cabebe, EC; Mehta, VK; Fisher, G, Jr (21 January 2014). Talavera, F; Movsas, M; McKenna, R; Harris, JE, ed. "Gastric Cancer". Medscape Reference. WebMD. Retrieved 4 April 2014.
- "WHO Disease and injury country estimates". World Health Organization. 2009. Retrieved Nov 11, 2009.
- Parkin DM, Bray F, Ferlay J, Pisani P; Bray; Ferlay; Pisani (2005). "Global Cancer Statistics, 2002". CA: A Cancer Journal for Clinicians 55 (2): 74–108. doi:10.3322/canjclin.55.2.74. PMID 15761078.
- "Are the number of cancer cases increasing or decreasing in the world?". WHO Online Q&A. WHO. 1 April 2008. Retrieved 2009-05-11.
- World Cancer Report 2014. International Agency for Research on Cancer, World Health Organization. 2014. ISBN 978-92-832-0432-9.
- Lozano, R; Naghavi, M; Foreman, K; Lim, S; Shibuya, K; Aboyans, V; Abraham, J; Adair, T; Aggarwal, R; Ahn, SY; Alvarado, M; Anderson, HR; Anderson, LM; Andrews, KG; Atkinson, C; Baddour, LM; Barker-Collo, S; Bartels, DH; Bell, ML; Benjamin, EJ; Bennett, D; Bhalla, K; Bikbov, B; Bin Abdulhak, A; Birbeck, G; Blyth, F; Bolliger, I; Boufous, S; Bucello, C; et al. (15 December 2012). "Global and regional mortality from 235 causes of death for 20 age groups in 1990 and 2010: a systematic analysis for the Global Burden of Disease Study 2010". Lancet 380 (9859): 2095–128. doi:10.1016/S0140-6736(12)61728-0. PMID 23245604.
- "PRESS RELEASE N° 224 Global battle against cancer won’t be won with treatment alone Effective prevention measures urgently needed to prevent cancer crisis" (PDF). World Health Organization. 3 February 2014. Retrieved 14 March 2014.
- "Gastric Cancer in Young Adults". Revista Brasileira de Cancerologia 46 (3). Jul 2000.
- "Health profile: United States". Le Duc Media. Retrieved 31 Jan 2016.
- "Health profile: China". Le Duc Media. Retrieved 31 Jan 2016.
- "Stomach Cancer: Death Rate Per 100,000". Le Duc Media. Retrieved 13 March 2014.
- "Stomach cancer statistics". Cancer Research UK. Retrieved 28 October 2014.
- Asombang Akwi W, Rahman Rubayat, Ibdah Jamal A (2014). "Gastric cancer in Africa: Current management and outcomes". World Journal of Gastroenterology 20: 3875–79. doi:10.3748/wjg.v20.i14.3875.
- Louw J. A., Kidd M. S. G., Kummer A. F., Taylor K., Kotze U., and Hanslo D. (November 2001). "The relationship between helicobacter pylori infection, the virulence genotypes of the infecting strain and gastric cancer in the African setting". Helicobacter 6 (4): 268–73. doi:10.1046/j.1523-5378.2001.00044.x.
- Withrow SJ, MacEwen EG, ed. (2001). Small Animal Clinical Oncology (3rd ed.). W.B. Saunders.
|Wikimedia Commons has media related to Stomach cancer.| | https://en.wikipedia.org/wiki/Gastric_cancer |
Top 10 Ways to Teach Vowel Pronunciation in English
Every ESL student should have a pronunciation element to his language studies. Sometimes, though, a student may need more than one strategy for tackling English pronunciation.
By making sure you use variety in your pronunciation lessons, your students will be more successful with English pronunciation and gain the confidence that comes with it.
How to Teach Vowel Pronunciation in English
Listen and repeat
This will be the first and most common method of teaching sound specific pronunciation in English. You say the target sound and have your students repeat it after you. If you are teaching a long word with multiple syllables, start with the final syllable of the word and have your class repeat it. Then add the penultimate syllable and say the two together having your class repeat after you. Work backwards in this manner until your students are able to pronounce the entire word correctly.
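For instance (a made-up classroom example, using rough syllable breaks rather than exact phonetics), the word "communication" could be built up backwards like this:
tion
ca-tion
ni-ca-tion
mu-ni-ca-tion
com-mu-ni-ca-tion
Students repeat each stage after you until the whole word comes out smoothly.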
When working on a specific sound, it may help your students to isolate that particular sound from any others. Instead of presenting a certain sound as part of a complete word in English, you can simply pronounce the sound itself repeatedly. Your students can then say it along with you, focusing on the small nuances in the correct pronunciation and also ingraining the sound pattern into their minds. This is especially helpful when you have several students struggling with a specific sound distinction.
Minimal pairs are a great way to focus pronunciation on just one sound. If you are not familiar with linguistics, a minimal pair is two words that vary in only one sound. For example, rat and rate are minimal pairs because only the vowel sound differs between the two words. Additional minimal pairs are pin and pen, dim and dime, and bat and pat. You can use minimal pairs to help your students with their pronunciation by focusing on one particular sound. In addition to the pronunciation benefits, your students will also expand their vocabularies when you teach minimal pairs.
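For instance, written in the International Phonetic Alphabet (IPA), these pairs differ in just one sound (the transcriptions reflect one common American English pronunciation; dictionaries may vary slightly):
pin /pɪn/ - pen /pɛn/
dim /dɪm/ - dime /daɪm/
bat /bæt/ - pat /pæt/
rat /ræt/ - rate /reɪt/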
Record and replay
At times, your students may think they are using correct pronunciation when in fact they are saying something quite different. By using a device to record what your students are actually saying, you have empirical data to play back for each person. Encourage each student to listen to what he actually said rather than what he thinks he said. You may also want him to compare a recording of a native speaker against his recording of himself. In this way, your students will have a more objective understanding of their true pronunciation and be able to take steps to correct it.
Use a mirror
Giving your students a chance to view their own physical movements while they are working on their pronunciation can be of great value. You can always encourage your students to look at your mouth and face as you pronounce certain sounds, but they will also benefit from seeing what movements they are making as they speak. Sometimes, becoming aware of the physical movements involved in pronunciation is all your students will need to correct pronunciation issues of which they are unaware.
When your students are facing a pronunciation challenge, it could be that English spelling is adding to the mystery of the spoken word. Instead of spelling new vocabulary out on the white board, try using phonetic symbols to represent the sounds (rather than the alphabet to represent the spelling). If you were to use phonetic symbols, the word seat would be written /si:t/ and eat would be written /i:t/. You can find a list of the phonetic symbols on several websites or in introductory linguistics books. Once you teach your students the International Phonetic Alphabet, you can use those symbols any time you introduce new vocabulary to your students.
Show a vowel diagram
If you are using phonetic symbols to help you teach vowel pronunciation, a diagram of where each English vowel sound is produced can be eye opening for your students. Print copies to distribute in class or show your students where they can find this diagram online. When students know in which area of the mouth they should be making their sounds, they may have an easier time distinguishing between similar sounds because those sounds are produced in different areas of the mouth.
Surprisingly enough, singing can be a good way for your ESL students to practice their vowel pronunciation. Because singing requires a person to maintain vowel sounds over more than just a moment, it gives students a chance to focus in on the target sound and adjust the sound they are making.
Some pronunciation patterns are found consistently in students who share the same native language. Being aware of these patterns is helpful in addressing problems your students may not even know they have. You can find practice exercises that target specific pronunciation patterns, or you can write your own to target the specific needs of your class. Either way, making students aware of the pronunciation patterns typical of speakers of their native language can be the biggest help in eliminating those mispronunciations.
Whether you are teaching conversation or grammar, pronunciation will always come into play in any ESL class. If you use various methods to aid your students, their pronunciation will be more accurate and their attitudes will be more positive.
Always remind your students that learning English takes time and acquiring pronunciation is a process. Encourage them that being aware of problems in pronunciation is the first half of correcting them!
In ecosystems, every organism plays a part. From the smallest microbe to the fiercest predator to the tallest tree, each species contributes to making its community healthy. But this role isn't always obvious.
Take the colorful toucan and the palm tree Euterpe edulis in the Brazilian rainforest. Scientists have long understood that the palm's seeds are dispersed not only by these large birds but by smaller birds, too. The birds eat the seeds, fly off, and poop, spreading the palm seeds far and wide.
But the past 100 years have seen many changes in the rainforest. Since the 1800s, the forest has become more and more fragmented, mostly due to agricultural development such as the planting of coffee and sugar cane. By creating this patchwork of forest and farmland, humans have affected the rainforest in many ways.
According to a new study in the journal Science, the numbers of toucans have declined in the forest patches, and the palm trees in those areas have responded by producing much smaller seeds.
These palms generally produce different-sized seeds, and different-sized birds with different-sized beaks distribute the seeds evenly. But with the toucan and other large birds, such as large cotingas, absent from the ecosystem, only the small-seeded palm trees are reproducing. The loss of the birds is basically changing the evolutionary trajectory of these trees.
Researchers, led by Mauro Galetti from the Universidade Estadual Paulista in São Paulo, Brazil, collected more than 9,000 seeds from 22 different palm populations and used a combination of statistics, genetics, and evolutionary models to determine that forest fragmentation displaced many toucans. They also considered the influence of many environmental factors, such as climate, soil fertility, and forest cover, but none could account for the change in palm seed size in the fragmented forests over the years.
For palm tree seeds, size matters. “Small seeds are more vulnerable to desiccation and cannot withstand projected climate change,” explains Galetti. The rainforest is projected to be drier as the climate warms, and the smaller seeds are less equipped than larger seeds for survival in these conditions.
See, every organism plays an important part.
“Unfortunately, the effect we document in our work is probably not an isolated case,” says Galetti. “The pervasive, fast-paced extirpation of large vertebrates in their natural habitats is very likely causing unprecedented changes in the evolutionary trajectories of many tropical species.”
Image: Lindolfo Souto
Lake Superior Chippewa
The Lake Superior Chippewa (Anishinaabe: Gichigamiwininiwag) were a large historical band of Ojibwe (Anishinaabe) Indians living around Lake Superior; this territory is considered part of northern Michigan, Wisconsin, and Minnesota in the United States. They migrated into the area by the seventeenth century, encroaching on the Eastern Dakota people who historically occupied the area. The Ojibwe defeated the Eastern Dakota and had their last battle in 1745, after which the Dakota Sioux migrated west into the Great Plains. While sharing a common culture and Anishinaabe language, this group of Ojibwe was highly decentralized, with at least twelve independent bands in this region.
In the nineteenth century, the leaders of the bands negotiated together as the Lake Superior Chippewa with the United States government under a variety of treaties to protect their historic territories against encroachment by European-American settlers. The United States set up several reservations for bands in this area under these treaties, culminating in the treaty of 1854. This enabled the people to stay in this territory rather than remove west of the Mississippi River, as the government had attempted. Under the treaty, the bands with reservations have been federally recognized as independent tribes; several retain "Lake Superior Chippewa" in their formal names to indicate their shared culture.
Sometime earlier than 1650, the Ojibwe split into two groups near present-day Sault Ste. Marie, Michigan. This is believed to have been one of the stops which their prophets predicted in their migration; it was part of the path of the Anishinaabe, which they had traveled for centuries, in their passage west from the Atlantic Coast.
The Ojibwe who followed the south shore of Lake Superior found the final prophesied stopping place and "the food that grows on water" (wild rice) at Madeline Island. During the late 17th century, the Ojibwe at Madeline Island began to expand to other territory: they had population pressures, a desire for furs to trade, and increased factionalism caused by divisions over relations with French Jesuit missions. For a time they had an alliance with the Eastern Dakota.
Beginning about 1737, they competed for nearly 100 years with the Eastern Dakota and the Fox tribes in the interior of Wisconsin west and south of Lake Superior. The Ojibwe were technologically more advanced, having acquired guns through trading with the French, which for a time gave them an advantage. They eventually drove the Dakota Sioux out of most of northern Wisconsin and northeastern Minnesota into the western plains.
The Sioux (Lakota) were pushed to the west, where they eventually settled in the Great Plains of present-day Nebraska and the Dakotas. The Ojibwe successfully spread throughout the Great Lakes region, with colonizing bands settling along lakes and rivers throughout what would become northern Michigan, Wisconsin and Minnesota in the United States. La Pointe on Madeline Island remained the spiritual and commercial center of the nation. The island is listed on the National Register of Historic Places.
The Lake Superior Chippewa were numerous and contained many bands.
A separate sub-nation, known as the Biitan-akiing-enabijig (Border Sitters), was located between the Ojibwe of the Lake Superior watershed and other nations. The Biitan-akiing-enabijig were divided into three principal bands:
- Manoominikeshiinyag (the "Ricing Rails" or St. Croix Chippewa Indians, in the St. Croix River valley);
- Odaawaa-zaaga'iganiwininiwag (the "Ottawa Lake Men", around Lac Courte Oreilles); and
- Waaswaaganiwininiwag (the "Torch Men", around Lac du Flambeau). Numerous sub-bands also existed.
Treaties and reservations
- Bois Forte,
- Muskrat Portage,
- Red Lake,
- Pembina, and
- La Pointe bands.
The various villages had been politically independent and did not have a centralized tribal authority.
In 1850, President Zachary Taylor ordered the removal of the Lake Superior Chippewa west of the Mississippi River, as had already been forced on most other tribes in the east. During the course of the removal attempt, in what has become known as the Sandy Lake Tragedy, several hundred Chippewa died, including women and children. The La Pointe chief Kechewaishke (Buffalo) went to Washington, DC, to appeal to the government for relief. National outrage had been aroused by the many deaths of the Ojibwe, and the US ended attempts at Ojibwe removal.
The final treaty in 1854 established permanent reservations in Michigan at L'Anse, Lac Vieux Desert, and Ontonagon. In 1934, under the Indian Reorganization Act, the Keweenaw Bay Indian Community was defined as successor apparent to the L'Anse, Lac Vieux Desert, and Ontonagon bands. Government functions were centralized with it, although all three reservations were retained. In 1988 the Lac Vieux Desert Band of Chippewa Indians succeeded in gaining federal recognition as a separate tribe. Together with the Keweenaw Bay tribe, it is part of the Inter-Tribal Council of Michigan, which represents 11 of the 12 federally recognized tribes in Michigan. These include tribes of Potawatomi and Odawa peoples who, together with the Ojibwe, have made up the Council of Three Fires.
In Wisconsin, reservations were established at Red Cliff, Bad River, Lac Courte Oreilles, and Lac du Flambeau. The St. Croix and Sokaogon bands, left out of the 1854 treaty, did not obtain tribal lands or federal recognition until the 1930s after the Indian Reorganization Act.
In Minnesota, reservations were set up at Fond du Lac and Grand Portage. Other bands, such as the Bois Forte Band, continued independent negotiations with the US government and ended political affiliation with the Lake Superior Chippewa.
Today the bands are politically independent and are federally recognized as independent tribes with their own governments. They remain culturally closely connected to each other. They have engaged in common legal actions concerning treaty rights, such as fishing for walleye. Many bands include "Lake Superior Chippewa" in their official tribal names to indicate their historic and cultural affiliations (Red Cliff Band of Lake Superior Chippewa, Fond du Lac Band of Lake Superior Chippewa, etc.)
Historical bands and political successors apparent are the following:
- Keweenaw Bay Indian Community, merged from
- Lac Vieux Desert Band of Chippewa
- La Pointe Band of Lake Superior Chippewa (historical): descendants are
- Lac Courte Oreilles Band of Lake Superior Chippewa Indians
- Lac du Flambeau Band of Lake Superior Chippewa
- St. Croix Band of Lake Superior Chippewa (historical): descendants are
- Mille Lacs Band of Ojibwe, merged from
- St. Croix Chippewa Indians of Wisconsin
- Sokaogon Chippewa Community
- Fond du Lac Band of Lake Superior Chippewa
- Grand Portage Band
- Bois Forte Band of Chippewa, merged from
- Lake Vermilion Band of Lake Superior Chippewa (historical)
- Little Forks Band of Rainy River Saulteaux (historical)
- Nett Lake Band of Rainy River Saulteaux (historical)
In addition to the full political Successors Apparent, the Mille Lacs Band of Ojibwe (via the St. Croix Chippewa Indians of Minnesota), Leech Lake Band of Ojibwe (via Removable Fond du Lac Band of the Chippewa Indian Reservation), and the White Earth Band of Chippewa (via the Removable St. Croix Chippewa of Wisconsin of the Gull Lake Indian Reservation) in present-day Minnesota retain minor Successorship to the Lake Superior Chippewa. They do not exercise the Aboriginal Sovereign Powers derived from the Lake Superior Chippewa.
Label the Neuron
axon - the long extension of a neuron that carries nerve impulses away from the body of the cell
axon terminals - the hair-like ends of the axon
cell body - the central part of the neuron that contains the nucleus (also called the soma)
dendrites - the branching structures of a neuron that receive messages (attached to the cell body)
myelin sheath - the fatty substance that surrounds and protects some nerve fibers
node of Ranvier - one of the many gaps in the myelin sheath; this is where the action potential occurs during saltatory conduction along the axon
nucleus - the organelle in the cell body of the neuron that contains the genetic material of the cell
Schwann's cells - cells that produce myelin; they are located within the myelin sheath
Ways to Detect Radiation
Radiation cannot be detected by sight, hearing, touch, or smell. It can only be detected by instruments that warn people of its presence and let them monitor the level of radiation they have received. The most common radiation detectors are Geiger counters, scintillation counters, and film badges. These devices work because of what radiation does when it strikes atoms: it knocks electrons off the atoms, creating ions. These ions can then be detected electronically, and they can also be seen on a photographic plate, which darkens in the areas exposed to radiation. Radiation that has sufficient energy to knock electrons off the atoms of the struck substance and produce ions is called ionizing radiation. Ionizing radiation is the kind emitted by radioisotopes.
Geiger counters can detect alpha, beta, and gamma radiation. A Geiger counter has a Geiger-Mueller tube, a visual readout, and an audio readout. The Geiger-Mueller tube is a gas-filled metal tube that detects the radiation. The tube has a central wire electrode connected to a power supply. At one end of the tube there is a thin window made from a material that alpha, beta, and gamma rays can penetrate. When ionizing radiation passes through the window, the gas inside the tube becomes ionized, producing free electrons and positive ions. The ionized gas becomes an electrical conductor, and the freed electrons produce a brief pulse of current between the tube wall and the central wire. A pulse flows every time radiation ionizes the gas. The Geiger counter detects these bursts of current, beeps for each one, and uses the count to calculate the amount of radiation.
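To make the counting idea concrete, here is a minimal Python sketch (an illustration only, not a model of any real instrument) of the pulses a Geiger-Mueller tube might register. It assumes, purely for the example, that decays occur independently at a constant average rate, which is why counts from a steady source follow Poisson statistics:

import random

def simulate_geiger(rate_per_second, seconds, seed=42):
    # Each second is split into 1000 small time slices; in each slice
    # a pulse occurs with probability rate/1000, a reasonable
    # approximation when the rate is much smaller than 1000.
    rng = random.Random(seed)
    counts = []
    for _ in range(seconds):
        n = 0
        for _ in range(1000):
            if rng.random() < rate_per_second / 1000.0:
                n += 1
        counts.append(n)
    return counts

counts = simulate_geiger(rate_per_second=12, seconds=60)
print("counts per minute:", sum(counts))  # the unit a counter often reports

Averaged over many runs, the total hovers around 720 counts per minute (12 per second for 60 seconds), but any single minute fluctuates, which is exactly the statistical behavior a real counter shows.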
Radiation can also be detected with a scintillator, a substance that emits light when struck by an ionizing particle. Scintillation counters can be built to detect all types of ionizing radiation, which makes them the most versatile detectors.
A scintillation counter detects radiation using a transparent crystal, usually a phosphor, that emits bright flashes of light when ionizing radiation strikes it. Electrons excited by the radiation are promoted to a higher-energy state and emit a photon when they decay back to the ground state. Each emitted photon appears as a small flash of light called a scintillation. The number of flashes, and the energy of each, are detected electronically by the scintillation counter, converted into electronic pulses, and then measured and recorded.
A film badge consists of several layers of photographic film covered with black lightproof paper, all encased in a plastic or metal holder. The film badge is an important and useful radiation detector, especially for people who work near a radiation source, such as technicians and radiologists. These workers wear the badge the whole time they are at work. At intervals that depend on the type of work involved, the film is removed and developed. The darkening of the film shows the strength and type of radiation the wearer was exposed to, and the results are recorded. Film badges can only monitor the degree of radiation exposure; they cannot protect a person from it. Keeping a safe distance from the radiation source and using appropriate shielding are the only ways to protect oneself from radiation.
What exactly are gamma, alpha, and beta rays, in a brief definition, and why are they significant for a Geiger counter? In other words, does the Geiger counter differentiate between these types of rays? -Thanh
Alpha particles are fast-moving, high-energy helium nuclei (two protons and two neutrons), able to be stopped by a few inches of air or a piece of paper.
Beta particles are fast-moving, high-energy electrons, able to be stopped by a few feet of air or a few millimeters of plastic or aluminum.
Gamma rays are high-energy photons, able to be stopped by a few inches of lead.
I'm not sure about your last question, but I am almost certain that the geiger counter can't differentiate between these types of radiation.
Thanh, like Ben said, Geiger counters can not differentiate between the different types of radiation. It can only detect the amount of radiation. -Amal
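A rough calculation backs up the "few inches of lead" figure. Gamma shielding follows an exponential attenuation law, I = I0 * e^(-μx), where μ is the linear attenuation coefficient of the shield. Using a commonly quoted textbook value of μ ≈ 1.2 per cm for lead at a gamma energy of 662 keV (the cesium-137 line; the exact value depends on the energy), a 5 cm slab, about two inches, transmits e^(-6), or roughly 0.25% of the original intensity.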
Prentice Hall: Chemistry
War of the Pacific
Map showing changes of territory due to the War of the Pacific. Former maps (1879) show different lines of the border between Bolivia-Peru and Bolivia-Argentina.
The War of the Pacific (Spanish: Guerra del Pacífico) took place in western South America from 1879 to 1883, with Bolivia and Peru on one side and Chile on the other. The core issues were territorial claims and mining rights in the Atacama Desert; after the war ended with a Chilean victory, Chile gained a significant amount of land from Peru and Bolivia, making Bolivia a landlocked country.
Battles were fought in a variety of terrain, including the Pacific Ocean, the Atacama Desert, Peru's deserts, and mountainous regions in the Andes. For the first five months the war played out in a naval campaign, as Chile struggled to establish a sea-based resupply corridor for its forces in the world's driest desert. The war is a dramatic landmark in the history of South America and stands as one of the most significant military encounters of the late 19th century. It has attracted a considerable amount of scholarly interest.
In February 1878 Bolivia imposed a new tax on a Chilean mining company (the "Compañía de Salitres y Ferrocarril de Antofagasta", CSFA), despite Bolivia's express guarantee in the 1874 Boundary Treaty that it would not increase taxes on Chilean persons or industries for twenty-five years. Chile protested the tax increase and asked that the dispute be submitted to mediation, but Bolivia refused, considering it a matter for Bolivia's courts. Chile insisted and informed the Bolivian government that Chile would no longer consider itself bound by the 1874 Boundary Treaty if Bolivia did not suspend enforcement of the law. On February 14, 1879, when Bolivian authorities attempted to auction the confiscated property of the CSFA, Chilean armed forces occupied the port city of Antofagasta.
Peru, bound to Bolivia by the secret treaty of alliance of 1873, tried to mediate, but on 1 March 1879 Bolivia declared war on Chile and called on Peru to activate the alliance, while Chile demanded that Peru declare its neutrality. On April 5, after Peru refused this, Chile declared war on both nations. The following day, Peru responded by acknowledging the casus foederis.
Ronald Bruce St. John in "The Bolivia-Chile-Peru Dispute in the Atacama Desert" states:
Even though the 1873 treaty and the imposition of the 10 centavos tax proved to be the casus belli, there were deeper, more fundamental reasons for the outbreak of hostilities in 1879. On the one hand, there was the power, prestige, and relative stability of Chile compared to the economic deterioration and political discontinuity which characterised both Peru and Bolivia after independence. On the other, there was the ongoing competition for economical and political hegemony in the region, complicated by a deep antipathy between Peru and Chile. In this milieu, the vagueness of the boundaries between the three states, coupled with the discovery of valuable guano and nitrate deposits in the disputed territories, combined to produce a diplomatic conundrum of insurmountable proportions.
Afterwards, Chile's land campaign bested the Bolivian and Peruvian armies. Bolivia was defeated and withdrew after the Battle of Tacna on May 26, 1880. The Peruvian army was defeated in the Battle of Arica on June 7, 1880. The land campaign climaxed with the Chilean occupation of Lima in January 1881. Peruvian army remnants and irregulars then waged a guerrilla war against Chile. This Campaign of the Breña was a resistance movement, but it did not change the war's outcome. After Peru's defeat at the Battle of Huamachuco in July 1883, Chile and Peru signed the Treaty of Ancón on October 20, 1883. Bolivia signed a truce with Chile in 1884.
Chile acquired the Peruvian territory of Tarapacá, the disputed Bolivian department of Litoral (cutting Bolivia off from the sea), as well as temporary control over the Peruvian provinces of Tacna and Arica. In 1904, Chile and Bolivia signed the "Treaty of Peace and Friendship" establishing definite boundaries. The 1929 Tacna–Arica compromise gave Arica to Chile and Tacna to Peru.
It is also known as the Saltpetre War, as the Ten Cents War (in reference to the controversial ten-centavo tax imposed by the Bolivian government), or as the Second Pacific War. It should not be confused with the Saltpeter War in Mexico, a pre-Columbian conflict, nor with the "Guano War," as the Chincha Islands War is sometimes called.
Wanu (hispanicized guano) is a Quechua word for fertilizer. Potassium nitrate (ordinary saltpeter) and sodium nitrate (Chile saltpeter) are nitrogen-containing compounds collectively referred to as saltpeter, saltpetre, salitre, caliche, or nitrate. They are used as fertilizer and have other important uses as well. Hence the words oficina, or oficina salitrera, for saltpeter works.
The word Atacama has two meanings: it is the name of a Chilean region, later a province, south of the Atacama desert, and the name of the desert itself. The Atacama desert mostly coincides with the disputed Antofagasta province, also named Litoral in Bolivia.
The Atacama border dispute between Bolivia and Chile, concerning sovereignty over the coastal territories between approximately the parallels 23°S and 24°S, was just one of several long-running border conflicts in South America after independence in the nineteenth century, since uncertainty characterized the demarcation of frontiers under the principle of uti possidetis of 1810.
The dry climate of the Peruvian and Bolivian coasts had permitted the accumulation and preservation of vast amounts of high-quality guano deposits and sodium nitrate. By the 1840s, Europeans knew the value of guano and nitrate as fertilizer and saltpeter's role in explosives. The Atacama Desert became economically important: Bolivia, Chile, and Peru were located in the area of the largest reserves of a resource the world demanded. During the Chincha Islands War (1864–1866), Spain, under Queen Isabella II, attempted to exploit an incident involving Spanish citizens in Peru to re-establish Spanish influence over the guano-rich Chincha Islands.
Starting from the Chilean silver rush in the 1830s, the Atacama desert was prospected and populated by Chileans. Chilean and foreign enterprises in the region eventually extended their control to the Peruvian saltpeter works. In the Peruvian region of Tarapacá, Peruvian people constituted a minority behind both Chileans and Bolivians.
Boundary Treaty of 1866
Bolivia and Chile negotiated the "Boundary Treaty of 1866" ("Treaty of Mutual Benefits"). The treaty established the 24th parallel south, "from the littoral of the Pacific to the eastern limits of Chile", as their mutual boundary. The two countries also agreed to share the tax revenue from mineral exports from the territory between the 23rd and 25th parallels south. The bipartite tax collection caused discontent, and the treaty lasted only eight years.
Secret Alliance Treaty of 1873
In February 1873, Peru and Bolivia signed a treaty of alliance against Chile. Its last clause kept it secret as long as both parties considered its publication unnecessary; it was revealed in 1879. Argentina, involved in a long-standing dispute with Chile over the Strait of Magellan and Patagonia, was secretly invited to join the pact, and in September 1873 the Argentine Chamber of Deputies approved the treaty and $6,000,000 for war preparations. Eventually Argentina and Bolivia failed to agree over the territories of Tarija and Chaco, and Argentina feared a Chile-Brazil axis. The Argentine Senate postponed and later rejected the approval, but in 1875 and 1877, after border disputes with Chile flared up anew, Argentina again sought to join the treaty. At the onset of the war, in a renewed attempt, Peru offered Argentina the Chilean territories from 24°S to 27°S if Argentina adhered to the pact and fought the war.
Historians including G. Bulnes, Basadre, and Yrigoyen agree that the real intention of the treaty was to compel Chile to modify its borders according to the geopolitical interests of Argentina, Peru, and Bolivia while Chile was militarily weak, that is, before the arrival of the Chilean ironclads Cochrane and Blanco Encalada.
Chile was not informed about the pact but first learned of it, cursorily, through a leak in the Argentine Congress in September 1873, when Argentina's senate discussed the invitation to join the Peru-Bolivia alliance. Peruvian mediator Antonio de Lavalle stated in his memoirs that he did not learn of it until March 1879, and Hilarion Daza was only informed about the pact in December 1878.
Peruvian historian Basadre states that one of Peru's reasons for signing the treaty was to impede a Chile-Bolivia alliance against Peru that would have given Bolivia the region of Arica (the vast majority of Bolivian commerce went through the Peruvian port of Arica before the war) and transferred Antofagasta to Chile. Chile made such offers to Bolivia, to change allegiance, several times even during the war, and similar proposals came from the Bolivian side at least six times.
On December 26, 1874, the recently built ironclad Cochrane arrived in Valparaíso; it remained in Chile until the completion of the Blanco Encalada. Its arrival tipped the balance of South Pacific naval power towards Chile.
Historians disagree about how to interpret the treaty. Some Peruvian and Bolivian historians assess it as rightful, defensive, circumstantial, and known to Chile from the very onset. Conversely, some Chilean historians assess the treaty as aggressive towards Chile, causative of the war, designed to let Peru take control of the Bolivian salitreras, and hidden from Chile. The reasons for its secrecy, the invitation to Argentina to join the pact, and Peru's refusal to remain neutral are still being discussed.
Boundary Treaty of 1874
In 1874, Chile and Bolivia replaced the 1866 boundary treaty, keeping the boundary at the 24th parallel but granting Bolivia the authority to collect all tax revenue between the 23rd and 24th parallels south. As compensation for relinquishing its rights, Chile received a 25-year guarantee against tax increases on Chilean commercial interests and their exports. All disputes arising under the treaty would be settled by arbitration.
Causes of the war
Some historians add that Chile was devastated by the economic crisis of the 1870s and was looking for a replacement for its silver, copper, and wheat exports. It has been argued that the economic situation and the prospect of new wealth in nitrate were the true reasons for the Chilean elite to go to war against Peru and Bolivia. The holders of the Chilean nitrate companies, says Sater, "bulldozed" the Chilean president Aníbal Pinto into declaring war in order to protect the owners of the CSFA and later to seize Bolivia's and Peru's salitreras. (Several members of the Chilean government were shareholders of the CSFA, and it is believed that they bought the services of one of the country's newspapers to push their case.)
US historian Fredrick B. Pike calls this allegation "absurd" (p. 33), and W. Sater states that this interpretation overlooks certain important facts. The Chilean investors in Bolivia feared that Daza, the Bolivian dictator, would use the war, as he did, as an excuse to expropriate their investments. Among them were Melchor de Concha y Toro, the politically powerful president of Chile's Camara de Diputados, Jerónimo Urmeneta (p. 105), and Lorenzo Claro, a Chilean founder of the Banco de Bolivia and a prominent member of the National Party (Chile, 1857–1933). A Santiago newspaper claimed that Melchor de Concha y Toro offered president Pinto $2,000,000 to end the dispute and return to the 1874 boundary line. "In other words," writes W. Sater, "there were as many powerful interests opposed to helping the Compañía de Salitres as there were those seeking to aid the corporation." Also, B. Farcau objects to the argument: "On the other hand, the sorry state of the Chilean armed forces at the outbreak of the war, as will be discussed in the following chapter, hardly supports a theory of conscious, premeditated aggression."
Sater also cites other sources stating that the true causes of the conflict were not economic but geopolitical: a struggle for control of the south-eastern portion of the Pacific Ocean. Thus in 1836 the Peruvian government had tried to monopolize commerce in the South Pacific by rewarding ships that sailed directly to Callao to the detriment of Valparaíso, and Peru tried to impede the agreement reached between Spain and Chile that would free Chile's new warships, built and embargoed in Britain during the Chincha War. Sater cites Germany's minister in Chile, who argued that the war between Peru and Bolivia would "have erupted sooner or later, [and] on any pretext". He opined that Bolivia and Peru had developed a "bitter envy" against Chile and its material progress and good government. Frederik B. Pike states: "The fundamental cause for the eruption of hostilities was the mounting power and prestige and the economic and political stability of Chile, on one hand, and the weakness and the political and economic deterioration of Bolivia, on the other. ... The war — and its outcome — was as inevitable as the 1846—1848 conflict between the United States and Mexico. In both instances, a relatively well-governed, energetic, and economically expanding nation had been irresistibly tempted by neighboring territories that were underdeveloped, malgoverned, and sparsely occupied." (p. 128)
Another reason, states Sater, was Peru's desire to strengthen its nitrate monopoly, which required that the Bolivian and Chilean salitreras be brought under Peruvian control. As unenviable as Chile's situation was, that of Peru was much worse. The 1870s were for Peru's economy "a decade of crisis and change". Nitrate extraction rose while guano exports, the source of substantial revenue for the Peruvian state, declined from 575,000 tons in 1869 to less than 350,000 tons in 1873, and the Chincha Islands and other guano islands were depleted or nearly so.
William Edmundson states in A History of the British Presence in Chile: "Peru had its own reasons to enter the dispute. Rory Miller (1993) argues that the depletion of guano resources and poor management of the economy in Peru had provoked a crisis. This caused Peru to default on its external debt in 1876, ... In that year the Peruvian government decided to procure a loan of seven million pounds, of which four million pounds were earmarked to purchase privately owned oficinas [salitreras] ... and Peru defaulted again in 1877".
To protect its guano revenue, Peru created a monopoly on nitrate commerce in 1875. The aims of the monopoly were to raise prices, curb exports, and thereby impede competition, but most of the larger nitrate firms opposed it. When this proved unsuccessful, in 1876 Peru began to expropriate nitrate producers and to buy nitrate concessions such as that of Henry Meiggs in Bolivia ("Toco", south of the Loa River). But the CSFA was too expensive and could not be purchased. As Peruvian historian Alejandro Reyes states, the monopoly also required control over the Bolivian salitreras, and this internationalized the conflict, because they were in the hands of Chilean and British capitalists. Later, when the Chilean company was to be auctioned on 14 February 1879 in Antofagasta, it was expected that the Peruvian consul would be the highest bidder.
But some sources, says Sater, see the declarations of war between Chile and Peru as a product of popular domestic forces; that is, each president had either to go to war or to give way. Sater cites the British minister in Lima, Spencer St. John: "the rival parties may try to make political capital out of jealousy for the national honor, and His Excellency [Peruvian President Prado] may be forced to give way to the popular sentiment." Pinto was under similar pressures. For Bruce Farcau, this seems to be the main cause of the outbreak of the war, as he states: "The argument that the attitude of the peoples of the region was just ripe for war seems best to fit the bill."
[Portraits: Miguel Iglesias, later President of Peru, whose son Alejandro was killed during the Battle of Miraflores; and Juan José Latorre, who participated in the shelling of Callao, where his brother Elías Latorre defended the forts of the harbor.]
The Ten Cents Tax
- The license of 27 November 1873
From 1866, the Chilean entrepreneurs José Santos Ossa and Francisco Puelma exploited deposits of sodium nitrate in Bolivian territory (the salitreras "Las Salinas" and "Carmen Alto", 122 kilometres (76 mi) and 128 kilometres (80 mi) from Antofagasta respectively), secured by concessions from the then President of Bolivia, Mariano Melgarejo. In 1868 British capital joined the venture, forming the Compañía Melbourne Clark. The company obtained a license to construct a railroad from Antofagasta to Salinas and was renamed the Compañía de Salitres y Ferrocarril de Antofagasta (CSFA), with a 34% minority stake held by Antony Gibbs & Sons of London, which was also a shareholder in salitreras in Peru. The company was established in Valparaíso, Chile, and its shareholders included a number of leading Chilean politicians. In 1871 a new Bolivian government invalidated all contracts signed by Melgarejo, but on 22 November 1872 a Bolivian decree allowed the government to renegotiate the contracts, which is what the company and the Bolivian government did. On 27 November 1873 the CSFA obtained from the Bolivian executive a license to exploit saltpeter duty-free for 15 years, but it was disputed whether the decree needed the authorization of the Bolivian Congress.[Notes 1] Some lawyers placed emphasis on con cargo a dar cuenta a la próxima legislatura (Spanish for "subject to the approval of the next legislature"), others on sólo en los casos de no avenimiento (Spanish for "only in cases of disagreement").
- Article 4 of the 1874 Boundary Treaty
From 1874 the Chile–Bolivia Boundary Treaty of 1874 was in force; its Article 4 explicitly forbade further taxes on Chilean enterprises for 25 years:
The duties of exportation that may be levied on minerals exploited in the zone referred to in the preceding articles shall not exceed those now in force, and Chilean citizens, industry, and capital shall not be subjected to any other contributions whatever except those now existing. The stipulations in this article shall last for twenty-five years.— Article 4, Chile-Bolivia Boundary Treaty of 1874
- The Peruvian Monopoly
In 1875 Peru's government expropriated the salitreras of Tarapacá in order to secure revenue from guano and nitrate by means of a monopoly, and in 1876 Antony Gibbs & Sons became consignee of the nitrate trade for the Peruvian government. President Mariano Ignacio Prado was "determined to complete the monopoly" and in 1876 Peru bought the nitrate licenses for "El Toco" auctioned by Bolivian decree of 13 January 1876.:69 But the Chilean CSFA remained the most serious competitor and clearly weakened Peru's monopoly. President Pardo, Prado's predecessor, had urged Gibbs to secure the monopoly by limiting CSFA's output, and indeed Henry Gibbs had warned the CSFA's board of directors, in a letter of 16 April 1878, that CSFA's refusal to limit output would bring administrative trouble with Peru and Bolivia "as long and as intensive as it is made more and more to the interest of a neighboring Government that they should be so".:64
- The tax and the Chilean refusal
In 1875 the city of Antofagasta had attempted to impose a 3 cents tax on the CSFA, but the Bolivian State Council (Consejo de Estado), headed by Serapio Reyes Ortiz, later Foreign Affairs Minister during the crisis, rejected the tax because it violated the license of 1873 and the Boundary Treaty of 1874.
On 14 February 1878 the National Congress of Bolivia and a National Constituent Assembly approved the 1873 license on the condition that the company pay a 10 cents per quintal tax. The company objected, citing the 1874 treaty, that the increased payment was illegal, and demanded intervention from the Chilean government.
The CSFA's board of directors perceived the tax as a Peruvian move to displace Chileans from nitrate production, as had occurred in Tarapacá in 1875 when the Peruvian government expropriated the salitreras.
Having surrendered its claim to the disputed territories in return for a Bolivian promise not to increase taxes, Chile responded that the treaty did not allow for such a tax hike. Bolivia suspended the tax in April 1878. In November Chile proposed a mediation and cautioned that Daza's refusal to cancel the tax would force Chile to declare the 1874 treaty null. In December 1878 Bolivia, counting on its military alliance with Peru, challenged Chile, declared the tax unrelated to the treaty and the CSFA's claim a matter for Bolivian courts, and revived the tax. When the company refused to pay, Bolivia confiscated its property on 11 February and threatened to sell it on 14 February to recover the tax debt.
Occupation of Antofagasta
In December 1878, Chile dispatched a warship to the area. On 6 February, the Bolivian government nullified the CSFA's exploitation license and confiscated its property. The news reached Valparaíso on 11 February, whereupon the Chilean government decided on the occupation of Antofagasta. On the day of the planned auction, 200 Chilean soldiers arrived by ship at the port city of Antofagasta and seized it without resistance. The occupying forces received widespread support from the local population, 93–95% of whom were Chilean. On February 18, while in Antofagasta, Chilean colonel Emilio Sotomayor intercepted a letter from Bolivian president Hilarión Daza to Bolivian prefect-colonel Severino Zapata. The letter mentioned Daza's worry about Chilean interference with Bolivia's confiscation of saltpeter companies, and referred to a previously secret treaty that Bolivia would, if necessary, demand that Peru honor should Chile declare war.
Peruvian mediation and Bolivian declaration of war
On 22 February, Peru sent a diplomatic team headed by José Antonio de Lavalle to Santiago to act as mediator between the Chilean and Bolivian governments; meanwhile Peru ordered its fleet and army to prepare for war. Lavalle arrived in Valparaíso on 4 March. On February 27, Daza had issued a public manifesto informing Bolivians of the occupation of Antofagasta and calling for patriotic support. The same day the Bolivian legislature authorized a formal declaration of war upon Chile, although it was not immediately announced. On March 1, Daza instead issued a decree which prohibited all commerce and communications with Chile "while the state-of-war provoked upon Bolivia lasts", gave Chileans ten days to leave Bolivian territory unless gravely ill or handicapped, embargoed Chilean furniture, property, and mining produce, allowed Chilean mining companies to continue operating under a government-appointed administrator, and declared all embargoes temporary "unless the hostilities exercised by Chilean forces requires an energetic retaliation from Bolivia."
In Santiago, Lavalle asked for Chile's withdrawal from Antofagasta so that the province could be transferred to a tripartite administration (Bolivia, Chile and Peru), without any Bolivian guarantee to end the embargo or cancel the new tax.
Then, on March 14, in a meeting with foreign powers in Lima, Bolivia announced that a state of war existed with Chile. This declaration was intended to impede further Chilean arms purchases in Europe and to scuttle the Peruvian mediation in Chile. Bolivia called on Peru to activate the alliance treaty, arguing that Chile's invasion constituted a casus foederis.
Also on March 14, Alejandro Fierro, Chile's minister of foreign affairs, sent a telegram to the Chilean representative in Lima, Joaquin Godoy, requesting immediate neutrality from the Peruvian government. On March 17, Godoy formally presented the Chilean proposal in a meeting with Peruvian President Prado.:147ff
On March 21, Godoy telegraphed the Chilean government about the secret Peru-Bolivia treaty, which had been revealed to him by Peruvian President Prado.:154ff
On March 23, while on their way to occupy Calama, 554 Chilean troops and cavalry defeated 135 Bolivian soldiers and civilians dug in at two destroyed bridges next to the Topáter ford. This Battle of Topáter was the first combat of the war.
When the Chilean government asked Lavalle directly and officially whether a defensive alliance existed that committed Peru to assist Bolivia in case of a war with Chile, and whether Lima planned to honor this agreement, Lavalle could prevaricate no longer: he answered yes to both. Chilean president Pinto sought and received legislative approval to declare war, which he did on 5 April 1879. Peru responded on April 6, when President Prado declared the casus foederis.
Struggle for sea control
Given the few roads and railroad lines, the nearly waterless and largely unpopulated Atacama Desert was difficult to occupy. From the beginning naval superiority was critical. Bolivia had no navy, so on March 26, 1879 Hilarión Daza formally offered letters of marque to any ships willing to fight for Bolivia. The Armada de Chile and Marina de Guerra del Perú fought the naval battles.
Chile blockaded the Peruvian port of Iquique early on, beginning April 5. In the Battle of Iquique (May 21, 1879), the Peruvian ironclad Huáscar engaged and sank the wooden Esmeralda; meanwhile, in the Battle of Punta Gruesa, the Peruvian Independencia chased the schooner Covadonga until the heavier Independencia struck a submerged rock and sank in the shallow waters near Punta Gruesa. Peru thus broke the blockade of Iquique, while Chile lost only the old Esmeralda. Nevertheless, the loss of the Independencia cost Peru 40% of its naval offensive power and made a strong impression upon military leaders in Argentina, making an Argentine intervention in the war far more remote.
Despite being outnumbered, the Peruvian monitor Huáscar held off the Chilean navy for six consecutive months and upheld Peruvian morale in the early stages of the conflict.:108
The capture of the steamship Rímac on July 23, 1879, while it was carrying a cavalry regiment (the Carabineros de Yungay), was the Chilean army's largest loss to that point. The loss led Admiral Juan Williams Rebolledo, chief of the Chilean Navy, to resign on 17 August. Commodore Galvarino Riveros Cárdenas replaced Williams and devised a plan to catch the Huáscar.
Meanwhile, the Peruvian navy undertook some other actions, notably in August 1879 the unsuccessful raid of the Unión on Punta Arenas, in the Strait of Magellan, in an attempt to capture the British ship Gleneg, which was transporting weapons and supplies for Chile.
|Ship||Displacement (tons)||Horsepower||Speed (knots)||Armor (inches)||Main armament||Year|
|Cochrane||3,560||3,000||9–12.8||up to 9||6 × 9-inch||1874|
|Blanco Encalada||3,560||3,000||9–12.8||up to 9||6 × 9-inch||1874|
The Battle of Angamos, on October 8, 1879, proved decisive: Peru was reduced almost entirely to land forces. In this battle, the Chilean Navy managed to capture the Huáscar after several hours of fierce fighting, despite her remaining crew's attempts to scuttle her.
After Angamos, and despite the loss of its two capital ships, Peru, with simple and ingenious tricks, succeeded in sinking two important Chilean ships, the Loa (July 1880) and the Covadonga (August 1880). But its remaining units were locked in its main port during the long blockade of Callao.
On the other hand, the Chilean Navy captured the ship Pilcomayo in November 1879 and the torpedo boat Alay in December 1880.
After the battle of Angamos, once Chile had achieved naval supremacy, the government had to decide where to strike. The options were Tarapacá, Moquegua, or Lima directly. Because of its proximity to Chile and the prospect of capturing the Peruvian salitreras, Chile decided to occupy the Peruvian province of Tarapacá first.
Campaign of Tarapacá
The Campaign of Tarapacá began on November 2, 1879, when nine steam transporters, escorted by half of the Chilean Navy, carried 9,500 men and more than 850 animals to Pisagua, some 500 kilometres (310 mi) north of Antofagasta. After neutralizing the coastal batteries, the Chileans landed and attacked the beach defenses in Pisagua. From Pisagua the Chileans marched south towards Iquique and on November 19, 1879, defeated the allied troops gathered at Agua Santa in the Battle of San Francisco and Dolores. Bolivian forces retreated to Oruro and the Peruvians fell back to Tiliviche, while the Chilean army captured Iquique.
Peruvian forces marched north to meet Bolivian troops led by Daza coming south from Arica, but at Camarones Daza decided to turn back towards Arica.
A detachment of Chilean soldiers, with cavalry and artillery, was sent to face the Peruvian forces in Tarapacá (a little town bearing the name of the province). The two sides clashed on November 27 in the Battle of Tarapacá, where the Chilean forces were defeated; but the Peruvian forces, unable to hold the territory, retreated north to Arica. Bruce W. Farcau comments: "The province of Tarapacá was lost along with a population of 200,000, nearly one tenth of the Peruvian total, and an annual gross income of £28 million in nitrate production, virtually all of the country's export earnings." The victory afforded Santiago an economic boon and a potential diplomatic asset.
- Fall of Prado in Peru and Daza in Bolivia
The Peruvian government was confronted with widespread rioting in Lima because of its failures. On December 18, 1879, Peruvian president Prado sailed from Callao to Panama, allegedly with six million pesos in gold, charged with overseeing the purchase of new arms and warships for the nation. In a statement to the Peruvian newspaper El Comercio, he turned over command of the country to vice president Luis La Puerta de Mendoza, but a coup d'état led by Nicolás de Piérola overthrew the government and took power on December 23, 1879.
In Bolivia, after receiving a telegram on December 27, informing him that the army had overthrown him, Daza departed to Europe with $500,000. General Narciso Campero became Bolivia's new president.
Campaign of Tacna and Arica
Meanwhile, Chile continued its advance in the Tacna and Arica Campaign. On November 28, Chile declared a formal blockade of Arica. On December 31, a Chilean force of 600 men carried out an amphibious raid at Ilo, north of Tacna, as a reconnaissance in force, and withdrew the same day.
On February 24, 1880, approximately 11,000 men in nineteen ships, protected by the Blanco Encalada, the Toro, the Magallanes and two torpedo boats, sailed from Pisagua and arrived off Punta Coles, near Pacocha, Ilo, on February 26. The landing took several days and met no resistance. The Peruvian commander, Lizardo Montero, refused to try to drive the Chileans from the beachhead, as the Chileans had expected. On March 22, 3,642 Chilean troops defeated 1,300 Peruvian troops in the Battle of Los Ángeles, cutting any direct Peruvian supply line from Lima to Arica or Tacna (supply was possible only by the long way through Bolivia). After the Battle of Los Ángeles, only three allied positions remained in southern Peru: General Leyva's 2nd Army at Arequipa (including some survivors from Los Ángeles), Bolognesi's 7th and 8th Divisions at Arica, and the 1st Army at Tacna. These forces were under Campero's direct command, but they were unable to concentrate troops or even to move from their garrisons. After crossing 40 miles (64 km) of desert, on May 26 the Chilean army (14,147 men) destroyed the allied army of 5,150 Bolivians and 8,500 Peruvians in the Battle of Tacna. The need for a port near the army to supply and reinforce the troops and evacuate the wounded compelled the Chilean command to concentrate on the remaining Peruvian stronghold of Arica. On June 7, after the Battle of Arica, the last Peruvian bastion in the Tacna Department fell. After the campaign of Tacna and Arica, the Peruvian and Bolivian regular armies ceased to exist, and Bolivia effectively left the war.
To show Peru the futility of further resistance, on September 4, 1880, the Chilean government dispatched an expedition of 2,200 men to northern Peru under the command of Captain Patricio Lynch to collect war taxes from wealthy landowners. On September 10 Lynch's expedition reached Chimbote and levied taxes of $100,000 in Chimbote, $10,000 in Paita, $20,000 in Chiclayo, and $4,000 in Lambayeque, in local currencies; those who did not comply had their property impounded or destroyed. On September 11, the Peruvian government decreed that payment was an act of treason, but most landowners paid anyway. Lynch's mission, which infuriated Lima, was permitted under the international law of the time: Chilean historian Barros Arana cites article 544 of Johann Caspar Bluntschli's Le droit international codifié, and Villalobos cites Andrés Bello's Principios del derecho internacional.
- Lackawanna Conference
On October 22, 1880, delegates of Peru, Chile, and Bolivia held a five-day conference aboard the USS Lackawanna in Arica, arranged by the United States ministers plenipotentiary to Chile and Peru. The Lackawanna Conference, also called the Arica Conference, attempted to develop a peace settlement.
Chile demanded the Peruvian Tarapacá province and the Bolivian Atacama, an indemnity of $20,000,000 gold pesos, the restoration of property taken from Chilean citizens, the return of the Rímac, the abrogation of the treaty between Peru and Bolivia, and Peru's formal commitment not to mount artillery batteries in Arica's harbor, with Arica limited to commercial use only. Chile planned to retain the territories of Moquegua, Tacna, and Arica until all peace treaty conditions were satisfied. Although willing to accept a negotiated settlement, Peru and Bolivia insisted that Chile withdraw its forces from all occupied lands as a precondition for discussing peace. Having captured this territory at great expense, Chile declined the terms and the negotiations failed.
Campaign of Lima
After the campaign of Tacna and Arica, the southern departments of Peru (Tacna, Arica and Tarapacá) were in Chilean hands, and the Lynch expedition had proved that the army of Peru no longer possessed the skilled military manpower to defend the country. But nothing could convince the Peruvian government to sue for peace. The defeated allies failed to recognize their situation: despite the empty Bolivian treasury, on June 16, 1880, the Bolivian National Assembly voted to continue the war, and on June 11, 1880, a document had been signed in Peru declaring the creation of the United States of Peru-Bolivia. In fact, Bolivia could no longer fight and withdrew its army from the war. But Piérola continued the struggle. W. Sater states that had Piérola sued for peace in June 1880, he would have saved countless Peruvian lives and the nation's treasure.
The Chilean government struggled to satisfy public demands to end the war and secure the peace. This situation forced it to plan the occupation of Lima.
- Landings at Pisco, Chilca, Curayaco and Lurín
Lacking the ships to transport all the troops at once from Arica, the Chileans decided to land first a division and then the rest of the army in stages.
On 19 November, 8,800 men, twenty cannon and their supplies reached Pisco, approximately 320 kilometres (200 mi) south of Lima. A party of 400 men landed near the port and learned that a 3,000-man garrison defended Pisco. To avoid a fight during the landing, a Chilean vanguard was put ashore at Paracas, ten miles to the south, which captured Pisco; on November 20 the rest of the troops landed at Pisco and later occupied various coastal cities as well as Ica.
On 2 December, 3,500 men and 416 horses disembarked in Pisco.
On 15 December, 14,000 men, 2,400 horses and mules, plus supplies departed Arica for the north. Baquedano, the Chilean commander, decided that only one brigade, Lynch's, would march the 55 miles to Chilca; all other Chilean forces would re-embark at Pisco for Chilca, only 45 kilometres (28 mi) from Lima. They disembarked at Curayaco, slightly north of Chilca, on 22 December 1880. The artillery was disembarked at Lurín.
- Battles of Chorrillos and Miraflores
The Chilean forces confronted virtually the entire civilian population of Lima. The irregulars defended prepared positions, supported by a collection of old coastal guns, located a few miles from the capital's arsenal and supply depots. President Piérola had ordered the construction of two parallel defense lines at Chorrillos and Miraflores, a few kilometers south of Lima. The line of Chorrillos was 10 miles (16 km) long, running from Marcavilca hill to La Chira through the steep terrain of San Juan and Santa Teresa. The Peruvian force was the Army of Lima, approximately 25,000 to 32,000 men strong.
On January 13, 1881, Chilean troops charged the 22,000–36,000 Peruvian defenders at Chorrillos. In the Battle of Chorrillos, the Chileans inflicted a harsh defeat and eliminated Lima's first defensive line. Two days later, the second line of defense was also broken in the Battle of Miraflores.
- Occupation of Lima
The Chilean army entered Lima on January 17, 1881.
- Domingo Santa Maria elected President of Chile
- Boundary Treaty of 23 July 1881 between Chile and Argentina
Argentina had declared itself neutral at the onset of the war, but it allowed the transport of weapons to the allies across Argentine territory, pressed the USA and the European powers to halt the Chilean advance, pleaded for monetary indemnification instead of cession of territories to Chile, and its public opinion drifted strongly in favor of Peru and Bolivia. Moreover, Peru and Bolivia hoped that Argentina might change its stance and enter the war against Chile.
War in the Peruvian Sierra
- Collapse of the Peruvian State
After the confrontations at Chorrillos and Miraflores, Peruvian dictator Piérola refused to negotiate with the Chileans and escaped to the central Andes to try to govern from the rear, but he soon lost the representation of the Peruvian state. (He left Peru in December 1881.)
The collapse of national order brought on domestic chaos and violence, much of it motivated by class or racial divisions. Chinese and black laborers took the opportunity to assault haciendas and the property of the rich in protest at the mistreatment they had suffered in previous years, Lima's masses attacked Chinese grocery stores, and Indian peasants took over highland haciendas.:390
The new Chilean administration continued to push for an end to the costly war. But contrary to expectations in Chile, neither Lima's capture nor the imposition of heavy taxes led Peru to sue for peace. Instead, Peruvian caudillos advocated waging a defensive war of attrition to wear down Chile's power until it renounced its demand for territory.
On 22 February 1881 the pre-Piérola Congress (permitted by Chile) reinstated the 1860 Constitution and chose Francisco García Calderón as provisional president, but he, backed by the US minister in Lima, refused the cession of territories to Chile. He was deposed by the Chileans in September 1881; before his deportation to Chile he appointed Cáceres as successor, and Cáceres transferred the office to Lizardo Montero.
The occupation commanders, first Manuel Baquedano, then Pedro Lagos and finally Patricio Lynch, sited their military headquarters in the Government Palace in Lima. Sixty percent of the 15,000 Chilean occupation troops were stationed in Lima, Callao and Chorrillos.
The Peruvian caudillos organized a resistance which would come to be known as the Campaign of the Breña or Sierra, a widespread, prolonged, brutal and eventually futile guerrilla campaign. They harassed the Chilean troops and their logistics to such a point that Lynch had to send expeditions to the valleys in the Andes.
In February 1881, Chilean forces under Lt. Col. Ambrosio Letelier started the first expedition, with 700 men, to defeat the last guerrilla bands from Huánuco (April 30) to Junín. After many losses the expedition achieved very little and returned to Lima in early July, where Letelier and his officers were court-martialed for diverting money into their own pockets.
1882 Sierra Campaign
To annihilate the guerrillas in the Mantaro Valley, in January 1882 Lynch ordered an offensive with 5,000 men under the command of Gana and Del Canto, first towards Tarma and then southeast towards Huancayo, reaching Izcuchaca. Lynch's army suffered enormous hardships, including cold, snow, and mountain sickness. On July 9, 1882, the emblematic Battle of La Concepción was fought. The Chileans had to pull back, having lost 534 soldiers: 154 in combat, 277 to disease, and 103 to desertion.
Because García Calderón refused to relinquish Peruvian control over the Tarapacá Region, he was arrested. Before leaving Peru for Chile, he named Admiral Lizardo Montero as his successor. At the same time President Piérola stepped aside and supported Avelino Cáceres for the presidency. Cáceres refused to serve and supported Lizardo Montero instead. Montero moved to Arequipa, and in this way García Calderón's arrest unified the forces of Piérola and Cáceres.
1883 Sierra Campaign
On April 1, 1882, Miguel Iglesias, Defence Minister under Piérola, became convinced that the war had to be brought to an end or Peru would be completely devastated. He issued a manifesto, the Grito de Montán, calling for peace, and in December 1882 convened a convention of representatives of the seven northern departments, where he was elected "Regenerating President". To support Iglesias against Montero, on April 6, 1883, Patricio Lynch started a new offensive to drive the guerrillas from central Peru and destroy Cáceres' army. The Chilean troops pursued Cáceres northwest through narrow mountain passes until July 10, 1883, winning the definitive Battle of Huamachuco, the final Peruvian defeat.
Last days of the war
Chile and Miguel Iglesias' Government signed the Peace Treaty of Ancón on October 20, 1883, that put an end to the war and ceded Tarapacá to Chile.
Lizardo Montero tried to resist in Arequipa with a force of 4,000 men, but when Chile's 3,000 fighters arrived from Mollendo, Moquegua and Ayacucho and began the assault on Arequipa, the Peruvian troops mutinied against Montero and allowed the Chileans to occupy the city on 29 October 1883. Montero opted for asylum in Bolivia. The occupation of Ayacucho by Chilean Colonel Urriola on 1 October lasted only 40 days; then Urriola withdrew to Lima and Ayacucho was occupied by Cáceres' new army of 500 men. Cáceres continued to refuse the cession of territories to Chile.
But the basis of Cáceres' war, the increasingly powerful Indian insurrection against the Chileans, had changed the nature of the conflict. Indian guerrillas fought "white men from all parties", looted towns and seized the land of white owners. In June 1884, Cáceres accepted the Treaty of Ancón "as an accomplished fact", but he continued to fight Iglesias.
About Cáceres' true reasons for his change of mind, Florencia Mallon says:
- Yet long before the civil war was over, it became clear to the hero of la Breña that, in order to build an alliance that would carry him to the presidential palace, he had to mend fences with the "hacendados" as a class, including those who had collaborated with the Chileans. The only way to do so was to give the "hacendados" what they wanted and repress the very guerrillas who had made the Breña campaign possible in the first place.
On October 29, 1883, the Chilean occupation of Lima ended, and on 4 August 1884 Lynch and the rest of the Chilean Expeditionary Forces embarked at Callao for Chile.:473
Peace treaty with Peru
On October 20, 1883, hostilities between Chile and Peru formally came to an end under the Treaty of Ancón. Under the treaty's terms, Peru formally ceded the province of Tarapacá to Chile. Chile was also to occupy the provinces of Tacna and Arica for 10 years, after which a plebiscite was to be held to determine their nationality. For decades thereafter, the two countries failed to agree on the terms of the plebiscite. Finally, in 1929, through US mediation under President Herbert Hoover, the Treaty of Lima was signed, by which Chile kept Arica and Peru re-acquired Tacna.
Peace treaty with Bolivia
In 1884, Bolivia signed a truce, the Treaty of Valparaiso, by which it relinquished the entire Bolivian coast, the province of Antofagasta, with its nitrate, copper and other mineral deposits. The Treaty of Peace and Friendship (1904) made this arrangement permanent. In return, Chile agreed to build the Arica–La Paz railway, connecting the capital city of La Paz, Bolivia, with the port of Arica, and guaranteed freedom of transit for Bolivian commerce through Chilean ports and territory.
Military strength comparison
As the war began, the Peruvian Army numbered 5,241 men of all ranks, organized in seven infantry battalions, three squadrons of cavalry and two regiments of artillery. The most common rifles in the army were the French Chassepot and the Minié. The artillery, with a total of twenty-eight pieces, was composed mostly of British-made Blakely cannons, plus four machine guns. Much of the artillery dated from 1866, having been bought for the Chincha Islands War against Spain. The mounts used by the cavalry were small and inferior to the Chileans'.
The Bolivian Army numbered no more than 2,175 soldiers, divided into three infantry regiments, two cavalry squadrons, and two sections of artillery. The Colorados Battalion, President Daza's personal guard, was armed with Remington Rolling Block rifles, but the remainder carried odds and ends, including flintlock muskets. The artillery had three rifled pounders and four machine guns, while the cavalry rode mules, given a shortage of good horses.
The regular Chilean Army was well equipped, with 2,694 soldiers. The regular infantry was armed with the modern Belgian Comblain rifle, of which Chile had a stock of some 13,000. Chile also had Gras, Minie, Remington and Beaumont rifles which mostly fired the same caliber cartridge (11 mm). The artillery had seventy-five artillery pieces, most of which were of Krupp and Limache manufacture, and six machine guns. The cavalry used French sabers and Spencer and Winchester carbines.
Control of the sea was Chile's key to an inevitably difficult desert war: supply by sea, including water, food, ammunition, horses, fodder and reinforcements, was quicker and easier than marching supplies through the desert or across the Bolivian high plateau. While the Chilean Navy started an economic and military blockade of the Allies' ports, Peru took the initiative and used its smaller navy as a raiding force. The raids delayed the ground invasion for six months, and forced Chile to shift its fleet from blockading to hunting and capturing the Huáscar. After achieving naval supremacy, sea-mobile forces proved to be an advantage for desert warfare on the long coastline. Peruvian and Bolivian defenders found themselves hundreds of kilometers from home while Chilean forces were usually just a few kilometers from the sea.
The Chileans employed an early form of amphibious warfare that saw the coordination of army, navy and specialized units. The first amphibious assault of the war took place when 2,100 Chilean troops successfully took Pisagua on 2 November 1879. Chilean Navy ships bombarded beach defenses for several hours at dawn, followed by open, oared boats landing army infantry and sapper units into waist-deep water under enemy fire. An outnumbered first landing wave fought at the beach; the second and third waves in the following hours were able to overcome resistance and move inland. By the end of the day, an expeditionary army of 10,000 had disembarked at the captured port. In 1881 Chilean ships transported approximately 30,000 men, along with their mounts and equipment, 500 miles (800 km) in order to attack Lima. Chilean commanders used purpose-built, flat-bottomed landing craft that delivered troops in shallow water closer to the beach, possibly the first purpose-built amphibious landing craft in history: "These 36 shallow draft, flat-bottomed boats would be able to land three thousand men and twelve guns in a single wave".
Chile's military strategy emphasized preemption, offensive action, and combined arms. It was the first to mobilize and deploy its forces, taking the war immediately to Bolivian and Peruvian territory. It adopted a combined arms strategy, employing naval and ground forces to rout its allied foes and capture enemy territory.:163 It landed ground forces in enemy territory to raid, landed in strength to split and drive out defenders, and then garrisoned the territory as the fighting moved north. The Chileans also received the support of Chinese coolie immigrants who had been enslaved by Peruvians and who joined the Chilean Army during the campaign of Lima and in the raids on the northern Peruvian cities.
Peru and Bolivia fought a defensive war maneuvering through long overland distances and relying where possible on land or coastal fortifications with gun batteries and minefields. Coastal railways reached to central Peru and telegraph lines provided a direct line to the government in Lima.
The occupation of Peru between 1881 and 1884 took a different form. The war theater was the Peruvian Sierra, where the remains of the Peruvian Army had easy access to population, resource and supply centers far from the sea, supporting an indefinite war of attrition. The occupying Chilean force was split into small garrisons across the theater and could devote only part of its strength to hunting down dispersed pockets of resistance and the last Peruvian forces in the Sierra. After a costly occupation and prolonged counterinsurgency campaign, Chile sought a diplomatic exit. Rifts within Peruvian society and the Peruvian defeat in the Battle of Huamachuco resulted in the peace treaty that ended the occupation.
Both sides employed late 19th-century military technology such as breech-loading rifles and cannons, remote-controlled land mines, armor-piercing shells, naval torpedoes, torpedo boats, and purpose-built landing craft. The second generation of ironclads (i.e. designed after the Battle of Hampton Roads) were employed in battle for the first time. That was significant for a conflict where no major power was involved, and attracted British, French, and U.S. observers. During the war, Peru developed the Toro Submarino ("Submarine Bull"). She never saw action, and was scuttled at the end to prevent her capture.
The USS Wachusett (1861) commanded by Alfred Thayer Mahan, was stationed at Callao, Peru, to protect American interests during the war's final stages. Mahan formulated his concept of sea power while reading history in a British gentlemen's club in Lima, Peru. This concept became the foundation for his celebrated The Influence of Sea Power upon History.
Flow of information
Since 1876 a submarine cable had connected Valparaíso and Lima.:72 At the beginning of the war, Antofagasta and Iquique were also connected to the cable. Both navies tried to take control of the cable, or severed it, according to their military and naval interests.
Lima was not connected by cable to Panama, the southernmost post of the North American cable network. Valparaíso had been connected to Buenos Aires by a cable over the Andes since July 26, 1872, and Buenos Aires was connected via Uruguay and Brazil to Portugal and Great Britain, and from there to the USA, by submarine cable. It must be emphasized that La Paz, Bolivia's capital city, was not connected by telegraph to the rest of the world. News from Tacna, Arica, or Antofagasta had to be carried to La Paz on foot or by horse. The alternative route ran from the Peruvian port of Mollendo (Querejazu: Moliendo) by railroad to Puno, then by boat service to Chichilaya on the Bolivian shore of Lake Titicaca, and finally to La Paz by horse or on foot. The only telegraph in Bolivia was in Tupiza, 606 kilometres (377 mi) south of La Paz as the crow flies. Tupiza is located on the border with Argentina and was connected to Buenos Aires by telegraph.
The traditional transport over long distances was the steamship; steamers connected Valparaíso, Caldera, Antofagasta, Iquique, Arica and Lima to the rest of the world.
The disruption of maritime trade routes and the unavailability of submarine telegraph cables from and in the war zone presented special problems for press coverage of the war. At the same time, the west coast mattered to investors, farmers, manufacturers, and government officials because of their financial commitments there. Hence The Times of London as well as The New York Times covered the events of the war as far as possible; lacking their own correspondents, they culled news from government representatives in Europe and the USA, merchant houses, Lloyd's of London, articles in the Panama Star and Herald, and Reuters. The result was a mix of brief telegraphic dispatches a few days old from cities with cable stations, and lengthier reports three or four weeks old carried by steamships to London or New York. For example, the Battle of Iquique occurred on 21 May, but the first mention of it appeared in the May 30 editions of both The Times and The New York Times, with an incorrect account. Not until June 17 did The Times provide a reasonably accurate version of the battle.:72–74
At the onset of the war, 30,000 Chileans were expelled from Peru (within 8 days) and Bolivia (within 10 days) and their property confiscated; most of them had to shelter in camps, boats and pontoons in Peruvian ports until they were transported by ship to Antofagasta. It is calculated that 7,000 of the refugees from Peru enlisted in the Chilean battalions, and their resentment would later have an impact on the war. Peruvian and Bolivian residents in Chile were not expelled.
Both sides complained, citing eyewitness accounts, that the other side had killed wounded soldiers after battles.
Besides the Peruvian–Chilean slaughter of the irregular war after the occupation of Lima, an ethnic and social conflict simmered in Peru between its indigenous population and the (Chinese) coolies who had been enslaved by Peruvians, on one side, and its white criollo and mestizo upper class on the other. On 2 July 1884 the guerrillero Tomás Laymes and three of his men were executed in Huancayo by Cáceres's forces because of the atrocities and crimes committed by the guerrillas against the Peruvian inhabitants of the towns and hamlets. In Ayacucho, indigenous peoples rose up against "the whites", and in Chincha the Afro-Peruvians banded together against their owners in the haciendas of "Larán", "San José" and "Hoja Redonda"; only the Peruvian army could forcibly suppress the revolt. Chinese coolies formed the battalion "Vulcano" within the Chilean Army. There were also inter-ethnic tensions between blacks and coolies: in Cañete, 2,000 coolies from the haciendas "Montalbán" and "Juan de Arona" were massacred by blacks.
Historian B. Farcau states: "Contrary to the concept of the 'merchants of death,' the arms manufacturers of Europe and the United States conniving to keep alive the conflict, from which they had earned some welcome sales of their merchandise, the most influential foreign businessmen and their respective consuls and ambassadors were the traders in nitrate and the holders of the growing stacks of debts of all the belligerents. They were all aware that the only way they could hope to receive payment on their loans and earn the profits from the nitrate business was to see the war ended and trade resumed on a normal footing without legal disputes over ownership of the resources of the region hanging over their heads."
Nonetheless, the belligerents were able to purchase torpedo boats, arms and munitions abroad and to circumvent ambiguous neutrality laws, and firms like Baring Brothers in London were not averse to dealing with both Chile and Peru.:129 Arms, though not British warships, were sold freely to whichever side could pay for them. For example, in the 1879–1880 period Peru acquired weapons from the United States, Europe, Costa Rica and Panama. Weapons offloaded on the Caribbean coast of Panama were sent overland to the Pacific coast by the isthmus railway. In the Pacific a number of ships, including the Talismán, Chalaco, Limeña, Estrella, Enriqueta and Guadiana, transported the cargo to Peru. This trade was done with the consent of the president of the Sovereign State of Panamá (then part of Colombia). The Chilean consul in Panama persistently protested against this trade, citing a Chile–Colombia agreement of 1844 that prohibited Colombia from providing war supplies to Chile's enemies.
After the Chilean occupation of Arica, Tarapacá and Antofagasta, the governments of Peru and Bolivia turned to the United States of America as their last hope to block Chilean annexation of the occupied territories.:41 US diplomats were worried that European powers might be tempted to intervene in the Pacific. The Bolivian minister in Washington offered US Secretary of State William Maxwell Evarts the prospect of lucrative guano and nitrate concessions to US investors in return for official protection of Bolivia's territorial integrity.:131:42 Isaac P. Christiancy, US minister in Peru, organized the USS Lackawanna conference, which failed as none of the belligerents was ready to negotiate its claims. Earlier, Christiancy had written to the USA that Peru should be annexed for a period of ten years and then admitted into the Union, to provide the United States with access to the rich markets of South America.:42 In 1881 James Garfield took the oath of office in the USA, and his anglophobic Secretary of State James G. Blaine was a proponent of an assertive role for the USA in the War of the Pacific,:43 ostensibly representing the interests of promoters of US ownership of nitrate and guano concessions.:132 Blaine argued that the South American republics "are young sisters of this government" and that he would not tolerate European intervention in South America. The groups "Credit Industriel" and "Peruvian Company", representing European and American creditors, had guaranteed to the Peruvian provisional government of García Calderón to pay the Peruvian external debt and the reparations to Chile, but in return the Peruvian government would have to grant mining concessions in Tarapacá to these corporations. With the acquiescence of García Calderón, both companies began to lobby in the United States for the territories to remain under Peruvian sovereignty. The US firm "Levi P. Morton, Bliss and Company" would get the monopoly on sales of Peruvian nitrate in the USA. Besides the economic plans, Stephen A. Hurlbut, Christiancy's successor, had negotiated with García Calderón the cession to the USA of a naval base in Chimbote and the railroads to the coal mines upcountry. When it became known that Blaine's representative in Peru, Hurlbut, would personally profit from the settlement, it was clear that he was complicating the peace process. These American attempts reinforced García Calderón's refusal to discuss the matter of territorial cession. At the end of 1881 Blaine dispatched William H. Trescott on a mission to Chile intended to establish that problems would be resolved through arbitration and that acts of war would not justify territorial seizures.:132 After the assassination of Garfield (shot on 2 July 1881) and the accession of Chester A. Arthur to the presidency, Blaine was replaced as secretary of state by Frederick T. Frelinghuysen. Frelinghuysen thought that the USA was in no position to back Blaine's policy and recalled the Trescott mission. Kenneth D. Lehmann says of the US policy:
- Washington had interjected itself into the middle of the controversy without developing a realistic position: the moralizing of the United States had an air of hypocrisy in light of its own history, and veiled threats carried no weight.:45
Regarding a British intervention in the war, British Marxist historian Victor Kiernan states: "It should be emphasized that the Foreign Office never at any time contemplated any kind of active intervention. ... It was especially scrupulous in seeing to it that no warships were smuggled out for sale to either side, for it was in mortal dread of another Alabama Award." During the war the British government embargoed four warships sold to Chile and Peru.[Notes 2]
Looting, damages and war reparations
The looting and war reparations exacted by Chilean occupation forces in Peru have caused controversy among historians: largely overlooked in Chile, they remain a source of anti-Chilean sentiment in Peru. The Chilean historian Milton Godoy Orellana distinguishes four events: 1) looting after the battles of Chorrillos and Miraflores; 2) looting by Peruvians in Lima before the Chilean troops entered the city; 3) the Chilean confiscation of locomotives, rails, printing machines, weapons, etc., a form of expropriation that was permitted under the 19th-century law of war. The Chilean government tried to control it through the "Oficina Recaudadora de las Contribuciones de Guerra", whose tasks were to inventory the goods, carry out the confiscation, and record and confirm the transport to Chile, the destination, and the sender. The declared strategic purpose was to hasten the peace. There is no general list of the confiscated goods, but many of the shipments were registered in private and official letters, newspaper articles, manifests, etc. 4) The seizure of Peru's cultural assets by Chileans and Peruvians. The international law on the protection of cultural objects evolved over the 19th and 20th centuries, though the idea of protecting cultural assets had first emerged in Europe during the 18th century. The Lieber Code of 1863, while it unconditionally protected works of art during an armed conflict (Art. 35), expressly consented to the use of cultural property as war reparations (Art. 36). In fact, Sergio Villalobos states that the USA had accepted the confiscation of art works in 1871, but the 1874 Project of an International Declaration concerning the Laws and Customs of War asserted that cultural assets were to be considered protected. In March 1881 the Chilean administration of Lima began to seize the holdings of the Biblioteca Nacional del Perú; 45,000 books were taken, but some of the books were sold in Lima by Peruvians, hence it is contested how much of the booty was removed by the Chilean forces. In any case, in late March 1881 part of the books arrived in Chile, and the press began to report on and discuss the legitimacy of the confiscation of oil paintings, books, statues, etc., or "international robbery" as a journalist of "La Epoca" described it. On 4 January 1883, in a session of the Chilean Congress, the deputy Augusto Matte Pérez questioned Minister of the Interior José Manuel Balmaceda about the "opprobrious and humiliating" shipments of Peruvian cultural assets. Deputy Montt asked for the return of the assets and was supported by deputies McClure and Puelma. The minister vowed to stop further exactions and to repatriate the objects mentioned in the discussion. Apparently he did so, because the shipments stopped and the statues in question are no longer in that place. But not until November 2007 did Chile return 3,778 stolen books to the Biblioteca Nacional del Perú. S. Villalobos asserts that "there was no justification for the theft".
Another issue was the damage from acts of war to properties owned by citizens of neutral countries. In 1884 the Tribunales Arbitrales were constituted, each with a Chilean judge, a judge named by the claimant's country, and a Brazilian judge, to deal with the claims of British (118 claims), Italian (440 claims) and French (89 claims) citizens, and in 1886 for German citizens. The "Italian" tribunal also handled Belgian claims, and the "German" tribunal Austrian and Swiss claims. Spaniards accepted the decision of the Chilean state without tribunal assistance, and the USA did not agree to the arrangement at that time. In accordance with international law, claims of foreign citizens with animus manendi (intent to remain) in the war zone, claims where the damaged property had been on a battleground (among others: Arica, Chorrillos, Miraflores, Pisagua and Tacna were in a similar situation), and damages caused by individual or scattered soldiers were dismissed. Only 3.6% (CLP 1,080,562) of the value demanded was awarded by the tribunals. According to Villalobos, the verdicts prove that the accusations against the Chilean forces were exaggerated, by Peruvians because of wounded pride and by foreign citizens because of monetary interests.
The war had a profound and long-lasting impact on the societies of the countries involved. The peace negotiations continued until 1929, but for all practical purposes the war was over in 1884.
Media related to War of the Pacific at Wikimedia Commons
- The Bolivian law of 22 November said (Querejazu 1979, pp. 181–182): Se autoriza al Ejecutivo para transar sobre indemnización y otros reclamos pendientes en la actualidad, y para acordar con las partes interesadas la forma más conveniente en que habrán de llenarse sus obligaciones respectivas; defiriéndose estos asuntos, sólo en los casos de no avenimiento, a la decisión de la Corte Suprema, con cargo a dar cuenta a la próxima legislatura. ("The Executive is authorized to settle the indemnification and other currently pending claims, and to agree with the interested parties on the most convenient manner in which their respective obligations shall be fulfilled; these matters being deferred, only in cases of disagreement, to the decision of the Supreme Court, subject to reporting to the next legislature.")
- The cruisers Arturo Prat and Esmeralda, built in England for Chile, and the es:BAP Lima (Sócrates) and the USS Topeka (PG-35) (Diógenes), built in Germany but armed in Britain for Peru. The Greek names were a cover story to conceal their real destination.
- Sater 2007, p. 51 Table 2
- Sater 2007, p. 45 Table 1
- Sater 2007, p. 74
- Sater 2007, p. 274
- Sater 2007, p. 58 Table 3
- Sater 2007, p. 263
- Sater, p. 349, Table 23.
- Sater, p. 348, Table 22. The statistics on battlefield deaths are inaccurate because they do not provide follow-up information on those who subsequently died of their wounds.
- Messenger, Charles (31 October 2013). Reader's Guide to Military History. Routledge. pp. 549–. ISBN 978-1-135-95970-8.
- Ronald Bruce St. John, The Bolivia-Chile-Peru Dispute in the Atacama Desert, pp. 12–13
- Joao Resende-Santos (23 July 2007). Neorealism, States, and the Modern Mass Army. Cambridge University Press. ISBN 978-1-139-46633-2.
- The Guano War of 1865-1866, retrieved on 22 December 2014
- Teofilo Laime Ajacopa, Diccionario Bilingüe Iskay simipi yuyayk'ancha, La Paz, 2007 (Quechua-Spanish dictionary)
- Arie Marcelo Kacowicz (1998). Zones of Peace in the Third World: South America and West Africa in Comparative Perspective. SUNY Press. pp. 105–. ISBN 978-0-7914-3957-9.
- Bethell, Leslie. 1993. Chile Since Independence. Cambridge University Press. pp. 13–14.
- Vergara, Jorge Iván; Gundermann, Hans (2012). "Constitution and internal dynamics of the regional identitary in Tarapacá and Los Lagos, Chile". Chungara (in Spanish) (University of Tarapacá) 44 (1): 121. doi:10.4067/s0717-73562012000100009.
- Farcau 2000, p. 37
- Carlos Escudé and Andrés Cisneros, Historia de las Relaciones Exteriores Argentinas, "Sarmiento y Tejedor proponen al Congreso la adhesión al tratado secreto peruano-boliviano del 6 de febrero de 1873", retrieved 13 November 2013, archived at https://web.archive.org/web/20131113210142/http://www.argentina-rree.com/6/6-066.htm
- R.Querejazu 1995 Cap. XXVII La Alianza secreta de Bolivia y el Peru
- Carlos Escudé and Andrés Cisneros, Historia de las Relaciones Exteriores Argentinas, "La misión Balmaceda: asegurar la neutralidad argentina en la guerra del Pacífico", http://www.argentina-rree.com/6/6-081.htm
- Emilio Ruiz-Tagle Orrego (1992). Bolivia y Chile: el conflicto del Pacífico. Andres Bello. pp. 149–. ISBN 978-956-13-0954-8.
- Bulnes 1920, p. 57. Bulnes says:
- The synthesis of the secret treaty was this: opportunity: the disarmed condition of Chile; the pretext to produce the conflict: Bolivia; the profit of the business: Patagonia and the salitre. (Original: "La síntesis del tratado secreto es: oportunidad: la condición desarmada de Chile; el pretexto para producir el conflicto: Bolivia; la ganancia del negocio: Patagonia y el salitre.")
- Basadre 1964, Ch. 1, p. 12, "La transacción de 1873 y el tratado de 1874 entre Chile y Bolivia". Basadre says (translated):
- Peruvian diplomacy toward the Bolivian chancellery in 1873 urged Bolivia to take advantage of the period before the arrival of the Chilean ironclads to end the wearisome disputes over the 1866 treaty, and to denounce it in order to replace it with a more convenient arrangement, or else, by breaking off the negotiations, to open the way to mediation by Peru and Argentina. Or:
- The alliance, by creating the Lima–La Paz axis with the intention of turning it into a Lima–La Paz–Buenos Aires axis, sought to forge an instrument to guarantee peace and stability on the American borders, defending the continental equilibrium that "La Patria" of Lima had advocated. (Ch. 1, p. 8) Earlier, Basadre summarized the argument of "La Patria":
- Peru, according to this writer, had the right to request reconsideration of the 1866 treaty. The annexation of Atacama to Chile (as well as that of Patagonia) carried very vast implications and led to very grave complications for the Spanish-American family. Peru, defending Bolivia, itself, and the law, should preside over a coalition of all the interested states to hold Chile to the limit it wished to overstep, to the general detriment of uti possidetis in the Pacific. Continental peace had to rest on continental equilibrium. ... These words were published on the eve of the signing of the secret Peruvian–Bolivian treaty. (Ch. 1, p. 6)
- Yrigoyen 1921, p. 129. Yrigoyen says (translated):
- So deeply convinced was the Peruvian government of the need to complete Argentina's accession to the Peru-Bolivian treaty of alliance before Chile received its ironclads, so as to be able to demand peacefully that Chile submit its territorial pretensions to arbitration, that as soon as the observations formulated by Chancellor Tejedor were received in Lima, they were answered in the following terms ... (p. 129)
- R.Querejazu 1995 Cap. XXVII, La maniobra leguleyesca
- Basadre & 1964 Chapter 1, "Significado del tratado de la alianza"
- Jefferson Dennis 1927, p. 80, Sotomayor letter urging Bolivia to break her alliance with Peru
- Basadre 1964, p. 2282 "The beginning of the Peruvian naval inferiority and lack of initiative for preventive war"
- Nicolás Cruz; Ascanio Cavallo (1981). Las guerras de la guerra: Perú, Bolivia y Chile frente al conflicto de 1879. Instituto Chileno de Estudios Humanísticos.
- Sater 2007, p. 37
- Historia contemporánea de Chile III. La economía: mercados empresarios y trabajadores. 2002. Gabriel Salazar and Julio Pinto. p. 25-29.
- Salazar & Pinto 2002, pp. 25–29.
- Pinto Rodríguez, Jorge (1992), "Crisis económica y expansión territorial : la ocupación de la Araucanía en la segunda mitad del siglo XIX", Estudios Sociales 72
- Fredrick B. Pike, "Chile and the United States, 1880-1962", University of Notre Dame Press 1963, page 33:
- The economic development that accompanied and followed the War of the Pacific was so remarkable that marxist writers feel justified in alleging that Chile's great military adventure was instigated by self-seeking capitalists in order to bring their country out of the business stagnation that had begun in 1878. However absurd this allegation, the truth is that the war did provide Chile with the economic means for coming of age.
- Mauricio Rubilar Luengo, "La Política exterior de Chile durante la guerra y postguerra del Pacífico", pdf, Universidad de Vallalodid
- Sater 2007, p. 38
- Farcau 2000, p. 45
- Leslie Bethell, The Cambridge History of Latin America, 2009, page 541
- Sater 2007, p. 38,39
- Fredrick B. Pike (1 January 1977). The United States and the Andean Republics: Peru, Bolivia, and Ecuador. Harvard University Press. ISBN 978-0-674-92300-3.
- Peruvian historian Alejandro Reyes Flores, Relaciones Internacionales en el Pacífico Sur, in La Guerra del Pacífico, Volumen 1, Wilson Reategui, Alejandro Reyes & others, Universidad Nacional Mayor de San Marcos, Lima 1979, page 110:
- Jorge Basadre respecto a este problema económico crucial dice Al realizar el estado peruano con la ley del 28 de marzo de 1875, la expropiación y monopolio de las salitreras de Tarapacá, era necesario evitar la competencia de las salitreras del Toco [in Bolivia].... Aquí es donde se internacionalizaba el conflicto, pues estas salitreras, económicamente estaban en poder de chilenos y británicos
- Greenhill, Robert and Miller, Rory. (1973). The Peruvian Government and the Nitrate Trade, 1873–1879. Journal of Latin American Studies 5: pp 107–131.
- A History of the British Presence in Chile: From Bloody Mary to Charles Darwin and the Decline of British Influence, William Edmundson, 2009, ISBN 0230101216, 9780230101210, 288 pages, page 160
- Harold Blackmore, The Politics of Nitrate in Chile, Pressure Groups and Policies, 1870-1896, Some Unanswered Questions
- Querejazu 1979, p. 175
- Querejazu 1979, p. 211
- Sater 2007, p. 39
- Sater 2007, p. 40
- Merlet Sanhueza, Enrique (1997). Juan José Latorre: héroe de Angamos. Editorial Andrés Bello. p. 31. Retrieved 23 June 2015.
- Luis Ortega, "Los Empresarios, la política y los orígenes de la Guerra del Pacífico", Flacso, Santiago de Chile, 1984, page 17
- Sater 2007, p. 31
- Collier, Simon (1996). A History of Chile, 1808-1994. Cambridge University Press. ISBN 978-0-521-56827-2.
- Greenhill 1973:117–120
- Manuel Ravest Mora, "La Casa Gibbs y el Monopolio Salitrero Peruano, 1876-1878", Historia N°41, vol. I, enero-junio 2008: 63-77, ISSN 0073-2435
- Greenhill 1973:123–124
- O'Brien 1980:13
- Greenhill 1973:124
- O'Brien 1980:14
- Querejazu 1979, p. 177
- Farcau 2000, p. 40
- Sater 2007, p. 28
- R.Querejazu 1995
- Sater 2007, p. 29
- Sater 2007, p. 32
- Manuel Ravest Mora. La Compañía Salitrera y la Ocupación de Antofagasta 1878-1879. Andres Bello. GGKEY:BNK53LBKGDQ.
- Edmundson, William (2011). The Nitrate King: A Biography of "Colonel" John Thomas North. Palgrave Macmillan. p. 59. ISBN 978-0230112803.
- Barros Arana 1881a, p. 59
- Bulnes 1920, pp. 42
- Bulnes 1920, p. 117:
- I have good news for you. I have diddled the Gringos decreeing the reversion of the nitrate grounds, and they can't take them away from us if they stir up the whole world. I don't think that Chile will intervene in the matter, – but if she declares war on us we count on the aid of Peru, from whom we shall demand compliance with the Secret Treaty. With this object I am going to send Reyes Ortiz to Lima. Now you see I am giving you good news that you will have to thank me for eternally, and as I tell you, the Gringos are completely diddled and the Chileans can only bite and shout."
- Basadre & 1964 Chapter 1, Los tres obstáculos para el éxito de la mediación:
- la condición impuesta por el gobierno peruano en sus instrucciones para que Chile fuese a la desocupación previa del litoral ocupado sin prometer la suspensión del decreto boliviano sobre expropiación de los bienes de la Compañía de Antofagasta o la modificación del impuesto de los 10 centavos
- Farcau 2000, p. 42
- Basadre & 1964 Chapter 1, La declaratoria de guerra de Bolivia a Chile como recurso para hacer fracasar a Lavalle:
- La versión chilena fue que Bolivia quiso impedir que Chile se armara. En realidad, Daza buscó la forma de malograr la misión Lavalle.
- Bulnes 1920
- Jefferson Dennis 1927, pp. 79–80
- Farcau 2000, p. 65
- As the earlier discussion of the geography of the Atacama region illustrates, control of the sea lanes along the coast would be absolutely vital to the success of a land campaign there
- Farcau 2000, p. 57
- Sater 2007, p. 102 and ff
- "... to anyone willing to sail under Bolivia's colors ..."
- Sater 2007, p. 119
- Sater 2007, p. 137
- Robert N. Burr (1967). By Reason Or Force: Chile and the Balancing of Power in South America, 1830-1905. University of California Press. pp. 145–146. ISBN 978-0-520-02629-2.
- Lawrence A. Clayton (1985). Grace: W.R. Grace & Co., the Formative Years, 1850-1930. Lawrence Clayton. ISBN 978-0-915463-25-1.
- Farcau 2000, p. 214
- Sater 2007, pp. 151–152
- Sater 2007:150
- Sater 2007, pp. 113–114
- "There are numerous differences of opinion as to the ships' speed and armament. Some of these differences can be attributed to the fact that the various sources may have been evaluating the ships at different times."
- Basadre & 1964 Chapter 2, "El combate de Angamos"
- Sater 2007, p. 296
- Sater 2007, pp. 171–172
- Sater 2007, pp. 204–205
- Farcau 2000, p. 119
- Sater 2007, p. 181
- Farcau 2000, p. 120"
- Farcau 2000, p. 120
- Farcau 2000, p. 121
- "Piérola ... mounted an assault on the Palace but ... leaving more than three hundred corpses ..."
- Sater 2007, p. 208
- Farcau 2000, p. 130
- Sater 2007, p. 217
- Sater 2007, p. 222
- "Baquedano could not simply bypass the Peruvian troops, whose presence threatened Moquegua as well as the communications network extending southeast across the Locumba Valley to Tacna and northwest to Arequipa and northeast to Bolivia"
- Farcau 2000, p. 138 specifies 3,100 men in Arequipa, 2,000 men in Arica and 9,000 men in Tacna, but this figures contradict the total numbers given (below) by William F. Sater in page 229
- Farcau 2000, p. 138
- "...it became evident that there was a total lack of the necessary transport for even the minimum amount of supplies and water"
- Sater 2007, p. 227
- "The allied force, he [Campero] concluded lacked sufficient transport to move into the field its artillery as well as its rations and, more significantly, its supplies of water"
- Sater 2007, p. 229
- Sater 2007, p. 256
- Farcau 2000, p. 1147
- Farcau 2000, p. 152
- "Lynch's force consisted of the 1° Line Regiment and the Regiments "Talca" and "Colchagua", a battery of mountain howitzers, and a small cavalry squadron for a total of twenty-two hundred man"
- Barros Arana 1881b, p. 98
- "[The Chilean government thought that it was possible to demonstrate to the enemy the futility of any defense of Peruvian territory not only against the whole [Chilean] army but also against small [Chilean] divisions. That was the purpose of the expedition, which the claims, insults, and affliction in the official documents of Peru and in the press had made famous"
- (Original: "[El gobierno chileno] Creía entonces que todavía era posible demostrar prácticamente al enemigo la imposibilidad en que se hallaba para defender el territorio peruano no ya contra un ejército numeroso sino contra pequeñas divisiones. Este fué el objeto de una espedicion que las quejas, los insultos i las lamentaciones de los documentos oficiales del Perú, i de los escritos de su prensa, han hecho famosa.")
- Basadre 1964, p. 2475
- Sater 2007, p. 260
- Barros Arana 1881a:
- "Bluntschili (Derecho internacional codificado) dice espresamente lo que sigue: Árt. 544. Cuando el enemigo ha tomado posesión efectiva de una parte del territorio, el gobierno del otro estado deja de ejercer alli el poder. Los habitantes del territorio ocupado están eximidos de todos los deberes i obligaciones respecto del gobierno anterior, i están obligados a obedecer a los jefes del ejército de ocupación."
- Johann Kaspar Bluntschli (1870). Le droit international codifié. Guillaumin et Cie. pp. 290–.
- Villalobos 2004, p. 176
- Farcau 2000, p. 153
- Farcau 2000, pp. 149–150
- Sater 2007, p. 258
- Farcau 2000, p. 157
- Sater 2007, pp. 258–259
- Sater 2007, p. 276
- Sater 1986, p. 274
- Sater 1986, p. 180
- Carlos Escudé y Andrés Cisneros, Historia de las Relaciones Exteriores Argentinas, El tratado del 23 de julio de 1881, retrieved on 18 December 2014, archiveurl="https://web.archive.org/web/20131202225428/http://www.argentina-rree.com/6/6-088.htm"
- Diario El Mercurio, del Domingo 28 de abril de 2002 en archive.org
- Sater 2007, p. 302
- Sater 2007, p. 301
- Sater 2007, p. 303
- Sater 2007, pp. 301–302
- Sater 2007, p. 313
- Sater 2007, p. 300
- Sater 2007, p. 309
- Sater 2007, p. 312
- Sater 2007, p. 315
- Sater 2007, p. 329
- Congreso del Perú, Grito de Montán, retrieved on 24 March 2005]
- Sater 2007, pp. 329–330
- Farcau 2000, pp. 181–182
- Sater 2007, pp. 317–338
- Farcau 2000, pp. 183–187
- Folia Dermatológica Peruana, Vol. 10 • Nº. 1 Marzo de 1999. Foto en Imágenes de la Enfermedad de Carrión por Uriel García Cáceres y Fernando Uriel García V.
- Sater 2007, p. 340ff
- Sater 2007, p. 340
- Florencia E. Mallon (14 July 2014). The Defense of Community in Peru's Central Highlands: Peasant Struggle and Capitalist Transition, 1860-1940. Princeton University Press. p. 101. ISBN 978-1-4008-5604-6.
- Barros, Mario (1970). Historia diplomática de Chile, 1541-1938. Andres Bello. GGKEY:7T4TB12B4GQ.
- English 1985, p. 372
- Scheina 2003, p. 377
- Farcau 2000, p. 48
- English 1985, p. 75
- Stanislav Andreski Wars, revolutions, dictatorships: studies of historical and contemporary problems from a comparative viewpoint page 105:
- (...) Chile's army and fleet were better equipped, organized and commanded(...)
- Helen Miller Bailey, Abraham Phineas Nasatir Latin America: the development of its civilization page 492:
- Chile was a much more modernized nation with better-trained and better-equipped
- Scheina 2003, pp. 376–377
- Sater 2007, p. 20
- Farcau 2000, p. 159
- Dorothea, Martin. "Chinese Migration into Latin America – Diaspora or Sojourns in Peru?" (pdf). Appalachian State University. p. 10. Retrieved September 25, 2011.
- The Ambiguous Relationship: Theodore Roosevelt and Alfred Thayer Mahan by Richard W. Turk; Greenwood Press, 1987. 183 pgs. page 10
- Larrie D. Ferreiro 'Mahan and the "English Club" of Lima, Peru: The Genesis of The Influence of Sea Power upon History', The Journal of Military History – Volume 72, Number 3, July 2008, pp. 901–906
- John A. Britton (30 December 2013). Cables, Crises, and the Press: The Geopolitics of the New Information System in the Americas, 1866-1903. UNM Press. ISBN 978-0-8263-5398-6.:71-
- Augusto Pinochet Ugarte. La Guerra Del Pacífico. Andres Bello. pp. 20–. GGKEY:TLF0S8WSFAA.
- Mauricio Pelayo González (6 January 2015). "Combate Naval de Antofagasta". www.laguerradelpacifico.cl. www.laguerradelpacifico.cl. Retrieved 6 January 2015.
- Querejazu 1979, p. 230
- R.Querejazu 1995 Cap XXXI Que se rinda su abuela carajo!
- Website "Soberaniachile.cl", http://www.soberaniachile.cl/mitos_sobre_los_trofeos_de_guerra_peruanos_traidos_a_chile.html , retrieved on 16 December 2014
- Sater 2007, p. 90
- Francisco Antonio Encina, "Historia de Chile", page 8, cited in Valentina Verbal Stockmeyer, "El Ejército de Chile en vísperas de la Guerra del Pacífico", Historia 396 ISSN 0719-0719 N°1 2014 [135-165], page 160
- Villalobos 2004, p. 160
- Villalobos 2004, p. 162
- Villalobos 2004, p. 167
- Hugo Pereira, Una revisión histográfica de la ejecución del guerrillero Tomás Laymes, in Trabajos sobre la Guerra del Pacífico, Pontificia Universidad Católica del Perú. page 269 and ff.
- Oliver García Meza, Los chinos en la Guerra del Pacífico, Revista Marina, retrieved on 12 November 2013
- Farcau 2000, p. 160,165
- Ramon Aranda de los Rios, Carmela Sotomayor Roggero, Una sublevación negra en Chincha: 1879, pages 238 & ff in "La Guerra del Pacífico", Volumen 1, Wilson Reategui, Wilfredo Kapsoli & others, Universidad Nacional Mayor de San Marcos, Lima, 1979
- Wilfredo Kapsoli, El Peru en una coyuntura de crisis, 1879-1883, pages 35-36 in "La Guerra del Pacífico", Volumen 1, Wilson Reategui, Wilfredo Kapsoli & others, Universidad Nacional Mayor de San Marcos, Lima, 1979
- Sater 2007, pp. 324
- Farcau 2000, p. 149
- Kiernan 1955, p. 18
- Rubilar Luengo, Mauricio E. (2004), "Guerra y diplomacia: las relaciones chileno-colombianas durante la guerra y postguerra del Pacífico (1879–1886)", Revista Universum (in Spanish) 19 (1): 148–175, doi:10.4067/s0718-23762004000100009
- Kenneth Duane Lehman (1999). Bolivia and the United States: A Limited Partnership. University of Georgia Press. ISBN 978-0-8203-2116-5.
- Sater 2007, pp. 304–306
- "The anglophobic secretary of state ..."
- Basadre 1964
- Sater 2007, pp. 304–306
- Basadre 1964
- Rafael Mellafe Maturana, "La ayuda inglesa a Chile durante la Guerra del Pacífico. ¿Mito o realidad?" en Cuaderno de historia militar, nr. 12, diciembre de 2012, Departamento de historia militar del Ejército de Chile, página 69
- GODOY ORELLANA, Milton. "HA TRAÍDO HASTA NOSOTROS DESDE TERRITORIO ENEMIGO, EL ALUD DE LA GUERRA: CONFISCACIÓN DE MAQUINARIAS Y APROPIACIÓN DE BIENES CULTURALES DURANTE LA OCUPACIÓN DE LIMA, 1881-1883". Historia (Santiago) [online]. 2011, vol.44, n.2 [citado 2014-10-26], pp. 287-327 . Disponible en: <http://www.scielo.cl/scielo.php?script=sci_arttext&pid=S0717-71942011000200002&lng=es&nrm=iso>. ISSN 0717-7194.
- Cunning, Andrera (2003). "Safeguarding of Cultural Property in Times of War & (and) Peace, The" 11 (1 Article 6). Tulsa Journal of Comparative and International Law: 214.
- Andrea Gattini, Restitution by Russia of Works of Art Removed from German Territory at the End of the Second World War, http://www.ejil.org/pdfs/7/1/1356.pdf , page 70
- Villalobos 2004, p. 230
- Collyns, Dan (November 7, 2007). "Chile returns looted Peru books". BBC. Retrieved 2007-11-10.
- Villalobos 2004, p. 233
- Villalobos 2004, pp. 259–262
- Farcau 2000, p. 191
- filmaffinity web site, retrieved on 15 April 2015
- Páginas Heróicas
- Barros Arana, Diego (1881a). Historia de la guerra del Pacífico (1879–1880) (History of the War of the Pacific (1879–1880)) (in Spanish) 1. Santiago, Chile: Librería Central de Servat i Ca.
- Barros Arana, Diego (1881b). Historia de la guerra del Pacífico (1879–1880) (History of the War of the Pacific (1879–1880)) (in Spanish) 2. Santiago, Chile: Librería Central de Servat i Ca.
- Basadre, Jorge (1964). Historia de la Republica del Peru, La guerra con Chile (in Spanish). Lima, Peru: Peruamerica S.A.,. Archived from the original on 11 December 2007.
- Bulnes, Gonzalo (1920). Chile and Peru: the causes of the war of 1879. Santiago, Chile: Imprenta Universitaria.
- Chilean government (1879–1881). Boletín de la Guerra del Pacífico (Bulletin of the War of the Pacific) (in Spanish). Santiago, Chile: Editorial Andres Bello.
- Villalobos, Sergio (2004). Chile y Perú, la historia que nos une y nos separa, 1535-1883 (in Spanish) (2nd ed.). Chile: Editorial Universitaria. ISBN 9789561116016.
- De Varigny, Charles (1922). La Guerra del Pacífico (in Spanish) 1. Santiago de Chile: Imprenta Cervantes.
- English, Adrian J. (1985). Armed forces of Latin America: their histories, development, present strength, and military potential. Jane's Information Group, Incorporated. ISBN 978-0-7106-0321-0.
- Farcau, Bruce W. (2000). The Ten Cents War, Chile, Peru and Bolivia in the War of the Pacific, 1879–1884. Westport, Connecticut, London: Praeger Publishers. ISBN 978-0-275-96925-7. Retrieved January 17, 2010.
- Jefferson Dennis, William (1927). "Documentary history of the Tacna-Arica dispute from University of Iowa studies in the social sciences" 8. Iowa: University Iowa City.
- Paz Soldan, Mariano Felipe (1884). Narracion Historica de la Guerra de Chile contra Peru y Bolivia (Historical narration of the Chile's War against Peru and Bolivia) (in Spanish). Buenos Aires, Argentina: Imprenta y Libreria de Mayo, calle Peru 115.
- Sater, William F. (2007). Andean Tragedy: Fighting the War of the Pacific, 1879–1884. Lincoln and London: University of Nebraska Press. ISBN 978-0-8032-4334-7.
- Sater, William F. (1986). Chile and the War of the Pacific. Lincoln and London: University of Nebraska Press. ISBN 978-0-8032-4155-8.
- Sater, William F. (1973). "Chile During the First Months of the War of the Pacific" 5, 1 (pages 133-138 ed.). Cambridge at the University Press: Journal of Latin American Studies.
- Scheina, Robert L. (2003). Latin America's Wars: The age of the caudillo, 1791–1899. Potomac Books, Inc. ISBN 978-1-57488-450-0.
- O'Brien, Thomas F. (1980). "The Antofagasta Company: A Case Study of Peripheral Capitalism". Duke University Press: Hispanic American Historical Review.
- Querejazu Calvo, Roberto (1979). Guano, Salitre y Sangre. La Paz-Cochabamba, Bolivia: Editorial los amigos del Libro.
- Querejazu Calvo, Roberto (1995). Aclaraciones históricas sobre la Guerra del Pacífico. La Paz, Bolivia: Editorial los amigos del Libro.
- Kiernan, Victor (1955). "Foreign Interests in the War of the Pacific" XXXV (pages 14-36 ed.). Duke University Press: Hispanic American Historical Review.
- Yrigoyen, Pedro (1921). "La alianza perú-boliviano-argentina y la declaratoria de guerra de Chile". Lima: San Marti & Cía. Impresores.
- Chilean caricatures during the war in Tesis of Patricio Ibarra Cifuentes, Universidad de Chile, 2009.
- Article Caliche: the conflict mineral that fuelled the first world war in The Guardian by Daniel A. Gross, 2 June 2014. | https://en.wikipedia.org/wiki/War_of_the_Pacific |
4 |
In music, an altered chord, an example of alteration (see below), is a chord with one or more diatonic notes replaced by, or altered to, a neighboring pitch in the chromatic scale. For example, a progression of four unaltered chords may be varied by altering its IV chord: the A♭ introduced in the altered chord serves as a leading tone to G, which is the root of the next chord.
In jazz and jazz harmony, the term altered chord, notated as an alt chord (e.g. G7alt), refers to a dominant chord "in which neither the fifth nor the ninth appears unaltered" – namely, one in which the 5th and the 9th are raised or lowered by a single semitone, or omitted. Altered chords are thus constructed using the following notes, some of which may be omitted:
- ♭5 and/or ♯5
- ♭9 and/or ♯9
Altered chords may include both a flatted and sharped form of the altered fifth or ninth, e.g. G7♭5♯5♭9; however, it is more common to use only one such alteration per tone, e.g. G7♭5♭9, G7♭5♯9, G7♯5♭9, or G7♯5♯9.
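To make these combinations concrete, here is a minimal Python sketch (an illustration, not from the source) that enumerates the four one-alteration-per-tone voicings above as pitch-class sets; the numeric pitch-class convention (C = 0) and the `alt_chords` helper are assumptions for demonstration only.

```python
from itertools import product

# Semitone offsets from the root for the fixed members of a dominant 7th chord
BASE = {"1": 0, "3": 4, "b7": 10}
# The two altered degrees, each with a flat and a sharp variant
FIFTHS = {"b5": 6, "#5": 8}
NINTHS = {"b9": 1, "#9": 3}

def alt_chords(root):
    """Yield (label, pitch classes) for each voicing that uses one alteration
    per tone: 7b5b9, 7b5#9, 7#5b9, 7#5#9 (pitch classes C = 0, ..., B = 11)."""
    for (f_name, f_off), (n_name, n_off) in product(FIFTHS.items(), NINTHS.items()):
        offsets = list(BASE.values()) + [f_off, n_off]
        pcs = sorted({(root + o) % 12 for o in offsets})
        yield "7" + f_name + n_name, pcs

for label, pcs in alt_chords(7):  # root G = 7
    print("G" + label, pcs)
```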
The choice of inversion, or the omission of certain tones within the chord (e.g. omitting the root, common in guitar harmony), can lead to many different possible colorings, substitutions, and enharmonic equivalents. Altered chords are ambiguous harmonically, and may play a variety of roles, depending on such factors as voicing, modulation, and voice leading.
The altered chord's harmony is built on the altered scale, which includes all the alterations shown in the chord elements above:
- ♭9 (=♭2)
- ♯9 (=♯2 or ♭3)
- ♯11 (=♯4 or ♭5)
- ♭13 (=♯5)
Because they do not have natural fifths, 7alt chords support tritone substitution (♭5 substitution). Thus the 7alt chord on a given root can be substituted with the 13♯11 chord on the root a tritone away (e.g., G7alt is the same as D♭13♯11).
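The tritone-substitution claim can be sanity-checked by comparing pitch-class sets. The sketch below is illustrative only (same C = 0 convention as above): it builds a fully altered G7 and a D♭13♯11 and confirms they contain exactly the same notes.

```python
def pcs(root, intervals):
    """Pitch classes of a chord: root plus semitone intervals above it (C = 0)."""
    return sorted({(root + i) % 12 for i in intervals})

# G7alt with every alteration present: 1, 3, b5, #5, b7, b9, #9
g7alt = pcs(7, [0, 4, 6, 8, 10, 1, 3])
# Db13#11: 1, 3, 5, b7, 9, #11, 13 (intervals in semitones above Db)
db13s11 = pcs(1, [0, 4, 7, 10, 14, 18, 21])

print(g7alt)             # [1, 3, 5, 7, 8, 10, 11]
print(g7alt == db13s11)  # True: enharmonically the same collection of notes
```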
Altered chords are commonly substituted for regular dominant V chords on the same root in ii-V-I progressions, most commonly in minor harmony leading to an i7 (tonic minor 7th) chord.
More generally in jazz, the terms altered chord and altered tone also refer to the family of chords that involve ♭9 and ♭5 voicing, as well as to certain other chords with related ambiguous harmony. Thus the "7♭9 chord" (e.g. G7♭9) is used in the context of a dominant resolution to a major tonic, which is typically voiced with a ♮13 rather than the ♭13 of the alt chord. When voiced with a ♮13, jazz musicians typically play the half-step/whole-step diminished scale over the ♭9 chord (e.g. G, A♭, B♭, B, C♯, D, E, F over G7♭9).
Note that in chord substitution and comping, a 7♭9 is often used to replace a diminished chord, for which it may be the more "correct" substitution due to its incorporation of an appropriate root tone. Thus, in a progression where a diminished chord is written in place of a G7 chord, i.e. where the dominant chord is replaced by an A♭-dim (A♭-C♭-E♭♭ = G♯-B-D), D-dim (D-F-A♭), B-dim (B-D-F), or F-dim (F-A♭-C♭ = F-G♯-B), a G7♭9 is often played instead. G7♭9 (G-B-D-F-A♭) contains the same notes as any of these diminished chords with an added G root.
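The same pitch-class arithmetic verifies the diminished-chord equivalence; the following sketch (again an illustrative assumption, not the source's notation) shows that G7♭9 contains every note of Bdim7 plus the added G root.

```python
NAMES = {0: "C", 1: "Db", 2: "D", 3: "Eb", 4: "E", 5: "F",
         6: "Gb", 7: "G", 8: "Ab", 9: "A", 10: "Bb", 11: "B"}

g7b9 = {(7 + i) % 12 for i in (0, 4, 7, 10, 13)}  # G7b9: 1 3 5 b7 b9
bdim7 = {(11 + i) % 12 for i in (0, 3, 6, 9)}     # Bdim7: stacked minor thirds

print(sorted(NAMES[p] for p in g7b9))  # ['Ab', 'B', 'D', 'F', 'G']
print(bdim7 <= g7b9)                   # True: Bdim7 is contained in G7b9
print(NAMES[min(g7b9 - bdim7)])        # 'G' -- the added root tone
```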
In music, alteration, an example of chromaticism, is the use of a neighboring pitch in the chromatic scale in place of its diatonic neighbor such as in an altered chord. This should not be confused with borrowing (as in borrowed chord), in which pitches or chords from the parallel key are used in place of those of the original key. Altered notes may be used as leading tones to emphasize their diatonic neighbors. Contrast with chord extension: "Whereas chord extension generally involves adding notes that are logically implied, chord alteration involves changing some of the typical notes. This is usually done on dominant chords, and the four alterations that are commonly used are the ♭5, ♯5, ♭9 and ♯9. Using one (or more) of these notes in a resolving dominant chord greatly increases the bite in the chord and therefore the power of the resolution." "The more tension, the more powerful the resolution...we can pile that tension on to make the resolution really spectacular."
The ♭9 chord is recommended for resolution to minor chords, for example VI7 to ii (G7♭9 to Cm7) in the I-vi-ii-V turnaround. The ♯9 chord, also known as the Purple Haze chord, is most often notated with the enharmonic equivalent ♭3, and is thus used with the blues. The flatted fifth of a ♭5 chord is enharmonically equivalent to a ♯4 or ♯11, but the sharp-eleventh chord includes the ♮5, while in the flat-fifth chord the ♮5 is replaced. The ♯5 is enharmonically equivalent to a ♭13; the ♯5 chord does not include the ♮5 and is more common than the ♭13 chord. Both the flat and sharp fifth resolve nicely to the natural ninth.
In jazz, chromatic alteration is either the addition of "notes which are not diatonic to the given scale" or "the expansion of any given [chord] progression by adding extra nondiatonic chords". For example, "A C major scale with an added D♯ note, for instance, is a chromatically altered scale" while, "one bar of Cmaj7 moving to Fmaj7 in the next bar can be chromatically altered by adding the ii and V of Fmaj7 on the second two beats of bar" one. Techniques include the ii-V-I turnaround, as well as movement by half-step or minor third.
For example, an altered dominant or V chord in C may be G–B–D♭–F–A♯ (with ♭5 and ♯9).
Altered seventh chord
An altered seventh chord is a seventh chord with one, or all, of its factors raised or lowered by a semitone (altered), for example the augmented seventh chord (7+ or 7+5) featuring a raised fifth (C7+5: C–E–G♯–B♭). The factor most likely to be altered is the fifth, then the ninth, then the thirteenth.
In classical music, the raised fifth is more common than the lowered fifth, which in a dominant chord adds Phrygian flavor through the introduction of ♭2 (for example, in C the dominant is G and its fifth is D, the second scale degree; lowering it to D♭ yields the flatted second).
- Erickson, Robert (1957). The Structure of Music: A Listener's Guide, p.86. New York: Noonday Press. ISBN 0-8371-8519-X (1977 edition).
- Coker, Jerry (1997). Elements of the Jazz Language for the Developing Improvisor, p.81. ISBN 1-57623-875-X.
- Sher (ed.). The New Real Book Volume Two. Sher Music Co., 1991, ISBN 0-9614701-7-8
- Erickson (1957), p.86. Subtitled "a study of music in terms of melody and counterpoint".
- Baerman, Noah (1998). Complete Jazz Keyboard Method: Intermediate Jazz Keyboard, p.70. ISBN 0-88284-911-5.
- Baerman (1998), p.71.
- Arkin, Eddie (2004). Creative Chord Substitution for Jazz Guitar, p.42. ISBN 0-7579-2301-1.
- Arkin (2004), p.43.
- Aldwell, Edward; Schachter, Carl; and Cadwallader, Allen (2010). Harmony & Voice Leading, p.601. ISBN 9780495189756.
- Davis, Kenneth (2006). The Piano Professor Easy Piano Study, p.78. ISBN 9781430303343.
- Christiansen, Mike (2004). Mel Bay's Complete Jazz Guitar Method, Volume 1, p.45. ISBN 9780786632633. | https://en.wikipedia.org/wiki/Alteration |
4.0625 |
The use of teaching strategies can play a vital role in whether pupils achieve the set learning objectives within a lesson. Teaching strategies are specific methods used to teach specific outcomes to specific groups (Whitehead and Zwozdiak-Myers 2004). There are many different types of teaching strategies, derived using different assessment criteria or concepts.
The first set of teaching strategies under discussion are those derived by Mosston and Ashworth (2002), who considered the amount of decision making made by the pupil and the teacher when classifying their teaching styles. (Mosston and Ashworth (2002) refer to teaching strategies as teaching styles; any reference to teaching styles here should therefore be read as a reference to teaching strategies.)
The styles were primarily divided into two clusters: the reproduction cluster (A-E) and the production cluster (F-K). The complete spectrum can be found in appendix A. As the styles progress through the alphabet, the decision making made by the teacher in the pre-impact, impact and post-impact sets becomes decreasingly prominent, and more decision making is made by the learner. For example, in Command style A the teacher makes all of the decisions, whereas in Self-teaching style K the learner makes all of the decisions.
Mosston and Ashworth (2002) offer a comprehensive range of styles to choose from when considering how to deliver a lesson or lesson episode. One is left asking 'how do I choose which style to use?', and the answer is quite simple: consider the learning objective(s) of the episode or lesson and decide which style would most effectively achieve the learning outcome. This should not be the sole deciding criterion, however; the pupils' ability and suitability for the style, safety, and learning styles should also be considered. For example, given the learning objective 'pupils will devise a gymnastic routine that demonstrates flight, rotation and pairs balances demonstrating good body management', an appropriate teaching strategy would be the divergent discovery style, as pupils are given a set of criteria which they have to use, add to, and adapt in order to devise their own routine; this style facilitates an adequate amount of decision making by the pupil without compromising safety or learning. An inappropriate strategy would be the command style, as all of the decisions are made by the teacher, which leaves no room for creativity by the learner, and creativity is essential if pupils are to devise their own gymnastics routine.
Differentiation can also be achieved by selecting the appropriate teaching styles for specific classes, groups or individuals. For example, some pupils may be adequately challenged when performing the high jump using the reciprocal teaching style, while others may be able to use the self-check style, as they are able to kinaesthetically assess their performance and better it. The lesson plans in appendices C and D are examples of how these strategies may be implemented within lessons. Combinations of appropriate strategies have been used in order to support the achievement of the learning objectives.
Question and answer is another strategy, whereby pupils may be asked to provide the answer to a question, or the question that relates to a given answer. This strategy may be implemented during instructional or feedback stages, or applied continually throughout a lesson or episode. Questions may be differentiated in order to challenge more able pupils using open questioning, while closed questioning may be used for those less able, so as not to discourage them from answering. Questions may also be used in reverse, where the answer is given to the pupils and they have to state the question to which it relates; if this is to be used, a question must be given first and then the answer, in order for the pupils to understand the topic in question. For example, the first question posed by the teacher may be 'which muscle allows flexion at the knee?', to which the pupil(s) answer 'the hamstring'; reversed, the teacher says 'bicep' and the pupils respond with the question 'which muscle allows flexion of the elbow?'. This is another way in which question and answer may be varied and differentiated to suit the needs of individual pupils, ensuring that they achieve their full learning potential.
Group work is another strategy, in which pupils are given an opportunity to complete tasks and learning objectives while collaborating with other pupils. Group work enables pupils to work on social interaction skills such as communication, teamwork and leadership. It may be differentiated by adding to or reducing the number of people in a group, assigning different tasks to different groups, and extending the task if a group completes it ahead of time, to name but a few approaches. This is a great strategy for developing group interaction skills; however, if overused it may alienate pupils of an introverted nature, as the continual pressure to interact with others may have a negative effect on their motivation.
The teaching games for understanding (TGFU) approach (Bunker and Thorpe 1982) is another strategy that may be implemented by the teacher in order to achieve set learning outcomes. Bunker and Thorpe (1982) devised the TGFU approach to combat the traditional method of teaching specific motor responses in a technical sense, which was deemed no longer appropriate because it focused on the content and not the pupil, and could therefore be demoralising, being focused on the outcome rather than the process. TGFU encourages pupils to make their own decisions and to devise concepts and tactics themselves, fostering curiosity and interest. This enables active engagement in learning and hence acts as a motivational tool, as suggested by Capel (2000).
The TGFU approach facilitates connections between individual skills and their application in the game scenario. It enables the teaching of the 'why' of the game before the 'how' of the skill (Almond, Bunker, and Thorpe 1986). This approach also enables a positive learning environment to be established, maximising not only participation and achievement of success but pupil enjoyment as well (Capel, Whitehead and Zwozdiak-Myers 2005).
The TGFU approach could also be used as a vehicle by which an emphasis on health may be integrated into lessons. Health education is defined as any activity designed to achieve health-related learning. Effective health education can produce changes in knowledge and understanding, influence values and attitudes, facilitate the acquisition of skills and effect lifestyle changes, as stated by Cale and Harris (2005). This could encourage pupils not only to achieve their full potential during education but also afterwards, ensuring that physical activity is maintained after full-time education.
Some of the major motives for people to participate in sport are to improve skills, to have fun, to achieve success and to develop fitness, as suggested by Weinberg and Gould (2003). By using the TGFU approach these motives can be utilised by maximising time spent playing games, thus appealing to the 'having fun' and 'developing fitness' elements.
Differentiation can also be incorporated into the TGFU approach by overloading games (e.g. 3 vs. 2), extending tasks to incorporate more complex skills, and varying the level of questioning used (open or closed).
Adopting a 'Target Structure' (Ames 1992) within lessons is another strategy that may be implemented by physical educators in order to create a positive motivational climate with a mastery focus. An explanation of the target structure can be found in appendix B. A recent study concluded that a mastery climate, along with achievable goals, corresponded positively with the intrinsic motivation and satisfaction experienced by pupils (Papaioannou, Tsigilis, Kosmidou, and Milosis 2007); this clearly supports the notion of active engagement, whereby learning is maximised when pupils are appropriately challenged, interested and engaged with their learning (DfES 2004). Differentiation is relatively easy whilst using the target structure, as the task, time and grouping may be modified, as well as the level of evaluation taking place.
It is important to vary the teaching strategies used within lessons, units and schemes of work in order to appeal to kinaesthetic, visual and auditory learning styles. Without this variation it would be very easy to overlook one of these learning styles, which would result in some learners not achieving their full learning potential. If the teaching styles used are not varied, pupils may also lose motivation and enthusiasm as the delivery becomes monotonous and tedious. Most importantly, the teaching strategy must be appropriate to the learning objective of the episode, otherwise pupils will not achieve the learning objective as efficiently and thoroughly as possible. Teaching strategies must be selected in line with the learning objective, as the learning objective cannot be achieved without the appropriate strategy (Whitehead and Zwozdiak-Myers 2004).
The teaching strategy used is just one component that can facilitate learning; teaching approach and style must also be considered, as highlighted by Whitehead and Zwozdiak-Myers (2004).
The type of strategy used to deliver a desired learning objective is also a means of differentiation: for less able pupils it may be necessary for more decision making, or more guidance, to come from the teacher, while for those more able the required involvement of the teacher may be reduced, so as to encourage pupils to discover the learning objective for themselves. For example, when teaching dance, some pupils may be able to compose a piece of choreography with the teacher using the guided discovery style, whereas others may rely solely on the practice style, as they are unable to derive totally new material. Combining the two styles within a lesson enables learning to be maximised by both the more advanced and the less able pupils, ensuring that the learning objective is achieved by both through differentiation.
It is important to consider not only the teaching strategy being employed within the lesson when addressing differentiation, but also differentiating by task and by outcome. Ways in which we can differentiate by task vary considerably: using different equipment, entirely different practices, the amount of space provided, the level of overload (e.g. 3 vs. 2), and the height of a net, to mention but a few. When differentiating by task it is important to strike the right balance, so that pupils are adequately challenged without being over- or under-challenged. As physical educators we may also wish to differentiate by outcome. Differentiating by outcome must be planned by the teacher in the pre-impact set in order for it to be most effective, and may concern the depth of knowledge or level of skill achieved with regard to the learning objective. For example, a pupil who is more competent at performing a lay-up in basketball is more likely to be able to apply this skill successfully in a game than a pupil who cannot perform the lay-up competently even in a skills-practice environment. Therefore, when applying the lay-up in a game environment, the more competent pupil is likely to achieve more in terms of applying the skill.
Why is differentiation so important? Through differentiation one can achieve a pupil-centred approach to teaching, ensuring that inclusion and active engagement are achieved and that learners reach their full educational potential, all of which contributes to the fulfilment of the Every Child Matters agenda (DfES 2007).
The use of ICT can also facilitate a pupil-centred approach by appealing to different learning styles. In 2002 the use of ICT was added to the secondary national strategy as a way of building on the perceived strengths of the primary literacy and numeracy strategies, as well as an attempt to address perceived weaknesses in Key Stage 3 teaching and learning (DfES, 2002). In theory, increasing the exposure to ICT within both theory and practical lessons should in turn raise the standard of teaching and learning across the curriculum. That being said, this will only transfer into practice if the ICT resource is used effectively and indeed facilitates the improvement of teaching and learning within the lesson.
Utilising ICT within lessons also encourages the notion of active engagement, whereby pupils are deemed to learn most effectively when they are interested, involved and appropriately challenged by the task. This is typically when pupils are most engaged with their learning (DfES 2004).
The use of ICT is also another method by which a teacher can vary the type of teaching and learning within their lessons, ensuring a positive learning environment is achieved. This environment will maintain motivation and enthusiasm levels, without which behavioural and classroom management problems could arise (Capel, Whitehead and Zwozdiak-Myers, 2004). Gradual exposure to ICT is also a way for the youth of today to become acclimatised to using ICT, in a world where its presence and necessity are ever increasing.
In summary, by selecting appropriate teaching strategies to achieve intended learning objectives, applying differentiation within lessons and incorporating the use of ICT effectively, one can maximise the learning taking place within lessons and in turn contribute to the fulfilment of the Every Child Matters agenda (DfES, 2007), whereby pupils are encouraged to be healthy, be safe, enjoy and achieve, make a positive contribution and achieve economic wellbeing. The teaching strategies chosen must complement the learning objective, whilst ensuring pupils are adequately challenged through the use of differentiation and ICT, creating a pupil-centred approach to teaching and learning and supporting the notion of active engagement.
- Ames, C. (1992) Classrooms: Goals, structures, and student motivation. Journal of Educational Psychology. 84, pp. 261-271.
- Bunker, D. and Thorpe, R. (1982) A Model for the Teaching of Games in Secondary Schools: Bulletin of Physical Education. 18 (1), pp. 5-8.
- Cale, L. and Harris, J. (2005) Getting the Buggers Fit. London. Continuum International Publishing Group Ltd.
- Capel, S. (2000) Approaches to Teaching Games. In Capel, S. and Piotrowski, S. (Eds) Issues in Physical Education. London. RoutledgeFalmer.
- Capel, S. Whitehead, M. and Zwozdiak-Myers, P. (2004) Developing and Maintaining an Effective Learning Environment. In Capel, S. (Eds) Learning to Teach Physical Education in Secondary School. Second Edition. U.K, RoutledgeFalmer.
- DfES (2007) Every Child Matters – Change for Children. Children and Young People. http://www.everychildmatters.gov.uk/children/ [accessed 1st December 2008].
- DfES (2002) Key Stage 3 National Strategy: Training Materials for the Foundation Subjects. London, DfES.
- Lockwood, A and Newton, A. (2004) Observation of Pupils in PE. In Capel, S. (Eds) Learning to Teach Physical Education in Secondary School. Second Edition. U.K. RoutledgeFalmer.
- Mosston, M. and Ashworth, S. (2002) Teaching Physical Education. Fifth Edition. San Francisco, Benjamin Cummings.
- Papaioannou, N. Tsigilis, P. Kosmidou, E. and Milosis, D. (2007) The Journal of Teaching Physical Education: Measuring Motivational Climate in Physical Education. 26, pp. 236.
- Whitehead, M. and Zwozdiak-Myers, P. (2004) Designing Teaching Approaches to Achieve Intended Learning Outcomes. In Capel, S. (Eds) Learning to Teach Physical Education in Secondary School. Second Edition. U.K. RoutledgeFalmer. | https://www.pescholar.com/resource/phase/whole-school/1879/teaching-strategies-differentiation-and-ict/ |
4.03125 | [Figure: image of a sarcomere]
A sarcomere (Greek sarx "flesh", meros "part") is the basic unit of striated muscle tissue. Skeletal muscles are composed of tubular muscle cells (myocytes called muscle fibers) which are formed in a process known as myogenesis. Muscle fibers are composed of tubular myofibrils. Myofibrils are composed of repeating sections of sarcomeres, which appear under the microscope as dark and light bands. Sarcomeres are composed of long, fibrous proteins as filaments that slide past each other when a muscle contracts or relaxes.
Two of the important proteins are myosin, which forms the thick filament, and actin, which forms the thin filament. Myosin has a long, fibrous tail and a globular head, which binds to actin. The myosin head also binds to ATP, which is the source of energy for muscle movement. Myosin can only bind to actin when the binding sites on actin are exposed by calcium ions.
Actin molecules are bound to the Z line, which forms the borders of the sarcomere. Other bands appear when the sarcomere is relaxed.
- A sarcomere is defined as the segment between two neighbouring Z-lines (or Z-discs, or Z bodies). In electron micrographs of cross-striated muscle, the Z-line (from the German "Zwischenscheibe", the disc in between the I bands) appears as a series of dark lines.
- Surrounding the Z-line is the region of the I-band (for isotropic). The I-band is the zone of thin filaments that is not superimposed by thick filaments.
- Following the I-band is the A-band (for anisotropic), named for its properties under a polarizing microscope. An A-band contains the entire length of a single thick filament.
- Within the A-band is a paler region called the H-zone (from the German "heller", brighter), so called for its lighter appearance under a polarizing microscope. The H-zone is the zone of the thick filaments that is not superimposed by the thin filaments.
- Within the H-zone is a thin M-line (from the German "Mittelscheibe", the disc in the middle of the sarcomere) formed of cross-connecting elements of the cytoskeleton.
The relationship between the proteins and the regions of the sarcomere are as follows:
- Actin filaments, the thin filaments, are the major component of the I-band and extend into the A-band.
- Myosin filaments, the thick filaments, are bipolar and extend throughout the A-band. They are cross-linked at the centre by the M-band.
- The giant protein titin (connectin) extends from the Z-line of the sarcomere, where it binds to the thick filament (myosin) system, to the M-band, where it is thought to interact with the thick filaments. Titin (and its splice isoforms) is the biggest single highly elastic protein found in nature. It provides binding sites for numerous proteins and is thought to play an important role as a sarcomeric ruler and as a blueprint for the assembly of the sarcomere.
- Another giant protein, nebulin, is hypothesised to extend along the thin filaments and the entire I-band. Similar to titin, it is thought to act as a molecular ruler for thin filament assembly.
- Several proteins important for the stability of the sarcomeric structure are found in the Z-line as well as in the M-band of the sarcomere.
- Actin filaments and titin molecules are cross-linked in the Z-disc via the Z-line protein alpha-actinin.
- The M-band proteins myomesin as well as C-protein crosslink the thick filament system (myosins) and the M-band part of titin (the elastic filaments).
- The interaction between actin and myosin filaments in the A-band of the sarcomere is responsible for the muscle contraction (sliding filament model).
Upon muscle contraction, the A-bands do not change their length (1.85 micrometers in mammalian skeletal muscle), whereas the I-bands and the H-zone shorten. This causes the Z-lines to come closer together.
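A small numerical sketch of this geometry may help; it assumes the fixed A-band of 1.85 µm stated above and an illustrative thin-filament length of 1.0 µm per half-sarcomere (the exact value varies by species), so treat the numbers as a demonstration rather than measured data.

```python
A_BAND = 1.85  # thick-filament (A-band) length in µm, constant during contraction
THIN = 1.0     # assumed thin-filament length per half-sarcomere, µm (illustrative)

def band_widths(L):
    """I-band and H-zone widths (µm) at sarcomere length L, valid while the
    filaments overlap without colliding: I = L - A_BAND, H = L - 2*THIN."""
    return L - A_BAND, L - 2 * THIN

for L in (2.5, 2.2, 2.0):  # relaxed -> contracted
    i_band, h_zone = band_widths(L)
    print(f"L = {L:.2f} um: A = {A_BAND:.2f}, I = {i_band:.2f}, H = {h_zone:.2f}")
```

As the sarcomere shortens from 2.5 µm to 2.0 µm, the A-band stays at 1.85 µm while both the I-band and the H-zone shrink linearly, matching the description above.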
The protein tropomyosin covers the myosin binding sites of the actin molecules in the muscle cell. To allow the muscle cell to contract, tropomyosin must be moved to uncover the binding sites on the actin. Calcium ions bind with troponin-C molecules (which are dispersed throughout the tropomyosin protein) and alter the structure of the tropomyosin, forcing it to reveal the cross-bridge binding site on the actin.
The concentration of calcium within muscle cells is controlled by the sarcoplasmic reticulum, a unique form of endoplasmic reticulum in the sarcoplasm. Muscle contraction ends when calcium ions are pumped back into the sarcoplasmic reticulum, allowing the contractile apparatus and, thus, muscle cell to relax.
During stimulation of the muscle cell, the motor neuron releases the neurotransmitter acetylcholine, which travels across the neuromuscular junction (the synapse between the terminal bouton of the neuron and the muscle cell). Acetylcholine binds to a post-synaptic nicotinic acetylcholine receptor. A change in the receptor conformation allows an influx of sodium ions and initiation of a post-synaptic action potential. The action potential then travels along T (transverse) tubules until it reaches the sarcoplasmic reticulum. Here, the depolarized membrane activates voltage-gated L-type calcium channels, present in the plasma membrane. The L-type calcium channels are in close association with ryanodine receptors present on the sarcoplasmic reticulum. The inward flow of calcium from the L-type calcium channels activate ryanodine receptors to release calcium ions from the sarcoplasmic reticulum. This mechanism is called calcium-induced calcium release (CICR). It is not understood whether the physical opening of the L-type calcium channels or the presence of calcium causes the ryanodine receptors to open. The outflow of calcium allows the myosin heads access to the actin cross-bridge binding sites, permitting muscle contraction.
At rest, the myosin head is bound to an ATP molecule in a low-energy configuration and is unable to access the cross-bridge binding sites on the actin. However, the myosin head can hydrolyze ATP into adenosine diphosphate (ADP) and an inorganic phosphate ion. A portion of the energy released in this reaction changes the shape of the myosin head and promotes it to a high-energy configuration. Through the process of binding to the actin, the myosin head releases ADP and an inorganic phosphate ion, changing its configuration back to one of low energy. The myosin remains attached to actin in a state known as rigor, until a new ATP binds the myosin head. This binding of ATP to myosin releases the actin by cross-bridge dissociation. The ATP-associated myosin is ready for another cycle, beginning with hydrolysis of the ATP.
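The cycle just described can be summarized as a four-state loop. The sketch below is a deliberately simplified, deterministic state machine: the state names paraphrase the paragraph above, while real cross-bridge kinetics are stochastic and concentration-dependent.

```python
# One myosin head stepped through the cross-bridge cycle described above.
CYCLE = [
    ("ATP bound, detached, low energy", "hydrolyze ATP -> ADP + Pi; head cocks"),
    ("ADP + Pi bound, cocked",          "bind exposed site on actin"),
    ("bound to actin",                  "release ADP + Pi; power stroke"),
    ("rigor (nucleotide-free)",         "new ATP binds; head detaches"),
]

def run_cycles(n=2):
    for c in range(1, n + 1):
        for state, transition in CYCLE:
            print(f"cycle {c}: {state:32s} -> {transition}")

run_cycles()
```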
The A-band is visible as dark transverse lines across myofibers; the I-band is visible as lightly staining transverse lines, and the Z-line is visible as dark lines separating sarcomeres at the light-microscope level.
Most muscle cells store enough ATP for only a small number of muscle contractions. While muscle cells also store glycogen, most of the energy required for contraction is derived from phosphagens. One such phosphagen is creatine phosphate, which is used to provide ADP with a phosphate group for ATP synthesis in vertebrates.
Comparative sarcomere structure
The structure of the sarcomere affects its function in several ways. The overlap of actin and myosin gives rise to the length-tension curve, which shows how sarcomere force output decreases if the muscle is stretched so that fewer cross-bridges can form, or compressed until actin filaments interfere with each other. The length of the actin and myosin filaments (taken together as sarcomere length) affects force and velocity: longer sarcomeres have more cross-bridges and thus more force, but have a reduced range of shortening. Vertebrates display a very limited range of sarcomere lengths, with roughly the same optimal length (the length at peak tension) in all muscles of an individual as well as between species. Arthropods, however, show tremendous variation (over seven-fold) in sarcomere length, both between species and between muscles in a single individual. The reasons for the lack of substantial sarcomere variability in vertebrates are not fully known.
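The length-tension curve mentioned here is often approximated as a piecewise-linear function of sarcomere length. The breakpoints below are illustrative assumptions, loosely in the range reported for vertebrate (frog) muscle, not values taken from this article.

```python
def active_tension(L, l_min=1.3, plateau_lo=2.0, plateau_hi=2.2, l_max=3.6):
    """Normalized active tension at sarcomere length L (µm): zero outside
    [l_min, l_max], a plateau of maximal cross-bridge overlap between
    plateau_lo and plateau_hi, and linear limbs on either side."""
    if L <= l_min or L >= l_max:
        return 0.0
    if L < plateau_lo:
        return (L - l_min) / (plateau_lo - l_min)  # ascending limb
    if L <= plateau_hi:
        return 1.0                                 # plateau
    return (l_max - L) / (l_max - plateau_hi)      # descending limb

for L in (1.5, 2.0, 2.1, 2.3, 3.0, 3.6):
    print(f"L = {L:.1f} um -> tension = {active_tension(L):.2f}")
```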
- Reece, Jane; Campbell, Neil (2002). Biology. San Francisco: Benjamin Cummings. ISBN 0-8053-6624-5.
- Assuming that the length of biceps is 20 cm and the length of sarcomere is 2 micrometer, there are 100,000 sarcomeres along the length of biceps.
- Skeletal Muscle Structure, Function, and Plasticity: The Physiological Basis of Rehabilitation. Lieber, 2002. ISBN 978-0781730617
- MBInfo: Sarcomere
- MBInfo: Contractile Fiber
- Muscular Tissues Videos
- Histology image: 21601ooa – Histology Learning System at Boston University - "Ultrastructure of the Cell: sarcoplasm of skeletal muscle"
- MedicalMnemonics.com: 50 379 107
- Images created by antibody to striations
- Muscle Contraction for dummies
- Model representation of the sarcomere | https://en.wikipedia.org/wiki/Sarcomere |
4.34375 | Colongitude: 0° at sunrise
Shackleton is an impact crater that lies at the south pole of the Moon. The peaks along the crater's rim are exposed to almost continual sunlight, while the interior is perpetually in shadow (a crater of eternal darkness). The low-temperature interior of this crater functions as a cold trap that may capture and freeze volatiles shed during comet impacts on the Moon. Measurements by the Lunar Prospector spacecraft showed higher than normal amounts of hydrogen within the crater, which may indicate the presence of water ice. The crater is named after Antarctic explorer Ernest Shackleton.
The rotational axis of the Moon lies within Shackleton, only a few kilometers from its center. The crater is 21 km in diameter and 4.2 km deep. From the Earth, it is viewed edge-on in a region of rough, cratered terrain. It is located within the South Pole-Aitken basin on a massif. The rim is slightly raised about the surrounding surface and it has an outer rampart that has been only lightly impacted. No significant craters intersect the rim, and it is sloped about 1.5° toward the direction 50–90° from the Earth. The age of the crater is about 3.6 billion years and it has been in the proximity of the south lunar pole for at least the last two billion years.
Because the orbit of the Moon is tilted only 5° from the ecliptic, the interior of this crater lies in perpetual darkness. Estimates of the area in permanent shadow were obtained from Earth-based radar studies. Peaks along the rim of the crater are almost continually illuminated by sunlight, spending about 80–90% of each lunar orbit exposed to the Sun. Continuously illuminated mountains have been termed peaks of eternal light and have been predicted to exist since the 1900s.
The shadowed portion of the crater was imaged with the Terrain Camera of the Japanese SELENE spacecraft using the illumination of sunlight reflected off the rim. The interior of the crater consists of a symmetrical 30° slope that leads down to a 6.6 km diameter floor. The handful of craters along the interior span no more than a few hundred meters. The bottom is covered by an uneven mound-like feature that is 300 to 400 m thick. The central peak is about 200 m in height.
The continuous shadows in the south polar craters cause the floors of these formations to maintain a temperature that never exceeds about 100 K. For Shackleton, the average temperature was determined to be about 90 K, reaching 88 K at the crater floor. Under these conditions, the estimated rate of loss from any ice in the interior would be 10⁻²⁶ to 10⁻²⁷ m/s. Any water vapor that arrives here following a cometary impact on the Moon would lie permanently frozen on or below the surface. However, the surface albedo of the crater floor matches the lunar far-side, suggesting that there is no exposed surface ice.
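A back-of-the-envelope calculation (mine, not from the source) shows why such a loss rate amounts to "permanently frozen": at 10⁻²⁶ m/s, sublimating even a millimeter of ice takes roughly a million times the age of the Solar System.

```python
SECONDS_PER_YEAR = 3.156e7

for rate in (1e-26, 1e-27):    # estimated loss rates from the text, m/s
    for depth in (1e-3, 1.0):  # 1 mm and 1 m of ice
        years = depth / rate / SECONDS_PER_YEAR
        print(f"rate {rate:.0e} m/s: {depth:g} m lost in ~{years:.1e} years")
```

For the faster rate and the 1 mm depth this gives about 3 × 10¹⁵ years, versus roughly 4.6 × 10⁹ years for the age of the Solar System.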
This crater was named after Ernest Henry Shackleton, an Anglo-Irish explorer of Antarctica from 1901 until his death in 1922. The name was officially adopted by the International Astronomical Union in 1994. Nearby craters of note include Shoemaker, Haworth, de Gerlache, Sverdrup, and Faustini. Somewhat farther away, on the eastern hemisphere of the lunar near side, are the larger craters Amundsen and Scott, named after two other early explorers of the Antarctic continent.
From the perspective of the Earth, this crater lies along the southern limb of the Moon, making observation difficult. Detailed mapping of the polar regions and farside of the Moon did not occur until the advent of orbiting spacecraft. Shackleton lies entirely within the rim of the immense South Pole-Aitken basin, which is one of the largest known impact formations in the Solar System. This basin is over 12 kilometers deep, and an exploration of its properties could provide useful information about the lunar interior.
A neutron spectrometer on board the Lunar Prospector spacecraft detected enhanced concentrations of hydrogen close to the northern and southern lunar poles, including the crater Shackleton. At the end of this mission in July 1999, the spacecraft was crashed into the nearby crater Shoemaker in the hope of detecting from Earth-based telescopes an impact-generated plume containing water vapor. The impact event did not produce any detectable water vapor, and this may be an indication that the hydrogen is not in the form of hydrated minerals, or that the impact site did not contain any ice. Alternatively, it is possible that the crash did not excavate deeply enough into the regolith to liberate significant quantities of water vapor.
From Earth-based radar and spacecraft images of the crater edge, Shackleton appears to be relatively intact, much like a young crater that has not been significantly eroded by subsequent impacts. This may mean that the inner sides are relatively steep, which may make traversing them relatively difficult for a robotic vehicle. In addition, it is possible that the interior floor might not have collected a significant quantity of volatiles since its formation. However, other craters in the vicinity are considerably older, and may contain significant deposits of hydrogen, possibly in the form of water ice. (See Shoemaker (lunar crater), for example.)
Radar studies preceding and following the Lunar Prospector mission demonstrate that the inner walls of Shackleton are similar in reflective characteristics to those of some sunlit craters. In particular, the surroundings appear to contain a significant number of blocks in its ejecta blanket, suggesting that its radar properties are a result of surface roughness, and not ice deposits, as was previously suggested from a radar experiment involving the Clementine mission. This interpretation, however, is not universally agreed upon within the scientific community. Radar images of the crater at a wavelength of 13 cm show no evidence for water ice deposits.
Optical imaging inside the crater was performed for the first time by the Japanese lunar orbiter spacecraft Kaguya in 2007. The images showed no evidence of a significant amount of water ice, down to the image resolution of 10 m per pixel.
On November 15, 2008, a 34-kg probe made a hard landing near the crater. The Moon Impact Probe (MIP) was launched from the Indian Chandrayaan-1 spacecraft and reached the surface 25 minutes later. The probe carried a radar altimeter, a video imaging system, and a mass spectrometer, which was used to search for water.
Some sites along Shackleton's rim receive almost constant illumination. At these locales sunlight is almost always available for conversion into electricity using solar panels, potentially making them good locations for future Moon landings. The temperature at these sites is also more favorable than at more equatorial latitudes, which experience daily extremes from about 100 °C when the Sun is overhead down to −150 °C during the lunar night.
While scientific experiments performed by Clementine and Lunar Prospector could indicate the presence of water in the polar craters, the current evidence is far from definitive. There are doubts among scientists as to whether or not the hydrogen is in the form of ice, as well as to the concentration of this "ore" with depth below the surface. Resolution of this issue will require future missions to the Moon. The presence of water suggests that the crater floor could potentially be "mined" for deposits of hydrogen in water form, a commodity that is expensive to deliver directly from the Earth.
This crater has also been proposed as a future site for a large infrared telescope. The low temperature of the crater floor makes it ideal for infrared observations, and solar cells placed along the rim could provide near-continuous power to the observatory. About 120 kilometers from the crater lies the 5-km tall Malapert Mountain, a peak that is perpetually visible from the Earth, and which could serve as a radio relay station when suitably equipped.
NASA has named the rim of Shackleton as a potential candidate for its lunar outpost, slated to be up and running by 2020 and continuously staffed by a crew by 2024. The location would promote self-sustainability for lunar residents, as perpetual sunlight on the south pole would provide energy for solar panels. Furthermore, the shadowed polar regions are believed to contain the frozen water necessary for human consumption and could also be harvested for fuel manufacture.
Cosmic rays constantly bombard the Earth as tiny, extremely energetic particles traveling close to the speed of light, yet their origins have eluded scientists for nearly 100 years. A new study, however, brings the mystery a step closer to resolution.
Supernova remnants—the leftovers of massive stellar explosions—possess magnetic fields much stronger than previously thought, recent observations of pulsating X-ray hot spots reveal. Scientists said the discovery serves as some of the first direct evidence for a system powerful enough to accelerate particles into cosmic rays.
"Magnetic field strength lies at the heart of cosmic-ray acceleration theory," said Yasunobu Uchiyama, an astrophysicist with the Japan Aerospace Exploration Agency (JAXA).
Uchiyama and his colleagues detail their findings in the Oct. 4 issue of the journal Nature.
Cosmic rays were first discovered in 1912, and since the 1960s scientists have suspected supernova remnants as their breeding grounds.
Such remnants travel through interstellar gas as they expand, producing high-speed shockwaves that can generate powerful magnetic fields. As protons, electrons and other charged particles from interstellar gas bounce around in the magnetic fields, they're accelerated to blinding speeds to create cosmic rays.
Cosmic ray factories in space work similarly to Earth's particle accelerators, yet they can pump particles to energies tens of thousands of times greater than those of the largest man-made machines.
Until Uchiyama and his team's discovery, however, magnetic fields strong enough to create cosmic rays had never been directly detected.
"Previous estimates of magnetic fields in supernova remnants were based on indirect arguments," Uchiyama said. "In our study, we determine the magnetic field in a direct manner."
X-ray hot spots
To make the discovery, Uchiyama and his team focused NASA's Chandra telescope on X-ray hot spots in a supernova remnant called RXJ1713.7-3946, located a few thousand light-years from Earth in the constellation Scorpius.
The hot spots brightened and faded in less than a year, a variability that is the hallmark of cosmic ray generation. Because the hot spots barely moved, the astrophysicists were also able to peg the speed of the supernova shockwave at 10 million mph (16 million kph).
The measurement allowed the team to gauge the strength of the remnant's particle-accelerating magnetic fields.
"This is an extremely important paper," said physicist Don Ellison of North Carolina State University, who was not involved in the study. "This is the first time such rapid X-ray variability has been seen in a supernova remnant."
- Role-play: Participants feel like, think like, and/or act like another individual and “act out” a particular problem or situation.
- Simulation: Participants react to a specific problem within a structured environment, for example a moot court or legislative hearing.
Although these two approaches have different qualities, they are complementary and share the following purposes:
- Furthering the development of imagination and critical thinking skills
- Promoting the expression of attitudes, opinions, and values
- Fostering participant ability to develop and consider alternative courses of action
- Developing empathy for others
- Initial activities should be simple and become increasingly complex if role-playing is to be more than a dramatic exercise.
- Do not expect polished performances initially. Give participants several opportunities to role-play and to simulate historical and contemporary situations. Vary the type of activity.
- There are four essential components to these two strategies:
- Preliminary planning and preparation by the teacher
- Preparation and training of the participants
- Active class involvement in conducting the activity
- Careful discussion and reflection about the activity
- Because participants may be uncomfortable or embarrassed, these activities should be presented in a relaxed, non-threatening atmosphere, and the participants should realize there may be more than one way to react. Practice will help participants feel more confident in these activities.
- There should be extensive debriefing and in‑depth analysis of the experience by the teacher and by the participants.
tips for role-playing
- Give participants adequate information to play roles convincingly. This preparation will make it easier for the participants to enjoy the exercise as they learn.
- Make situations and problems realistic.
- Allow participants to “jump right in.” Don’t spend time on long introductions.
- Allow participants to do a role‑reversal to look at opposing viewpoints and prevent stereotyping participants.
- Consider the following questions during the debriefing:
- Was the problem solved? Why or why not? How was it solved?
- What alternative courses of action were available?
- Is this situation similar to anything that you have experienced?
The DNA sequence of a gene can be altered in a number of ways. Gene mutations have varying effects on health, depending on where they occur and whether they alter the function of essential proteins. The types of mutations include:
- Missense mutation (illustration)
This type of mutation is a change in one DNA base pair that results in the substitution of one amino acid for another in the protein made by a gene.
- Nonsense mutation (illustration)
A nonsense mutation is also a change in one DNA base pair. Instead of substituting one amino acid for another, however, the altered DNA sequence prematurely signals the cell to stop building a protein. This type of mutation results in a shortened protein that may function improperly or not at all.
- Insertion (illustration)
An insertion changes the number of DNA bases in a gene by adding a piece of DNA. As a result, the protein made by the gene may not function properly.
- Deletion (illustration)
A deletion changes the number of DNA bases by removing a piece of DNA. Small deletions may remove one or a few base pairs within a gene, while larger deletions can remove an entire gene or several neighboring genes. The deleted DNA may alter the function of the resulting protein(s).
- Duplication (illustration)
A duplication consists of a piece of DNA that is abnormally copied one or more times. This type of mutation may alter the function of the resulting protein.
- Frameshift mutation (illustration)
This type of mutation occurs when the addition or loss of DNA bases changes a gene’s reading frame. A reading frame consists of groups of 3 bases that each code for one amino acid. A frameshift mutation shifts the grouping of these bases and changes the code for amino acids. The resulting protein is usually nonfunctional. Insertions, deletions, and duplications can all be frameshift mutations. (A brief illustrative sketch of this shift follows the list below.)
- Repeat expansion (illustration)
Nucleotide repeats are short DNA sequences that are repeated a number of times in a row. For example, a trinucleotide repeat is made up of 3-base-pair sequences, and a tetranucleotide repeat is made up of 4-base-pair sequences. A repeat expansion is a mutation that increases the number of times that the short DNA sequence is repeated. This type of mutation can cause the resulting protein to function improperly.
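To illustrate how an insertion shifts the reading frame while a simple base substitution does not, here is a minimal Python sketch; the sequence and the mutations are invented toy examples, not from a real gene:

```python
def codons(seq):
    """Group a DNA sequence into the 3-base codons of its reading frame."""
    return [seq[i:i + 3] for i in range(0, len(seq) - len(seq) % 3, 3)]

original = "ATGGCCTTAGAA"
missense = "ATGGACTTAGAA"    # one base substituted: only the second codon changes
insertion = "ATGGCACTTAGAA"  # one base inserted: every downstream codon shifts

print(codons(original))   # ['ATG', 'GCC', 'TTA', 'GAA']
print(codons(missense))   # ['ATG', 'GAC', 'TTA', 'GAA']
print(codons(insertion))  # ['ATG', 'GCA', 'CTT', 'AGA']  <- frameshift
```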
For more information about the types of gene mutations:
The National Human Genome Research Institute offers a Talking Glossary of Genetic Terms. This resource includes definitions, diagrams, and detailed audio descriptions of several of the gene mutations listed above.
A brief explanation of different mutation types is available from the University of Vermont.
1.1 Classical Physics
This chapter summarizes some very basic theorems of physics, mostly predating the theories of Special Relativity and of General Relativity.
Newton’s Laws of Motion
Isaac Newton discovered the following laws that are still valid (meaning that they are a very good approximation) for speeds much slower than the speed of light.
An object at rest or in uniform motion in a straight line will remain at rest or in the same uniform motion unless acted upon by an unbalanced force. This is also known as the law of inertia.
The acceleration a of an object is directly proportional to the total unbalanced force F exerted on the object, and is inversely proportional to the mass m of the object (in other words, as mass increases, the acceleration has to decrease):

$$F = m \, a \qquad (1.1)$$

The acceleration of an object points in the same direction as the total force. This is also known as the law of acceleration.
If one object exerts a force on a second object, the second object exerts a force equal in magnitude and opposite in direction on the first object. This is also known as the law of interaction.
Two objects with a mass of m₁ and m₂, respectively, and a distance of r between the centers of mass attract each other with a force F of:

$$F = G \frac{m_1 m_2}{r^2} \qquad (1.2)$$

G ≈ 6.674 · 10⁻¹¹ m³ kg⁻¹ s⁻² is Newton's constant of gravitation. If an object with a mass m of much less than Earth's mass is close to Earth's surface, it is convenient to approximate Eq. 1.2 as follows:

$$F = m \, g \qquad (1.3)$$

Here g is an acceleration slightly varying throughout Earth's surface, with an average of g ≈ 9.81 m/s².
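As a quick numerical cross-check of Eq. 1.2 and Eq. 1.3, the following minimal Python sketch computes the surface acceleration g from Newton's law of gravitation; the Earth mass and radius are rounded reference values assumed here, not figures from the text:

```python
G = 6.674e-11        # gravitational constant in m^3 kg^-1 s^-2
M_EARTH = 5.972e24   # Earth's mass in kg (rounded reference value)
R_EARTH = 6.371e6    # Earth's mean radius in m (rounded reference value)

def gravitational_force(m1, m2, r):
    """Attractive force between two point masses, per Eq. 1.2."""
    return G * m1 * m2 / r**2

# For a 1 kg test mass at the surface, the force in newtons equals g numerically:
g = gravitational_force(M_EARTH, 1.0, R_EARTH)
print(f"g = {g:.2f} m/s^2")   # ~9.82, close to the 9.81 average quoted above
```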
Momentum Conservation

In an isolated system, the total momentum is constant. This fundamental law is not affected by the theories of Relativity.

Energy Conservation

In an isolated system, the total energy is constant. This fundamental law is not affected by the theories of Relativity.
Second Law of Thermodynamics
The overall entropy of an isolated system is always increasing. Entropy generally means disorder. An example is the heat flow from a warmer to a colder object. The entropy in the colder object will increase more than it will decrease in the warmer object. This is why the reverse process, leading to lower entropy, would never take place spontaneously.
If the source of the wave is moving relative to the receiver or the other way round, the received signal will have a different frequency than the original signal. In the case of sound waves, two cases have to be distinguished. In the first case, the signal source is moving with a speed v relative to the medium, mostly air, in which the sound is propagating at a speed w:

$$f = \frac{f_0}{1 \pm v/w} \qquad (1.4)$$

f is the resulting frequency, f₀ the original frequency. The plus sign yields a frequency decrease in case the source is moving away, the minus sign an increase if the source is approaching. If the receiver is moving relative to the air, the equations are different. If v is the speed of the receiver, then the following applies to the frequency:

$$f = f_0 \left(1 \pm \frac{v}{w}\right) \qquad (1.5)$$
Here the plus sign denotes the case of an approaching receiver and an according frequency increase; the minus sign applies to a receiver that moves away, resulting in a lower frequency.
The substantial difference between the two cases of moving transmitter and moving receiver is due to the fact that sound needs air in order to propagate. Special Relativity will show that the situation is different for light. There is no medium, no “ether” in which light propagates and the two equations will merge to one relativistic Doppler shift.
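The difference between the two classical cases is easy to see numerically. A small sketch of Eq. 1.4 and Eq. 1.5, with an arbitrary example frequency and speed:

```python
def doppler_moving_source(f0, v, w):
    """Eq. 1.4: source moving at v relative to the medium (v > 0 = receding)."""
    return f0 / (1 + v / w)

def doppler_moving_receiver(f0, v, w):
    """Eq. 1.5: receiver moving at v relative to the medium (v > 0 = approaching)."""
    return f0 * (1 + v / w)

W_SOUND = 343.0   # speed of sound in air, m/s (rounded)
f0 = 1000.0       # emitted frequency in Hz (arbitrary example)

# The same 30 m/s relative speed gives slightly different received frequencies:
print(doppler_moving_source(f0, -30.0, W_SOUND))    # ~1095.8 Hz (source approaching)
print(doppler_moving_receiver(f0, 30.0, W_SOUND))   # ~1087.5 Hz (receiver approaching)
```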
In addition to Einstein’s equivalence of mass and energy, de Broglie unified the two terms in that any particle exists not only alternatively but even simultaneously as matter and radiation. A particle with a mass m and a speed v was found to be equivalent to a wave with a wavelength λ. With h, Planck’s constant, the relation is as follows:

$$\lambda = \frac{h}{m v} \qquad (1.6)$$
The best-known example is the photon, a particle that represents electromagnetic radiation. The other way around, electrons, formerly known to have particle properties only, were found to show a diffraction pattern which would be only possible for a wave. The particle-wave dualism is an important prerequisite to quantum mechanics.
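To get a feeling for the magnitudes in Eq. 1.6, here is a brief sketch; the particle masses, speeds and constants are standard rounded values chosen for illustration:

```python
H = 6.626e-34           # Planck's constant in J*s
M_ELECTRON = 9.109e-31  # electron rest mass in kg

def de_broglie_wavelength(m, v):
    """Eq. 1.6: matter wavelength of a particle with mass m and speed v."""
    return H / (m * v)

print(de_broglie_wavelength(M_ELECTRON, 3.0e6))  # ~2.4e-10 m: atomic scale, diffraction observable
print(de_broglie_wavelength(1e-3, 10.0))         # ~6.6e-32 m: a 1 g pellet shows no wave behavior
```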
1.2 Special Relativity
Special Relativity (SR) doesn’t play a role in our daily life. Its impact becomes apparent only for speed differences that are considerable fractions of the speed of light, c. I will henceforth occasionally refer to them as “relativistic speeds”. The effects were first measured as late as towards the end of the 19th century and explained by Albert Einstein in 1905.
There are many approaches in literature and in the web to explain Special Relativity. Please refer to the appendix. A very good reference from which I have taken several suggestions is Jason Hinson’s article on Relativity and FTL Travel. You may wish to read his article in parallel.
The whole theory is based on two postulates:
There is no invariant “fabric of space” relative to which an absolute speed could be defined or measured. The terms “moving” or “resting” make only sense if they refer to a certain other frame of reference. The perception of movement is always mutual; the starship pilot who leaves Earth could claim that he is actually resting while the solar system is moving away.
The speed of light, in the vacuum, is the same in all directions and in all frames of reference. This means that nothing is added or subtracted to this speed, as the light source apparently moves.
Frames of Reference
In order to explain Special Relativity, it is necessary to introduce frames of reference. Such a frame of reference is basically a point-of-view, something inherent to an individual observer who sees an event from a certain angle. The concept is in some way similar to the trivial spatial parallax where two or more persons see the same scene from different spatial angles and therefore give different descriptions of it. However, the following considerations are somewhat more abstract. “Seeing” or “observing” will not necessarily mean a sensory perception. On the contrary, the observer is assumed to account for every “classic” measurement error such as signal delay or Doppler shift.
Aside from these effects that can be rather easily handled there is actually one even more severe restriction. The considerations on Special Relativity require inertial frames of reference. According to the definition in the General Relativity chapter, this would be a floating or free falling frame of reference. Any presence of gravitational or acceleration forces would not only spoil the measurement, but even question the validity of the SR. One provision for the following considerations is that all observers should float within their starships in space so that they can be regarded as local inertial frames. Basically, every observer has their own frame of reference; two observers are in the same frame if their relative motion to a third frame is the same, regardless of their distance.
The concept of a four-dimensional space-time has already been briefly explained in the GR chapter. Since in an inertial frame all the Cartesian spatial coordinates x, y and z are equivalent (for instance, there is no “up” and “down” in space), we may replace the three axes with one generic horizontal space (x-) axis. Together with the vertical time (t-) axis we obtain a two-dimensional diagram (Fig. 1.1). It is very convenient to give the distance in light years and the time in years. Irrespective of the current frame of reference, the speed of light always equals c and would be exactly 1 ly per year in our diagram, according to the second postulate. The beam will therefore always form an angle of either 45º or -45º with the x-axis and the t-axis, as indicated by the yellow lines.
A resting observer O draws a perpendicular x-t-diagram. The x-axis is equivalent to t=0 and is therefore a line of simultaneity, meaning that for O everything located on this line is simultaneous. This applies to every parallel line t=const. likewise. The t-axis and every line parallel to it denote x=const. and therefore no movement in this frame of reference. If O is to describe the movement of another observer O* with respect to himself, O*'s time axis t* is sloped, and the reciprocal slope indicates a certain speed v = Δx/Δt. Fig. 1.2 shows O's coordinate system in gray, and O*'s in white. At first glance it seems strange that O*'s x*-axis is sloped in the opposite direction to his t*-axis.
The x*-axis can be explained by assuming two events A and B occurring at t* = 0 at some distance from the two observers, as depicted in Fig. 1.3. O* sees them simultaneously, whereas O sees them at different times. Since the two events are simultaneous in O*'s frame, the line A-0-B defines his x*-axis. A and B might be located anywhere on the 45-degree light paths traced back from the point “O* sees A&B”, so we need further information to actually localize A and B. Since O* is supposed to see them at the same time (and not only date them back to t* = 0), we also know that the two events A and B must have the same distance from the origin of the coordinate system. Now A and B are definite, and connecting them yields the x*-axis. Some simple trigonometry would reveal that the angle between x* and x is actually the same as between t* and t, only the direction is opposite.
The faster the moving observer is, the closer the two axes t* and x* will move to each other. It is obvious that finally, at v = c, they will merge into one single axis, equivalent to the path of a light beam.
The above space-time diagrams don’t have a scale on the x*- and t*-axes so far. The method of determining the t*-scale is illustrated in the left half of Fig. 1.4. When the moving observer O* passes the resting observer O, they both set their clocks to t = 0 and t* = 0, respectively. Some time later, O’s clock shows t₁, and at a yet-unknown instant O*’s clock shows t₁*. The yellow light paths show when O will actually see O*’s clock at t₁*, and vice versa. If O is smart enough, he may calculate the time when this light was emitted (by tracing back the -45º yellow line to O*’s t*-axis). His lines of simultaneity are exactly horizontal (red line), and the reconstructed event “t₁*” will take place at some yet unknown time on his t-axis. The quotation marks distinguish O’s reconstruction “t₁*” from O*’s direct reading t₁*. O* will do the same by reconstructing the event “t₁” (green line). Since O*’s x*-axis and therefore the green line is sloped, it is impossible that the two events t₁ and “t₁” and the two events t₁* and “t₁*” are simultaneous on the respective axes.
If there is no absolute simultaneity, at least one of the two observers would see the other one’s time dilated (slow motion) or compressed (fast motion). Now we have to apply the first postulate, the principle of relativity. There must not be any preferred frame of reference; all observations have to be mutual. This means that either observer would see the other one’s time dilated by the same factor. In our diagram the red and the green line have to cross, and the ratio of “t₁*” to t₁ has to be equal to the ratio of “t₁” to t₁*. Some further calculations yield the following time dilation:

$$t^* = t \sqrt{1 - \frac{v^2}{c^2}} = \frac{t}{\gamma}, \quad \text{with } \gamma = \frac{1}{\sqrt{1 - v^2/c^2}} \qquad (1.7)$$
Note: When drawing the axes to scale in an x-t diagram, one has to account for the inherently longer hypotenuse t* and multiply the above formula with an additional factor cos α to “project” t* onto t.
Note that the time dilation would be the square root of a negative number (imaginary) if we assume an FTL speed v > c. Imaginary numbers are not really forbidden; on the contrary, they play an important role in the description of waves. Anyway, a physical quantity such as time doesn’t make any sense once it gets imaginary. Unless a suited interpretation or a more comprehensive theory is found, considerations end as soon as a time (dilation) that has to be finite and real by definition would become infinitely large, infinitely small or imaginary. The same applies to the length contraction and mass increase. Warp theory circumvents all these problems in a way that no such relativistic effects occur.
The considerations for the scale of the x*-axis are similar to those for the t*-axis. They are illustrated in the bottom portion of Fig. 1.4. Let us assume that O and O* both have identical rulers and hold their left ends when they meet at t = t* = 0. Their right ends are at x = l on the x-axis and at x* = l on the x*-axis, respectively. O and his ruler rest in their frame of reference. At t* = 0 (which is not simultaneous with t = 0 at the right end of the ruler!) O* obtains a still unknown apparent length for O’s ruler (green line). O* and his ruler move along the t*-axis. At t = 0, O sees an apparent length of O*’s ruler (red line). Due to the slope of the t*-axis, it is impossible that the two observers mutually see the same length l for the other ruler. Since the relativity principle would be violated in case one observer saw two equal lengths and the other one two different lengths, the mutual length contraction must be the same. Note that the geometry is virtually the same as for the time dilation, so it’s not astounding that length contraction is determined by the factor γ too:

$$l^* = l \sqrt{1 - \frac{v^2}{c^2}} = \frac{l}{\gamma} \qquad (1.8)$$
Once again, note that when drawing the x*-axis to scale, a correction is necessary: a factor of cos α to the above formula.
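The common dilation factor of Eq. 1.7 and Eq. 1.8 is easy to tabulate. A minimal sketch, with the sample speeds chosen arbitrarily:

```python
import math

def dilation_factor(beta):
    """sqrt(1 - v^2/c^2), the factor appearing in Eq. 1.7 and Eq. 1.8 (beta = v/c)."""
    return math.sqrt(1.0 - beta**2)

for beta in (1e-6, 0.1, 0.6, 0.9, 0.99):
    print(f"v = {beta}c:  t*/t = l*/l = {dilation_factor(beta):.6f}")
# At 0.6c the factor is exactly 0.8, the value used in the twin paradox below.
```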
Addition of Velocities
One of the most popular examples used to illustrate the effects of Special Relativity is the addition of velocities. It is obvious that in the realm of very slow speeds it’s possible to simply add or subtract velocity vectors from each other. For simplicity, let’s assume movements that take place in only one dimension so that the vector is reduced to a plus or minus sign along with the absolute speed figure, like in the space-time diagrams. Imagine a tank that has a speed of v compared to the ground and to an observer standing on the ground (Fig. 1.5). The tank fires a projectile, whose speed is determined as w by the tank driver. The resting observer, on the other hand, will register a projectile speed of u = v + w relative to the ground. So far, so good.
The simple addition (or subtraction, if the speeds have opposite directions) seems very obvious, but it isn’t so if the single speeds are considerable fractions of c. Let’s replace the tank with a starship (which is intentionally a generic vessel, no Trek ship), the projectile with a laser beam, and assume that both observers are floating, one in open space and one in his uniformly moving rocket, at a speed of v compared to the first observer (Fig. 1.6). The rocket pilot will see the laser beam moving away at exactly c. This is still exactly what we expect. However, the observer in open space won’t see the light beam travel at v + c but only at c. Actually, any observer with any velocity (or in any frame of reference) would measure a light speed of exactly c.
Space-time diagrams allow us to derive the addition theorem for relativistic velocities. The resulting speed u is given by:

$$u = \frac{v + w}{1 + \dfrac{v w}{c^2}} \qquad (1.9)$$
For v·w ≪ c² we may neglect the second term in the denominator and obtain u ≈ v + w, as we expect it for small speeds. If v and w get close to c, we get a speed u that is close to, but never equal to or even faster than, c. Finally, if either v or w equals c, u is equal to c as well. There is obviously something special about the speed of light. c always remains constant, no matter where, in which frame, and in which direction it is measured. c is also the absolute upper limit of all velocity additions and can’t be exceeded in any frame of reference.
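A short numerical check of Eq. 1.9, with the speeds given as arbitrary fractions of c, confirms that the relativistic sum never exceeds c:

```python
def add_velocities(v, w, c=1.0):
    """Eq. 1.9: relativistic addition of two collinear velocities (units of c)."""
    return (v + w) / (1.0 + v * w / c**2)

print(add_velocities(0.5, 0.5))   # 0.8, not 1.0
print(add_velocities(0.9, 0.9))   # ~0.9945, still below c
print(add_velocities(0.9, 1.0))   # exactly 1.0: adding c to anything yields c
```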
Mass is a property inherent to any kind of matter. One may distinguish two forms of mass: one that determines the force that has to be applied to accelerate an object (inert mass) and one that determines which force it experiences in a gravitational field (heavy mass). At the latest since the equivalence principle of GR, they have been found to be absolutely identical.
However, mass is apparently not an invariant property. Consider two identical rockets that started together at t = 0 and now move away from the launch platform in opposite directions, each with an absolute speed of w. Each pilot sees the launch platform move away at w, while Eq. 1.9 shows us that the two ships move away from each other at a speed u = 2w/(1 + w²/c²) rather than 2w. The “real” center of mass of the whole system of the two ships would still be at the launch platform; however, each pilot would see a center of mass closer to the other ship than to his own. This may be interpreted as a mass increase of the other ship to m compared to the rest mass m₀ measured for both ships prior to the launch:

$$m = \frac{m_0}{\sqrt{1 - \dfrac{v^2}{c^2}}} = \gamma \, m_0 \qquad (1.10)$$
This function is plotted in Fig. 1.7.
So each object has a rest mass m₀ and an additional mass due to its speed as seen from another frame of reference. This is actually a convenient explanation for the fact that the speed of light cannot be reached. The mass increases more and more as the object approaches c, and so would the required momentum to propel the ship.
Finally, at v = c, we would get an infinite mass, unless the rest mass m₀ is zero. The latter must be the case for photons, which actually move at the speed of light and even define the speed of light. If we assume an FTL speed v > c, the denominator will be the square root of a negative number, and therefore the whole mass will be imaginary. As already stated for the time dilation, there is not yet a suitable theory of how an imaginary mass could be interpreted. Anyway, warp theory circumvents this problem in that the mass neither gets infinite nor imaginary.
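Eq. 1.10 can be evaluated for the two-rocket example above. A minimal sketch, assuming the same w = 0.6c as in the twin paradox below:

```python
import math

def moving_mass(m0, beta):
    """Eq. 1.10: mass observed for an object with rest mass m0 at v = beta*c."""
    return m0 / math.sqrt(1.0 - beta**2)

w = 0.6                      # each rocket's speed relative to the platform
u = 2 * w / (1 + w * w)      # mutual speed per Eq. 1.9: ~0.882c, not 1.2c
print(f"u = {u:.3f}c, m/m0 = {moving_mass(1.0, u):.3f}")
# Each pilot sees the other ship at 0.882c with its mass more than doubled (~2.125).
```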
Let us consider Eq. 1.10 again. It is possible to express it as follows:

$$m = m_0 \left(1 + \frac{1}{2}\frac{v^2}{c^2} + \frac{3}{8}\frac{v^4}{c^4} + \ldots\right) \qquad (1.11)$$

It is obvious that we may neglect the third and the following terms of the series for slow speeds. If we multiply the equation with c², we obtain the familiar Newtonian kinetic energy ½ m₀v² plus a new term m₀c². Obviously already the resting object has a certain energy content E₀ = m₀c². We get a more general expression for the complete energy E contained in an object with a rest mass m₀ and a moving mass m, so we may write (drumrolls!):

$$E = m c^2 \qquad (1.12)$$
Energy E and mass m are equivalent; the only difference between them is the constant factor c². If there is an according energy to each given mass, can the mass be converted to energy? The answer is yes, and Trek fans know that the solution is a matter/antimatter reaction in which the two forms of matter annihilate each other, thereby transforming their whole mass into energy.
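A worked number for Eq. 1.12 (the masses and the TNT conversion factor are illustrative reference values, not from the text):

```python
C = 299_792_458.0  # speed of light in m/s

def rest_energy(m0):
    """Eq. 1.12 applied to a resting mass m0 (in kg), result in joules."""
    return m0 * C**2

E = rest_energy(0.002)    # 1 g of matter annihilating with 1 g of antimatter
print(f"E = {E:.2e} J")                       # ~1.8e14 J
print(f"about {E / 4.184e12:.0f} kt of TNT")  # ~43 kilotons (4.184e12 J per kt)
```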
Let us have a look at Fig. 1.1 again. There are two light beams running through the origin of the diagram, one traveling in positive and one in negative x direction. The slope is 1 ly per year and equals c. If nothing can move faster than light, then every t*-axis of a moving observer and every path of a message sent from one point to another in the diagram must be steeper than these two lines. This defines an area “inside” the two light beams for possible signal paths originating at or going to (x = 0, t = 0). This area is marked dark blue in Fig. 1.8. The black area is “outside” the light cone. The origin of the diagram marks “here” (x = 0) and “now” (t = 0) for the resting observer.
The common-sense definition would be that “future” is any event at t > 0, and “past” is any event at t < 0. Special Relativity shows us a different view of these two terms. Let us consider the four marked events, which could be star explosions (novae), for instance. Event A is below the x-axis and within the light cone. It is possible for the resting observer O to see or to learn about the event in the past, since a -45º light beam would reach the t-axis about one and a half years prior to t = 0. Therefore this event belongs to O’s past. Event B is also below the x-axis, but outside the light cone. The event has no effect on O in the present, since the light would need almost another year to reach him. Strictly speaking, B is not in O’s past. Similar considerations are possible for the term “future”. Since his signal wouldn’t be able to reach the event C, outside the light cone, in time, O is not able to influence it. It’s not in his future. Event D, on the other hand, is inside the light cone and may therefore be caused or influenced by the observer.
What about a moving observer? One important consequence of the considerations in this whole chapter was that two different observers will disagree about where and when a certain event happens. The light cone, on the other hand, remains the same, irrespective of the frame of reference. So even if two observers meeting at (x = 0, t = 0) have different impressions about simultaneity, they will agree that there are certain, either affected (future) or affecting (past), events inside the light cone, and outside events they shouldn’t bother about.
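The light-cone classification can be written as a tiny predicate. A sketch in units of years and light years; the four event coordinates are invented to loosely mirror A to D in Fig. 1.8:

```python
def classify(x, t, c=1.0):
    """Place an event relative to an observer at the origin (x = 0, t = 0)."""
    if abs(x) <= c * abs(t):          # inside or on the light cone
        return "past" if t < 0 else "future"
    return "elsewhere"                # outside: can neither affect nor be affected

for label, x, t in [("A", 1.0, -1.5), ("B", 3.0, -1.0),
                    ("C", 3.0, 1.0), ("D", 1.0, 2.0)]:
    print(label, classify(x, t))      # A: past, B: elsewhere, C: elsewhere, D: future
```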
1.3 Twin Paradox
The considerations about the time dilation in Special Relativity had the result that the terms “moving observer” and “resting observer” are interchangeable as are their space-time diagrams. If there are two observers with a speed relative to each other, either of them will see the other one move. Either observer will see the other one’s clock ticking slower. Special Relativity necessarily requires that the observations are mutual, since it forbids a preferred, absolutely resting frame of reference. Either clock is slower than the other one? How is this possible?
Specifically, the twin paradox is about twins, of whom one travels to the stars at a relativistic speed while the other one stays on Earth. It is obvious that the example assumes twins, since it would be easier to see if one of them actually looks older than the other one when they meet again. Anyway, it should work with unrelated persons as well. What happens when the space traveler returns to Earth? Is he the younger one, or maybe his twin on Earth, or are they equally old?
The following example for the twin paradox deliberately uses the same figures as Jason Hinson’s excellent treatise on Relativity and FTL Travel, to increase the chance of understanding it.
To anticipate the result, the space traveler will be the younger one when he returns. The solution is almost trivial. Time dilation only remains the same, as long as both observers stay in their respective frames of reference. However, if the two observers want to meet again, one of them or both of them have to change their frame(s) of reference. In this case it is the space traveler who has to decelerate, turn around, and accelerate his starship in the other direction. It is important to note that the whole effect can be explained without referring to any General Relativity effects. Time dilation attributed to acceleration or gravity will change the result, but it will not play a role in the following discussion. The twin paradox is no paradox, it can be solved, and this is best done with a space-time diagram.
Part 1: Moving Away from Earth
Fig. 1.9 shows the first part of the travel. O is the “resting” observer who stays on Earth the whole time. Earth is subsequently regarded as an approximated inertial frame. Strictly speaking, O would have to float in Earth’s orbit, according to the definition in General Relativity. Once again, however, it is important to say that the following considerations don’t need General Relativity at all. I only refer to O as staying in an inertial frame so as to exclude any GR influence.
The moving observer O* is supposed to travel at a speed of 0.6c relative to Earth and O. When O* passes by O, they both set their clocks to zero (t = t* = 0). So the origin of their space-time diagrams is the same, and the time dilation will become apparent in the different times t* and t for simultaneous events. As outlined above, t* is sloped, as is x* (see also Fig. 1.3). The measurement of time dilation works as outlined in Fig. 1.4. O’s lines of simultaneity are parallel to his x-axis and perpendicular to his t-axis. He will see that 5 years on his t-axis correspond with only 4 years on the t*-axis (red arrow), because the latter is stretched according to Eq. 1.7. Therefore O*’s clock is ticking slower from O’s point of view. The other way round, O* draws lines of simultaneity parallel to his sloped x*-axis, and he reckons that O’s clock is running slower, 4 years on his t*-axis compared to 3.2 years on the t-axis (green arrow). It is easy to see that the mutual dilation is the same, since 4/5 equals 3.2/4. Who is correct? Answer: Both of them, since they are in different frames of reference, and they stay in these frames. The two observers just see things differently; they wouldn’t have to care whether their perception is “correct” and the other one is actually aging slower, unless they wanted to meet again.
Part 2: Resting in Space
Now let us assume that O* stops his starship when his clock shows t* = 4 years, maybe to examine a phenomenon or to land on a planet. According to Fig. 1.10 he is now resting in space relative to Earth, and his new coordinate system is parallel to the x-t system of O on Earth. O* is now in the same frame of reference as O. And this is exactly the point: O*’s clock still shows 4 years, and he notices that not 3.2 years have elapsed on Earth, as briefly before his stop, but 5 years, and this is exactly what O says too. Two observers in the same frame agree about their clock readings. O* has been in a different frame of reference at 0.6c for 4 years of his time and 5 years of O’s time. This difference becomes a permanent offset when O* enters O’s frame of reference. Paradox solved.
It is obvious that the accumulative dilation effect will become the larger the longer the travel duration is. Note that O’s clock has always been ticking slower in O*’s moving frame of reference. The fact that O’s clock nevertheless suddenly shows a much later time (namely 5 instead of 3.2 years) is solely attributed to the fact that O* is entering a frame of reference in which exactly these 5 years have elapsed.
Once again, it is crucial to note that the process of decelerating would only change the result quantitatively, since there could be no exact kink as O* changes from t* to t**. Deceleration is no sudden process, and the transition from t* to t** should be curved. Moreover, the deceleration itself would be connected with a time dilation according to GR, but the paradox is already solved without taking this into account.
Part 3: Return to Earth
Let us assume that at t* = 4 years, O* suddenly gets homesick and turns around instead of just resting in space. His relative speed is v = −0.6c during his travel back to Earth, the minus sign indicating that he is heading in the negative x direction. It is obvious that this second part of his travel should be symmetrical to the first part at v = +0.6c in Fig. 1.9, the symmetry axis being the clock comparison at t* = 4 years and t = 5 years. This is exactly the moment when O* has covered both half of his way and half of his time.
Fig. 1.11 demonstrates what happens to O*’s clock comparison. Since he is changing his frame of reference from v = +0.6c to v = −0.6c relative to Earth, the speed change and therefore the effect is twice as large as in Fig. 1.10. Assuming that O* doesn’t stop for a clock comparison as he did above, he would see that O’s clock directly jumps from 3.2 years to 6.8 years. Following O*’s travel back to Earth, we see that the end time is t* = 8 years (O*’s clock) and t = 10 years (O’s clock). The traveling twin is actually two years younger.
We could imagine several other scenarios in which O might catch up with the traveling O*, so that O is actually the younger one. Alternatively, O* could stop in space, and O could do the same travel as O*, so that they would be equally old when O reaches O*. The analysis of the twin paradox shows that the simple statement “moving observers age slower” is not sufficient. The statement has to be modified in that “moving observers age slower as seen from a different frame of reference, and they notice it when they enter this frame themselves”.
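The bookkeeping of the twin example can be reproduced in a few lines, using the 0.6c speed and the 4-year legs from the text:

```python
import math

def earth_time(ship_time, beta):
    """Earth time elapsing while ship_time elapses aboard, inverting Eq. 1.7."""
    return ship_time / math.sqrt(1.0 - beta**2)

beta, leg = 0.6, 4.0                       # speed and ship years per leg, as above
print(earth_time(leg, beta))               # 5.0 Earth years per leg
print(2 * leg, "ship years vs.", 2 * earth_time(leg, beta), "Earth years")
# 8 vs. 10 years: the traveling twin returns two years younger.
```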
1.4 Causality Paradox
As already stated further above, two observers in different frames of reference will disagree about the simultaneity of certain events (see Fig. 1.3). The same event might be in one observer’s future, but in another observer’s past when they meet each other. This is not a problem in Special Relativity, since no signal is allowed to travel faster than light. Any event that could be theoretically influenced by one observer, but has already happened for the other one, is outside the light cone depicted in Fig. 1.8. Causality is preserved.
Fig. 1.12 depicts the space-time diagrams of two observers with a speed v relative to each other. Let us assume the usual case that the moving observer O* passes by the resting observer O at t = t* = 0. They agree about the simultaneity of this passing event, but not about any other event at t ≠ 0 or x ≠ 0. Event A is below the x-axis, meaning that it occurs in O’s past, but above the x*-axis and therefore in O*’s future. This doesn’t matter as long as they can send and receive only STL signals. Event A is outside the light cone, and the argumentation would be as follows: A is in O*’s future, but he has no means of influencing it at t* = 0, since his signal couldn’t reach it in time. A is in O’s past, but it doesn’t play a role, since he can’t know of it at t = 0.
What would be different if either FTL travel or FTL signal transfer were possible? In this case we would be allowed to draw signal paths of less than 45º steepness in the space-time diagram. Let us assume that O* is able to send an FTL signal to influence or to cause event A in the first place, just when the two observers pass each other. Note that this signal would travel at v > c in any frame of reference, and that it would travel back in time in O’s frame, since it runs in negative t-direction in O’s orthogonal x-t coordinate system, to an event that is in O’s past. If O* can send an FTL signal to cause the event A, then a second FTL signal can be sent to O to inform him of A as soon as it has just happened. This signal would run at v > c in positive t-direction for O, but in negative t*-direction for O*. So the situation is exactly inverse to the first FTL signal. Now O is able to receive a message from O*’s future.
The paradox occurs when O, knowing about the future, decides to prevent A from happening. Maybe O* is a bad guy, and event A is the death of an unfortunate victim, killed because of his FTL message. O would have enough time to hinder O*, to warn the victim or to take other precautions, since it is still t < 0 when he receives the message, and O* has not yet caused event A.
The sequence of events (in logical rather than chronological order) would be as follows:
At t = t* = 0, the two observers pass each other and O* sends an FTL message that causes A.
A happens in O*’s future (t* > 0) and in O’s past (t < 0).
O learns about event A through another FTL signal, still at t < 0, before he meets O*.
O might be able to prevent A from happening. However, how could O have learned about A, if it actually never happened?
This is obviously another version of the well-known grandfather paradox. Note that these considerations don’t take into account which method of FTL travel or FTL signal transfer is used. Within the realm of Special Relativity, they should apply to any form of FTL travel. Anyway, if FTL travel is feasible, then it is much like time travel. It is not clear how this paradox can be resolved. The basic suggestions are the same as for generic time travel and are outlined in my time travel article.
1.5 Other Obstacles to Interstellar Travel
Rocket propulsion (as a generic term for any drive using accelerated particles) can be described by momentum conservation, resulting in the following simple equation:

$$m \, dv = -w \, dm \qquad (1.13)$$

The left side represents the infinitesimal speed increase (acceleration) dv of the ship with a mass m, the right side is the mass decrease −dm of the ship if particles are thrust out at a speed w. This would result in a constant thrust and therefore in a constant acceleration, at least in the range of ship speeds much slower than c. Eq. 1.13 can be integrated to show the relation between an initial mass m₀, a final mass m₁ and a speed v₁ to be achieved:

$$v_1 = w \, \ln\frac{m_0}{m_1} \qquad (1.14)$$
The remaining mass m₁ at the end of the flight, the payload, is only a fraction of the total mass m₀, the rest is the necessary fuel. The achievable speed v₁ is limited by the speed w of the accelerated particles, i.e. the principle of the drive, and by the fuel-to-payload ratio.
Let us assume a photon drive as the most advanced conventional propulsion technology, so that w would be equal to c, the speed of light. The fuel would be matter and antimatter in the ideal case, yielding an efficiency near 100%, meaning that according to Eq. 1.14 almost the complete mass of the fuel could contribute to propulsion. Eq. 1.13 and Eq. 1.14 would remain valid, with w = c. If relativistic effects are not yet taken into account, the payload could be as much as 60% of the total mass of the starship, if it’s going to be accelerated to 0.5c. However, the mass increase at high sublight speeds as given in Eq. 1.10 spoils the efficiency of any available propulsion system as soon as the speed gets close to c, since the same thrust will effect a smaller acceleration. STL examples will be discussed in section 1.7.
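Eq. 1.14 gives the payload fraction directly. A sketch for the ideal photon drive with w = c, ignoring the relativistic mass increase as the text does; the target speeds are sample values:

```python
import math

def payload_fraction(v1, w):
    """Eq. 1.14 solved for m1/m0: the fraction of the ship that is not fuel."""
    return math.exp(-v1 / w)

for v1 in (0.1, 0.5, 0.9):   # target speed as a fraction of c, exhaust speed w = c
    print(f"v1 = {v1}c -> payload fraction {payload_fraction(v1, 1.0):.1%}")
# 0.5c leaves ~60.7% payload, the figure quoted above. Braking at the destination
# multiplies in the same factor again, squaring the fuel penalty for a full stop.
```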
Acceleration and Deceleration
Eq. 1.13 shows that the achievable speed is limited by the momentum (speed and mass) of the accelerated particles, provided that a conventional rocket drive is used. The requirements of such a drive, e.g. the photon drive outlined above, are that a considerable amount of particles has to be accelerated to a high speed at a satisfactory efficiency.
Even more restrictive, the human body simply couldn’t sustain accelerations of much more than g = 9.81 m/s², which is the acceleration on Earth’s surface. Accelerations of several g are taken into account in aeronautics and astronautics only for short terms, with critical peak values of up to 20g. Unless something like Star Trek’s IDF (inertial damping field) will be invented [Ste91], it is probably the most realistic approach to assume a constant acceleration of g from the traveler’s viewpoint during the whole journey. This would have the convenient side effect that an artificial gravity equal to Earth’s surface gravity would be automatically created.
According to Newton’s first and second postulates it will be necessary to decelerate the starship as it approaches the destination. Thus, the starship needs a “brake”. It wouldn’t be very wise to install a second, equally powerful engine at the front of the starship for this purpose. Moreover, the artificial gravity would act in the opposite direction during the deceleration phase in this case. The alternative solution is simple: Half-way to the destination, the starship would be simply turned around by means of maneuvering thrusters so that it now decelerates at a rate of g; the artificial gravity would remain exactly the same. Actually, complying with the equivalence principle of General Relativity, if the travelers didn’t look out of the windows or at their sensor readings, they wouldn’t even notice that the ship is now decelerating. Only during the turn-around the gravity would change for a brief time, if the main engines are switched off. Fig. 1.13 depicts such a turn-around ship, “1.” is the acceleration phase, “2.” the turn-around, “3.” the deceleration.
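For short interplanetary hops, such a turn-around profile can be estimated with plain Newtonian kinematics, which is a reasonable sketch only while the peak speed stays far below c; the Earth-Mars distance below is a rounded sample value, not a figure from the text:

```python
import math

G_ACC = 9.81   # constant acceleration in m/s^2, equal to Earth surface gravity

def turnaround_trip(distance_m, a=G_ACC):
    """Accelerate over half the distance, then decelerate (Newtonian kinematics)."""
    t_half = math.sqrt(distance_m / a)   # d/2 = a*t^2/2 for each half of the trip
    return 2 * t_half, a * t_half        # total time and speed at turn-around

d_mars = 7.8e10                          # ~0.52 AU, a rounded Earth-Mars distance
t, v_max = turnaround_trip(d_mars)
print(f"~{t / 86400:.1f} days, peak speed ~{v_max / 1000:.0f} km/s")
# ~2.1 days with a peak of ~875 km/s (0.3% of c), so the Newtonian math holds.
```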
In the chapter on classical physics, the Doppler effect has been described as the frequency increase or decrease of a sound wave. The two cases of a moving source and a moving receiver only have to be distinguished in the case of an acoustic signal, because the speed of sound is constant relative to the air and therefore the observer would measure different signal speeds in the two cases. Since the speed of light is constant, there is only one formula for the Doppler shift of electromagnetic radiation, already taking into account SR time dilation:

$$f = f_0 \sqrt{\frac{1 - v/c}{1 + v/c}} \qquad (1.15)$$

Note that Eq. 1.15 covers both cases of frequency increase (v and c in opposite directions, v < 0) and frequency decrease (v and c in the same direction, v > 0). Since the power of the radiation is proportional to its frequency, the forward end will be subject to a higher radiation power and dose than the rear end, assuming isotropic (homogeneous) radiation.
Actually, for STL travel the Doppler shift is not exactly a problem. At 0.8c, for instance, the energy at the bow is three times the average of the isotropic radiation. Visible light would “mutate” to UV radiation, but the intensity would still be far from dangerous. Only if v gets very close to c (or −c, to be precise) could the situation get critical for the space travelers, and an additional shielding would be necessary. On the other hand, it’s useless anyway to get as close to c as possible because of the mass increase. For v = −c, the Doppler shift would be theoretically infinite.
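Eq. 1.15 evaluated numerically, with sample fractions of c and negative v denoting an approaching source, per the sign convention above:

```python
import math

def relativistic_doppler(f0, beta):
    """Eq. 1.15 with beta = v/c; beta < 0 means source and ship approaching."""
    return f0 * math.sqrt((1.0 - beta) / (1.0 + beta))

print(relativistic_doppler(1.0, -0.8))   # 3.0: the threefold bow value quoted above
print(relativistic_doppler(1.0, 0.8))    # ~0.33: matching redshift at the stern
print(relativistic_doppler(1.0, -0.99))  # ~14.1: shielding becomes an issue near c
```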
It is not completely clear (but may become clear in one of the following chapters) how the Doppler shift can be described for an FTL drive in general or warp propulsion in particular. It could be the conventional, non-relativistic Doppler shift that applies to warp drive, since mass increase and time dilation are not valid either. In this case the radiation frequency would simply increase to (1 + v/c) times the original frequency, and this could be a considerable problem for high warp speeds; it would require thick radiation shields and forbid forward windows.
1.6 General Relativity
As the name suggests, General Relativity (GR) is a more comprehensive theory than Special Relativity. Although the concept as a whole has to be explained with massive use of mathematics, the basic principles are quite evident and perhaps easier to understand than those of Special Relativity. General Relativity takes into account the influence of the presence of a mass, of gravitational fields caused by this mass.
The chapter on Special Relativity assumed inertial frames of reference, that is, frames of reference in which there is no acceleration to the object under investigation. The first thought might be that a person standing on Earth’s surface should be in an inertial frame of reference, since he is not accelerated relative to Earth. This idea is wrong, according to GR. Earth’s gravity “spoils” the possible inertial frame. Although this is not exactly what we understand as “acceleration”, there can’t be an inertial frame on Earth’s surface. Actually, we have to extend our definition of an inertial frame.
Principle of Equivalence
Consider the rocket in the left half of Fig. 1.14 whose engines are powered somewhere in open space, far away from any star or planet. According to Newton’s Second Law of Motion, if the engine force is constant, the acceleration will be constant too. The thrust may be adjusted in a way that the acceleration is exactly g = 9.81 m/s², equal to the acceleration in Earth’s gravitational field. The passenger will then be able to stand on the rocket’s bottom as if it were Earth’s surface, since the floor of the rocket exerts exactly the same force on him in both cases. Compare it to the right half of Fig. 1.14; the two situations are equivalent. “Heavy mass” and “inert mass” are actually the same.
One might object that there should still be many differences. Specifically one should expect that a physicist who is locked up in such a starship (without windows), should be able to find out whether it is standing on Earth or accelerating in space. The surprising result of GR is that he will get exactly the same experimental results in both cases. Imagine that the rocket is quite long, and our physicist sends out a laser beam from the rocket’s bottom to its top. In the case of the accelerating rocket we would not be surprised that the frequency of the light beam decreases, since the receiver would virtually move away from the source while the beam is on the way. This effect is the familiar Doppler shift. We wouldn’t expect the light frequency (and therefore its intensity) to decrease inside a stationary rocket too, but that’s exactly what happens in Earth’s gravitational field. The light beam has to “climb up” in the field, thereby losing energy, which becomes apparent in a lower frequency. Obviously, as opposed to common belief so far, light is affected by gravity.
Let us have a look at Fig. 1.15. The left half shows a rocket floating in space, far away from any star or planet. No force acts upon the passenger; he is weightless. It’s not only a balance of forces, but all forces are actually zero. This is an inertial frame, or at least a very good approximation. There can obviously be no perfect inertial frame as long as there is still a certain mass present. Compare this to the depiction of the free-falling starship in the right half of Fig. 1.15. Both the rocket and the passenger are attracted with the same acceleration g. Although there is acceleration, this is an inertial frame too, and it is equivalent to the floating rocket. The point is that in both cases the inside of the rocket is an inertial frame, since ship and passenger don’t exert any force/acceleration on each other. About the same applies to a parabolic flight or a ship in orbit.
We might have found an inertial frame also in the presence of a mass, but we have to keep in mind that this can be only an approximation. Consider a very long ship falling down to Earth. The passenger in the rocket’s top would experience a smaller acceleration than the rocket’s bottom and would have the impression that the bottom is accelerated with respect to himself. Similarly, in a very wide rocket (it may be the same one, only turned by 90º), two people at either end would see that the other one is accelerated towards him. This is because they would fall in slightly different radial directions to the center of mass. None of these observations would be allowed within an inertial frame. Therefore, we are only able to define local inertial frames.
As already mentioned above, there is a time dilation in General Relativity, because light will gain or lose potential energy when it is moving farther away from or closer to a center of mass, respectively. The time dilation depends on the gravitational potential as given in Eq. 1.2 and amounts to:
t* = t · √(1 − 2GM/(r·c²))   (Eq. 1.16)
G is Newton’s gravitational constant, M is the planet’s mass, and r is the distance from the center of mass. Eq. 1.16 can be approximated in the direct vicinity of the planet using Eq. 1.3:
t* ≈ t · (1 − g·h/c²)   (Eq. 1.17)
In both equations t* is the time elapsing on the surface, while t is the time at a height h above the surface, with g being the standard acceleration. The time t* is always shorter than t, so that, theoretically, people living at sea level age more slowly than those in the mountains. The time dilation has been measured on normal plane flights and amounted to 52.8 nanoseconds when the clock on the plane was compared to the reference clock on the surface after 40 hours [Sex87]. 5.7 nanoseconds had to be subtracted from this result, since they were attributed to the time dilation of relativistic movement that was discussed in the chapter about Special Relativity.
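The two competing effects on the plane’s clock can be estimated with a short calculation. The following Python sketch is only an illustration: the altitude and speed are assumed round numbers, not the actual flight profile behind the figures in [Sex87], so the printed values show the orders of magnitude rather than reproduce the quoted 52.8 and 5.7 nanoseconds.

```python
import math

G = 6.674e-11   # gravitational constant (m^3 kg^-1 s^-2)
M = 5.972e24    # Earth's mass (kg)
R = 6.371e6     # Earth's radius (m)
c = 2.998e8     # speed of light (m/s)

def gravitational_gain(h, duration):
    """Extra time (s) gained by a clock at height h relative to a surface clock,
    using the weak-field approximation of Eq. 1.17: dt ~ dt* (1 + g*h/c^2)."""
    g = G * M / R**2
    return g * h / c**2 * duration

def kinematic_loss(v, duration):
    """Time (s) lost by a clock moving at speed v (special relativistic dilation)."""
    gamma = 1.0 / math.sqrt(1.0 - (v / c)**2)
    return (1.0 - 1.0 / gamma) * duration

flight = 40 * 3600   # 40 hours, as in the experiment cited above
h = 10000.0          # assumed cruising altitude (m)
v = 250.0            # assumed cruising speed (m/s)

print(f"gravitational gain: {gravitational_gain(h, flight) * 1e9:.0f} ns")
print(f"kinematic loss:     {kinematic_loss(v, flight) * 1e9:.0f} ns")
```

With these assumed values the gravitational gain comes out near 160 ns and the kinematic loss near 50 ns; a real flight spends part of its time climbing, descending and taxiing, so the measured numbers differ.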
The time dilation in the above paragraph goes along with a length contraction in the vicinity of a mass:
x* = x · √(1 − 2GM/(r·c²))   (Eq. 1.18)
However, length contraction is not a commonly used concept in GR. The equivalent idea of a curved space is usually preferred. Since it is obviously impossible to illustrate the distortions of a four-dimensional space-time, we have to restrict our considerations to two spatial dimensions. Imagine two-dimensional creatures living on a flat plastic foil. The creatures might have developed a plane geometry that allows them to calculate distances and angles. Now someone from the three-dimensional world bends and stretches the plastic foil. The two-dimensional creatures will be very confused, since their whole knowledge of geometry doesn’t seem to be correct anymore. They might apply correction factors in certain areas, compensating for points that are measured as being closer together or farther apart than their calculations indicate. Alternatively, a very smart two-dimensional scientist might come up with the idea that their area is actually not flat but curved. This is essentially what General Relativity says about our four-dimensional space-time.
Fig. 1.16 is limited to the two spatial dimensions x and y. It can be regarded as something like a “cross-section” of the actual spatial distortion. We can imagine that the center of mass is somewhere in the middle “underneath” the x-y plane, where the curvature is most pronounced (a “gravity well”).
Speed of Light
A light beam passing by an area with strong gravity, such as a star, will not be “straight”; at least it will not appear straight as seen from flat space. In the exact mathematical description of curved space, the light beam follows a geodesic. In a more illustrative picture, the light beam is deflected by the star’s mass by a certain angle. The first reliable measurements were performed during a total solar eclipse in 1919 [Sex87]. They showed that the apparent positions of stars whose light was passing the darkened sun were farther away from the sun than the “real” positions measured at night. It is possible to calculate the deflection angle by assuming that light consists of particles and using the Newtonian theory of gravitation; however, this accounts for only half the measured value.
There is another effect involved that can only be explained with General Relativity. As is the case in materials with different refraction indices, light will “avoid” regions in which its apparent speed is reduced. A “detour” may therefore become a “shortcut”. This is what happens in the vicinity of a star, and it is responsible for the other 50% of the light deflection.
Now we can see the relation of time dilation, length contraction, the geometry of space and the speed of light. A light beam would have a definite speed in the “flat” space at some distance from the center of mass. Closer to the center, space itself is “curved”, and this again is equivalent to the effect that everything coming from outside would apparently “shrink”. A ruler with a length r would appear shortened to r*. Since the time t* is shortened with respect to t by the same factor, the ratio r*/t* = r/t remains constant. This is what an observer inside the gravity well would measure, and what the external observer would confirm. On the other hand, the external observer would see that the light inside the gravity well takes a detour (judging from his geometry of flat space) or would pass a smaller effective distance in the same time (regarding the shortened ruler), and the light beam would additionally be slowed down because of the time dilation. Thus, he would measure that the light beam actually needs a longer time to pass by the gravity well than t = r/c. If he is sure about the distance r, then the effective c* inside the gravity well must be smaller:
c* = c · (1 − 2GM/(r·c²))   (Eq. 1.19)
It was confirmed experimentally that a radar signal between Earth and Venus takes a longer time than the distance between the planets indicates if it passes by close to the sun (the Shapiro delay).
Let us have a look at Eq. 1.16 and Eq. 1.18 again. Obviously something strange happens at r = 2GM/c². The time t* becomes zero (and the time dilation infinite), and the length x* is contracted to zero. This is the Schwarzschild radius or event horizon, a quantity that turns up in several other equations of GR. A collapsing star whose radius shrinks below this event horizon will become a black hole. Specifically, the space-time inside the event horizon is curved in a way that every particle will inevitably fall into its center. It is unknown how densely the matter in the center of a black hole is actually compressed. The mere equations indicate that the laws of physics as we know them wouldn’t be valid any more (singularity). On the other hand, it doesn’t matter to the outside world what is going on inside the black hole, since it will never be possible to observe it.
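To get a feeling for the numbers, the Schwarzschild radius can be evaluated with a few lines of code. The following is a minimal Python sketch; the masses of the Sun and Earth are rounded textbook values, not taken from the text above.

```python
G = 6.674e-11   # gravitational constant (m^3 kg^-1 s^-2)
c = 2.998e8     # speed of light (m/s)

def schwarzschild_radius(mass_kg):
    """r = 2GM/c^2, the radius at which Eq. 1.16 and Eq. 1.18 become singular."""
    return 2.0 * G * mass_kg / c**2

print(f"Sun:   {schwarzschild_radius(1.989e30) / 1e3:.2f} km")   # about 2.95 km
print(f"Earth: {schwarzschild_radius(5.972e24) * 1e3:.2f} mm")   # about 8.87 mm
```

A star like the sun would thus have to collapse to a sphere of roughly 3 km radius before it became a black hole.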
The sequence of events as a starship passenger approaches the event horizon is illustrated in Fig. 1.17. The lower left corner depicts what an external observer would see; the upper right corner shows the perception of the person who falls into the black hole. Entering the event horizon, he would at first get a distorted view of the outside world. However, while falling towards the center, the starship and its passenger would be virtually stretched and finally torn apart by the strong gravitational force gradient. An external observer outside the event horizon would perceive the starship and its passenger moving slower the closer they get to the event horizon, corresponding to the time dilation. Eventually, they would virtually seem to stand still exactly on the edge of the event horizon. He would never see them actually enter the black hole. By the way, this is also a reason why a black hole can never appear completely “black”. Depending on its age, the black hole will still emit a certain amount of (red-shifted) radiation, aside from the Hawking radiation generated because of quantum fluctuations at its edge.
1.7 Examples of Relativistic Travel
A Trip to Proxima Centauri
As already mentioned in the introduction, it is essential to overcome the limitations of Special Relativity to allow sci-fi stories to take place in interstellar space. Otherwise the required travel times would exceed a person’s lifespan by far. Several of the equations and examples in this sub-chapter are taken from [Ger89].
Let us assume a starship with a very advanced, yet slower-than-light (STL) drive is to reach Proxima Centauri, about 4 ly away from Earth. This would impose an absolute lower limit of 4 years on the one-way trip. However, considering the drastic increase of mass as the ship approaches c, an enormous amount of energy would be necessary. Moreover, we have to take into account a limited engine power and the limited ability of humans to cope with excessive accelerations. A realistic STL travel to a nearby star system could work with a turn-around starship as shown in Fig. 1.13, which will be assumed in the following.
To describe the acceleration phase as observed from Earth’s frame of reference, the simple relation v = g·t for non-relativistic movements has to be modified as follows:
v(t) = g·t / √(1 + (g·t/c)²)   (Eq. 1.20)
It’s not surprising that the above formula for the effective acceleration is also governed by the factor gamma (γ); still, it requires a separate derivation, which I omit to keep this chapter brief.
The relativistic and (hypothetical) non-relativistic speeds at a constant acceleration of g are plotted over time in Fig. 1.18. It would take 409 days to achieve 0.5c and 2509 days to achieve 0.99c at a constant acceleration of g. It is obviously not worthwhile extending the acceleration phase far beyond 0.5c, where the curves begin to diverge considerably. It would consume six times the fuel to achieve 0.99c instead of 0.5c, considering that the engines would have to work at a constant power output the whole time, while the benefit of covering a greater distance wouldn’t be that significant.
To obtain the covered distance x after a certain time t, the speed v as given in Eq. 1.20 has to be integrated over time:
x(t) = ∫₀ᵗ v(τ) dτ = (c²/g) · (√(1 + (g·t/c)²) − 1)   (Eq. 1.21)
Note that the variable tau instead of t is only used to keep the integration consistent and satisfy mathematicians ;-), since t couldn’t denote both the variable and the constant.
There are two special cases which also become obvious in Fig. 1.18: For small speeds Eq. 1.21 reduces to the Newtonian formula for accelerated movement, x = (g/2)·t². Therefore the two curves for non-relativistic and relativistic distances are almost identical during the first few months (and v ≈ g·t). If the theoretical non-relativistic speed g·t exceeds c (which would be the case after several years of acceleration), the formula may be approximated with the simple linear relation x ≈ c·t, and the corresponding graph is a straight line. This is evident, since we can assume the ship has actually reached a speed close to c, and the effective acceleration is marginal. A distance of one light year would then be bridged in slightly more than a year.
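Eq. 1.20 and Eq. 1.21 are easy to evaluate numerically. The following Python sketch uses rounded constants and a few sample Earth times; it is meant to illustrate the two limiting regimes, not to reproduce Fig. 1.18 exactly.

```python
import math

c = 2.998e8   # speed of light (m/s)
g = 9.81      # acceleration (m/s^2)
YEAR = 365.25 * 24 * 3600
LY = c * YEAR

def v(t):
    """Eq. 1.20: Earth-frame speed after accelerating for Earth time t."""
    return g * t / math.sqrt(1.0 + (g * t / c)**2)

def x(t):
    """Eq. 1.21: Earth-frame distance covered after Earth time t."""
    return (c**2 / g) * (math.sqrt(1.0 + (g * t / c)**2) - 1.0)

for years in (0.5, 1, 2, 5, 10):
    t = years * YEAR
    print(f"t = {years:4} yr: v = {v(t)/c:.3f} c, x = {x(t)/LY:.3f} ly")
```

For small t the output follows g·t and (g/2)·t²; for large t the speed saturates just below c and the distance grows by almost exactly one light year per year.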
If the acceleration is suspended at 0.5c after the aforementioned 409 days, the distance would be 4.84 trillion km, which is 0.51 ly. With a constant speed of 0.5c for another 2100 days the ship would cover another 2.87 ly, so the total distance would be 3.38 ly. On the other hand, after an additional 2100 days of acceleration to 0.99c our ship has bridged 56.5 trillion km, or 5.97 ly. As we could expect, accelerating to twice the speed is not as efficient as in the deep-sublight region, where it would have doubled the covered distance.
A maximum speed of no more than 0.5c seems useful, at least for “close” destinations such as Proxima Centauri. With an acceleration of g, the flight plan could look as follows:
|Acceleration||Speed||Distance||Earth time||Ship time|
|Acceleration @ g||0 to 0.5c||0.51 ly||1.12 years||0.96 years|
|Constant speed||0.5c||2.98 ly||5.96 years||5.16 years|
|Deceleration @ -g||0.5c to 0||0.51 ly||1.12 years||0.96 years|
|Total||-||4 ly||8.20 years||7.08 years|
The table already includes a correction of an “error” in the above considerations, which referred to the time t as it elapses on Earth. The solution of the twin paradox revealed that the space traveler who leaves Earth at a certain speed and stops at the destination changes his frame of reference each time, no matter whether or not we take into account the effects of the acceleration phases. This is why the special relativistic time dilation becomes asymmetric. As the space traveler returns to Earth’s frame of reference, either by returning to Earth or by landing on Proxima Centauri, which can be supposed to be roughly in the same frame of reference as Earth, he will have aged less than his twin on Earth. During his flight his ship time t* elapses slower than the time t in Earth’s frame of reference:
t* = t · √(1 − v²/c²)   (Eq. 1.22)
Eq. 1.22 is valid if the speed v is constant. During his constant-speed period of 5.96 years in Earth’s frame of reference, the space traveler’s clock would proceed by 5.16 years. In case of an acceleration or deceleration we have to switch to infinitesimal time periods dt and dt* and replace the constant velocity v with v(t) as in Eq. 1.20. This modified equation has to be integrated over the Earth time t to obtain t*:
t* = (c/g) · arsinh(g·t/c)   (Eq. 1.23)
The function arsinh is called “area sinus hyperbolicus”, the inverse of the hyperbolic sine.
The space traveler would experience 0.96 years during the acceleration as well as the deceleration. The times are summarized in Tab. 1.1, yielding a total time experienced by the ship passenger of 7.08 years, as opposed to 8.20 years in the “resting” frame of reference on Earth.
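The ship times in Tab. 1.1 can be reproduced from Eq. 1.22 and Eq. 1.23. This Python sketch assumes the Earth times from the table (1.12 years of acceleration, 5.96 years of cruise at 0.5c, 1.12 years of deceleration) and rounded constants.

```python
import math

c = 2.998e8
g = 9.81
YEAR = 365.25 * 24 * 3600

def ship_time_accel(t):
    """Eq. 1.23: proper time aboard after Earth time t at constant acceleration g."""
    return (c / g) * math.asinh(g * t / c)

def ship_time_cruise(t, beta):
    """Eq. 1.22: proper time aboard after Earth time t at constant speed beta*c."""
    return t * math.sqrt(1.0 - beta**2)

legs = [
    ("acceleration", ship_time_accel(1.12 * YEAR)),
    ("cruise",       ship_time_cruise(5.96 * YEAR, 0.5)),
    ("deceleration", ship_time_accel(1.12 * YEAR)),  # symmetric to the acceleration
]
for name, tau in legs:
    print(f"{name:13s}: {tau / YEAR:.2f} ship years")
print(f"total        : {sum(tau for _, tau in legs) / YEAR:.2f} ship years")
```

The output is 0.96, 5.16 and 0.96 ship years for the three legs, 7.08 ship years in total.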
Traveling to the Edge of the Universe

The prospect of slowing down time as the ship approaches c offers fascinating possibilities of space travel even without FTL drive. The question is how far an STL starship could travel within a passenger’s lifetime, assuming a constant acceleration of g all the time. Provided there is a starship with virtually unlimited fuel, the following reasoning can be tested: If the space traveler continued accelerating for many years, his speed, as observed from Earth, would approach c ever more slowly, but never exceed it. This alone wouldn’t take him very far in his lifetime. However, according to Eq. 1.23 time slows down more and more, and this is the decisive effect. We therefore rewrite the above equations with the slower ship time t* instead of the Earth time t. We obtain the ship speed v* and distance x* if we apply Eq. 1.23 to Eq. 1.20 and Eq. 1.21, respectively:
v* = c · tanh(g·t*/c)   (Eq. 1.24)
x* = (c²/g) · (cosh(g·t*/c) − 1)   (Eq. 1.25)
Note that the term tanh(g·t*/c) is always smaller than 1, so that the measured speed always remains slower than c. On the other hand, x* may rise to literally astronomical values. Fig. 1.19 depicts the conjectural travel to the edge of the universe, roughly 10 billion light years away, which could be accomplished in only 25 ship years! The traveler could even return to Earth, which would require another 25 years; but there would probably not be much left of Earth, since the time elapsed in Earth’s frame of reference would sum up to 10 billion years, obviously the same figure as the bridged distance in light years.
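Inverting Eq. 1.25 gives the ship time required for a given distance. The Python sketch below assumes a pure flyby (constant acceleration g all the way, no deceleration) and rounded constants; it yields about 23 ship years for 10 billion light years, in reasonable agreement with the roughly 25 years quoted above.

```python
import math

c = 2.998e8
g = 9.81
YEAR = 365.25 * 24 * 3600
LY = c * YEAR

def ship_years_for(distance_ly):
    """Inverse of Eq. 1.25: proper time (years) to cover x* at constant acceleration g."""
    x = distance_ly * LY
    return (c / g) * math.acosh(1.0 + x * g / c**2) / YEAR

print(f"Proxima Centauri (4 ly):    {ship_years_for(4.0):.1f} ship years")
print(f"edge of universe (1e10 ly): {ship_years_for(1e10):.0f} ship years")
```

The hyperbolic cosine grows exponentially, which is why another factor of a billion in distance costs the traveler only a few more years of ship time.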
Apart from the objection that the fuel for such a journey would hardly be unlimited, the above considerations assume a static universe. The real universe would expand further, and the traveler could never reach its edge, which is probably receding at light speed.
The non-relativistic relation of thrust and speed was discussed in section 1.1.5. If we take into account relativistic effects, we see that at a constant thrust the effective acceleration will continually decrease to zero as the speed approaches c. The simple relation v = (F/m)·t is not valid anymore and has to be replaced with Eq. 1.20. Thus, we have to rewrite the fuel equation as follows:
m₀/m₁ = √((c+v)/(c−v))   (Eq. 1.26)
The two masses m₀ and m₁ still denote non-relativistic rest masses of the ship before and after the acceleration, respectively. Achieving 0.5c would require not much more fuel than in the non-relativistic case; the payload could still be 56% of the total mass, compared to 60%. This would be possible provided that a matter/antimatter power source is available and the power conversion efficiency is 100%. If the aspired speed were 0.99c, the ship would have an unrealistic fuel share of 97%. The flight to the edge of the universe (24 ship years at a constant apparent acceleration of g) would require a fuel mass of 56 billion times the payload, which is beyond all reasonable limits, of course.
If we assume that the ship first accelerates to 0.5c and then decelerates to zero on the flight to Proxima Centauri, we will get a still higher fuel share. Considering that Eq. 1.26 only describes the acceleration phase, the deceleration would have to start at a mass of m₁ and end at a still smaller mass of m₂. Taking into account both phases, we will easily see that the two mass factors have to be multiplied:
m₀/m₂ = (m₀/m₁) · (m₁/m₂) = (c+v)/(c−v)   (Eq. 1.27)
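Assuming Eq. 1.26 and Eq. 1.27 describe an ideal photon drive (100% conversion of matter/antimatter fuel, as stipulated above), the mass ratios can be evaluated directly. The sketch below is an illustration; the small deviations from the percentages quoted from [Ger89] presumably stem from rounding or slightly different assumptions there.

```python
import math

def mass_ratio_accel(beta):
    """Eq. 1.26: m0/m1 = sqrt((c+v)/(c-v)) for acceleration from rest to v = beta*c."""
    return math.sqrt((1.0 + beta) / (1.0 - beta))

def mass_ratio_round(beta):
    """Eq. 1.27: accelerate to beta, then decelerate to rest: the factors multiply."""
    return mass_ratio_accel(beta)**2

for beta in (0.5, 0.99):
    print(f"beta = {beta}: payload, acceleration only:    {1.0 / mass_ratio_accel(beta):.1%}")
    print(f"beta = {beta}: payload, accel + deceleration: {1.0 / mass_ratio_round(beta):.1%}")
```

For 0.5c this gives a payload of about 58% for the acceleration alone and about 33% for the full accelerate-and-stop flight, close to the 56% and 31% given in the text.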
This would mean that the payload without refueling could be only 31% for v = 0.5c. For v = 0.99c the ship would consist of virtually nothing but fuel. Just for fun, flying to the edge of the universe and landing somewhere out there would need a fuel mass of roughly 3·10²¹ tons if the payload is one ton (Earth’s mass: about 6·10²¹ tons). | http://www.st-minutiae.com/articles/warptheory/chapter1.html |
4.0625 | A scuba diver uses his waterproof flashlight to shine a beam of light so that it strikes the surface of the water at an angle of incidence θi. Use Snell’s law to find the angle of incidence that would give an angle of refraction for the refracted ray to be directed right along the surface, and show that θi is the same as the critical angle for total internal reflection.
Snell's Law states
`n_1 sin(\theta_1) = n_2 sin(\theta_2)`
where n is the index of refraction and `\theta` is the angle of incidence or refraction.
For total internal reflection to occur, the beam must go from a higher index to a lower index of refraction.
If the refracted ray is directed along the surface, the angle of refraction is 90°. Let n_1 be the water and n_2 be the air. Since sin(90°) = 1, the equation becomes
`n_1 sin(\theta_i) = n_2`
`:. \theta_i = sin^(-1)(n_2/n_1)`
Note that this only has a solution because `n_1 > n_2`, i.e. the beam goes from the higher to the lower index of refraction.
Looking at this, we see that the two materials alone dictate this maximum angle of incidence.
Now let's use some numbers to see if this is true. n for air is 1.0 and n for water is 1.3, so this angle becomes `sin^(-1)(1.0/1.3) ~~ 50.3°`.
But what if we use a larger angle of incidence? If 50.3° is indeed the critical angle, it should be apparent in the result. Let's use 60°.
Solving for the angle of refraction gives `sin(\theta_r) = (n_1/n_2) sin(60°) = 1.3 × 0.866 ~~ 1.13`, which has no solution, since a sine cannot exceed 1. This means there is no refracted ray. So the maximum angle of refraction is 90°, and the angle of incidence that causes it is the largest possible angle before total internal reflection, i.e. the critical angle.
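The whole argument can be checked numerically. This is a minimal Python sketch assuming the rounded indices used above (1.3 for water, 1.0 for air):

```python
import math

def critical_angle(n_source, n_dest):
    """Angle of incidence (degrees) that yields a 90-degree refracted ray,
    from sin(theta_c) = n_dest / n_source; requires n_source > n_dest."""
    return math.degrees(math.asin(n_dest / n_source))

n_water, n_air = 1.3, 1.0
print(f"critical angle: {critical_angle(n_water, n_air):.1f} degrees")  # about 50.3

# Beyond the critical angle, Snell's law has no real solution:
theta_i = 60.0
s = (n_water / n_air) * math.sin(math.radians(theta_i))
print(f"sin(theta_r) at 60 degrees: {s:.2f} (greater than 1, so no refracted ray)")
```

Any angle of incidence above 50.3° makes the required sine of the refraction angle exceed 1, so the beam is totally internally reflected.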
| http://www.enotes.com/homework-help/scuba-diver-uses-his-waterproof-flashlight-shine-430654 |
4.1875 |
A substance is a sample of matter whose physical and chemical properties are the same throughout the sample because the matter has a constant composition. It is common to see substances changing from one state of matter to another. To differentiate the states of matter, at least at the particle level, we look at the behavior of the particles within the substance. When substances change state, it is because the spacing between the particles of the substances is changing due to a gain or loss of energy. For example, we all have probably observed that water can exist in three forms with different characteristic ways of behaving: the solid state (ice), liquid state (water), and gaseous state (water vapor and steam). Due to water's prevalence, we use it to exemplify and describe the three different states of matter. As ice is heated and the particles of matter that make up water gain energy, eventually the ice melts into water that eventually boils and turns into steam.
Before we examine the states of matter, we will consider some ways samples of matter have been classified by those who have studied how matter behaves.
Evidence suggests that substances are made up of smaller particles that are ordinarily moving around. Some substances can be decomposed using fairly strong heat or electricity into smaller, rather uniform bits of matter called atoms. Atoms are the building blocks of elements. Elements are all those substances that have never been decomposed or separated into any other substances through chemical reactions, by the application of heat, or by attempting to force a direct electric current through the sample. Atoms in turn have been found to be made up of yet smaller units of matter called electrons, protons, and neutrons.
Figure 1: Breakdown of an atom. An illustration of the helium atom, depicting the nucleus (pink) and the electron cloud distribution (black). The nucleus (upper right) in helium-4 is in reality spherically symmetric and closely resembles the electron cloud, although for more complicated nuclei this is not always the case. The black bar is one angstrom (10−10 m or 100 pm). Image used with permission from Wikipedia.
Elements can be arranged into what is called the periodic table of elements based on observed similarities in chemical and physical properties among the different elements. When atoms of two or more elements come together and bond, a compound is formed. The compound formed can later be broken down into the pure substances that originally reacted to form it.
Compounds such as water are composed of smaller units of bonded atoms called molecules. Molecules of a compound are composed of the same proportion of elements as the compound as a whole, since they are the smallest units of that compound. For example, every portion of a sample of water is composed of water molecules. Each water molecule contains two hydrogen atoms and one oxygen atom, and so water as a whole has, in a combined state, twice as many hydrogen atoms as oxygen atoms.
Water can still consist of the same molecules, but its physical properties may change. For instance, water at a temperature below 0° Celsius (32° Fahrenheit) is ice, whereas water above the temperature of 100° C (212° F) is a gas, water vapor. When matter changes from one state to another, temperature and pressure may be involved in the process and the density and other physical properties change. The temperature and pressure exerted on a sample of matter determines the resulting form of that the matter takes, whether solid, liquid, or gas.
Since the properties of compounds and elements are uniform, they are classified as substances. When two or more substances are mixed together, the result is called a mixture. Mixtures can be classified into two main categories: homogeneous and heterogeneous. A homogeneous mixture is one in which the composition of its constituents is uniform throughout. A homogeneous mixture in which one substance, the solute, dissolves completely in another substance, the solvent, may also be called a solution. Usually the solvent is a liquid; however, the solute can be a liquid, solid, or gas. In a homogeneous solution, the particles of solute are spread evenly among the solvent particles, and the extremely small particles of solute cannot be separated from the solvent by filtration through filter paper because the spaces between paper fibers are much greater than the size of the solute and solvent particles. Other examples of homogeneous mixtures include sugar water, which is a mixture of sucrose and water, and gasoline, which is a mixture of dozens of compounds.
A heterogeneous mixture is a nonuniform mixture in which the components separate and the composition varies. Unlike the homogeneous mixture, heterogeneous mixtures can be separated through physical processes. An example of a physical process used is filtration, which can easily separate the sand from the water in a sand-water mixture by using a filter paper. Some more examples of heterogeneous mixtures include salad dressing, rocks, and oil and water mixtures. Heterogeneous mixtures involving at least one fluid are also called suspension mixtures and separate if they are left standing long enough. Consider the idea of mixing oil and water together. Regardless of the amount of time spent shaking the two together, eventually oil and water mixtures will separate with the oil rising to the top of the mixture due to its lower density.
Mixtures that fall between a solution and a heterogeneous mixture are called colloidal suspensions (or just colloids). A mixture is considered colloidal if it typically does not spontaneously separate or settle out as time passes and cannot be completely separated by filtering through a typical filter paper. It turns out that a mixture is colloidal in its behavior if one or more of its dimensions of length, width, or thickness is in the range of 1-1000 nm. A colloidal mixture can also be recognized by shining a beam of light through the mixture. If the mixture is colloidal, the beam of light will be partially scattered by the suspended nanometer sized particles and can be observed by the viewer. This is known as the Tyndall effect. In the case of the Tyndall effect, some of the light is scattered since the wavelengths of light in the visible range, about 400 nm to 700 nm, are encountering suspended colloidal sized particles of about the same size. In contrast, if the beam of light were passed through a solution, the observer standing at right angles to the direction of the beam would see no light being reflected from either the solute or solvent formula units that make up the solution because the particles of solute and solvent are so much smaller than the wavelength of the visible light being directed through the solution.
Figure 2: Flow chart for matter breakdown.
Most substances are naturally found as mixtures, so it is up to the chemist to separate them into their natural components. One way to remove a substance is through the physical property of magnetism. For example, separating a mixture of iron and sulfur can be achieved because pieces of iron are attracted to a magnet placed into the mixture, removing the iron from the remaining sulfur. Filtration is another way to separate mixtures. Through this process, a solid is separated from a liquid by passing through a fine-pored barrier such as filter paper. Sand and water can be separated through this process, in which the sand would be trapped behind the filter paper and the water would strain through. Another example of filtration would be separating coffee grounds from the liquid coffee through filter paper. Distillation is another technique to separate mixtures. By boiling a solution of a non-volatile solid dissolved in a liquid in a flask, vapor from the lower-boiling-point solvent can be driven off from the solution by heat, condensed back into the liquid phase as it comes in contact with cooler surfaces, and collected in another container. Thus a solution such as this may be separated into its original components, with the solvent collected in a separate flask and the solute left behind in the original distillation flask. An example of a solution being separated through distillation would be the distillation of a solution of copper(II) sulfate in water, in which the water would be boiled away and collected and the copper(II) sulfate would remain behind in the distillation flask.
Figure 3: The picture above depicts the equipment needed for a distillation process. The homogeneous mixture starts out in the left flask and is boiled. The vapor then travels down the chilled tube on the right, condenses back into a liquid, and drips into the flask.
Everything that is familiar to us in our daily lives - from the land we walk on, to the water we drink and the air we breathe - is based upon the states of matter called gases, liquids, and solids.
When the temperature of a liquid is lowered to the freezing point of the substance (for water the freezing point is 0 °C), the movement of the particles slows and the spacing between the particles changes until the attractions between the particles lock them into a solid form. At the freezing point, the particles are closely packed together and tend to block each other's motions. The attractions between the particles hold them tightly together, so that the entire ensemble of particles takes on a fixed shape. The volume of the solid is constant, and the shape of a solid is constant unless it is deformed by a sufficiently strong external force. (Solids are thus unlike liquids, whose particles are slightly less attracted to one another because the particles of a liquid are a bit further apart than those in the corresponding solid form of the same substance.) In a solid the particles remain in relatively fixed positions but continue to vibrate. The vibrating particles in a solid do not completely stop moving and can slowly move into any voids that exist within the solid.
Figure 4: The diagram on the left represents a solid whose constituent particles are arranged in an orderly array, a crystal lattice. The image on the right is an ice cube. It has changed from liquid into a solid as a result of releasing energy to its colder environment.
When the temperature of a sample increases above the melting point of a solid, that sample can be found in the liquid state of matter. The particles in the liquid state are much closer together than those in the gaseous state and still have quite an attraction for each other, as is apparent when droplets of liquid form. In this state, the weak attractive forces within the liquid are unable to hold the particles into a mass with a definite shape. Thus a liquid takes on the shape of any particular container that holds it. A liquid has a definite volume but not a definite shape. Compared to the gaseous state there is less freedom of particle movement in the liquid state, since the moving particles frequently collide with one another, and slip and slide over one another as a result of the attractive forces that still exist between the particles and hold the particles of the liquid loosely together. At a given temperature the volume of the liquid is constant, and its volume typically varies only slightly with changes in temperature.
Figure 5: The diagram on the left represents a container partially filled with a liquid. The image on the right is of water being poured out of a glass. This shows that liquid water has no particular shape of its own.
In the gas phase, matter does not have a fixed volume or shape. This occurs because the molecules are widely separated, with the spaces between the particles typically around ten times greater in all three spatial directions, making the gas around 1000 times less dense than the corresponding liquid phase at the same temperature. (A phase is a uniform portion of matter.) As the temperature of a gas is increased, the particles separate further from each other and move at faster speeds. The particles in a gas move in a rather random and independent fashion, bouncing off each other and the walls of the container. Being so far apart from one another, the particles of a real gas only weakly attract each other, so that the gas has no ability to hold a shape of its own. The extremely weak forces acting between the particles in a gas and the greater amount of space for the particles to move in result in almost independent motion of the moving, colliding particles. The particles freely range within any container in which they are put, filling its entire volume, with the net result that the sides of the container determine the shape and volume of the gas. If the container has an opening, the particles heading in the direction of the opening will escape, with the result that the gas as a whole slowly flows out of the container.
Figure 6: The image on the left represents an enclosed container filled with gas. The images are meant to suggest that the gas particles in the container are moving freely and randomly in myriad directions.The image on the right shows condensing water forming from the water vapor that escaped from the container.
Besides the three classical states of matter, there are many other states of matter that share characteristics of one or more of the classical states. Most of these states can be put into three categories according to the temperature ranges in which they occur. At room temperature, the states of matter include liquid crystals, amorphous solids, and magnetically ordered states. At low temperatures the states of matter include superconductors, superfluids, and the Bose-Einstein condensate. At high temperatures the states of matter include plasma and quark-gluon plasma. These other states of matter are not typically studied in general chemistry. | http://chemwiki.ucdavis.edu/Analytical_Chemistry/Qualitative_Analysis/Classification_of_Matter |
In biology, regeneration is the process of renewal, restoration, and growth that makes genomes, cells, organisms, and ecosystems resilient to natural fluctuations or events that cause disturbance or damage. Every species is capable of regeneration, from bacteria to humans. Regeneration can either be complete, where the new tissue is the same as the lost tissue, or incomplete, where the necrotic tissue is replaced by fibrosis (scar tissue). At its most elementary level, regeneration is mediated by the molecular processes of gene regulation. Regeneration in biology, however, mainly refers to the morphogenic processes that characterize the phenotypic plasticity of traits, allowing multi-cellular organisms to repair and maintain the integrity of their physiological and morphological states. Above the genetic level, regeneration is fundamentally regulated by asexual cellular processes. Regeneration is different from reproduction. For example, hydra perform regeneration but reproduce by budding.
The hydra and the planarian flatworm have long served as model organisms for their highly adaptive regenerative capabilities. Once wounded, their cells become activated and start to remodel tissues and organs back to the pre-existing state. The Caudata ("urodeles"; salamanders and newts), an order of tailed amphibians, is possibly the most adept vertebrate group at regeneration, given their capability of regenerating limbs, tails, jaws, eyes and a variety of internal structures. The regeneration of organs is a common and widespread adaptive capability among metazoan creatures. In a related context, some animals are able to reproduce asexually through fragmentation, budding, or fission. A planarian parent, for example, will constrict, split in the middle, and each half generates a new end to form two clones of the original. Echinoderms (such as the starfish), crayfish, many reptiles, and amphibians exhibit remarkable examples of tissue regeneration. The case of autotomy, for example, serves a defensive function, as the animal detaches a limb or tail to avoid capture. After the limb or tail has been autotomized, cells move into action and the tissues regenerate. Limited regeneration of limbs occurs in most fishes and salamanders, and tail regeneration takes place in larval frogs and toads (but not adults). The whole limb of a salamander or a newt will grow again and again after amputation. Among reptiles, chelonians, crocodiles and snakes are unable to regenerate lost parts, but many (though not all) kinds of lizards, geckos and iguanas possess regeneration capacity to a high degree. Usually, it involves dropping a section of the tail and regenerating it as part of a defense mechanism: if a predator catches the tail while the animal is escaping, the tail disconnects.
Ecosystems can be regenerative. Following a disturbance, such as a fire or pest outbreak in a forest, pioneering species will occupy, compete for space, and establish themselves in the newly opened habitat. The new growth of seedlings and community assembly process is known as regeneration in ecology.
Cellular molecular fundamentals
Pattern formation in the morphogenesis of an animal is regulated by genetic induction factors that put cells to work after damage has occurred. Neural cells, for example, express growth-associated proteins, such as GAP-43, tubulin, actin, an array of novel neuropeptides, and cytokines that induce a cellular physiological response to regenerate from the damage. Many of the genes that are involved in the original development of tissues are reinitialized during the regenerative process. Cells in the primordia of zebrafish fins, for example, express four genes from the homeobox msx family during development and regeneration.
"Strategies include the rearrangement of pre-existing tissue, the use of adult somatic stem cells and the dedifferentiation and/or transdifferentiation of cells, and more than one mode can operate in different tissues of the same animal. All these strategies result in the re-establishment of appropriate tissue polarity, structure and form.":873 During the developmental process, genes are activated that serve to modify the properties of cell as they differentiate into different tissues. Development and regeneration involves the coordination and organization of populations cells into a blastema, which is "a mound of stem cells from which regeneration begins." Dedifferentiation of cells means that they lose their tissue-specific characteristics as tissues remodel during the regeneration process. This should not be confused with the transdifferentiation of cells which is when they lose their tissue-specific characteristics during the regeneration process, and then re-differentiate to a different kind of cell.
Arthropods are known to regenerate appendages following loss or autotomy. Regeneration among arthropods is restricted by molting, such that hemimetabolous insects are capable of regeneration only until their final molt, whereas most crustaceans can regenerate throughout their lifetimes. Molting cycles are hormonally regulated in arthropods, although premature molting can be induced by autotomy. The mechanisms underlying appendage regeneration in hemimetabolous insects and crustaceans are highly conserved. During limb regeneration, species in both taxa form a blastema following autotomy, with regeneration of the excised limb occurring during proecdysis. Arachnids, including scorpions, are known to regenerate their venom, although the composition of the regenerated venom differs from the original during regeneration, as the venom volume is replaced before all of the active proteins are replenished.
Many annelids are capable of regeneration. For example, Chaetopterus variopedatus and Branchiomma nigromaculata can regenerate both anterior and posterior body parts after latitudinal bisection. The relationship between somatic and germline stem cell regeneration has been studied at the molecular level in the annelid Capitella teleta. Leeches, members of the Annelid subclass Hirudinid, are incapable of segmental regeneration. Furthermore, their close relatives, the branchiobdellids, are also incapable of segmental regeneration. However, certain individuals, like the lumbriculids, can regenerate from only a few segments. Segmental regeneration in these animals is epimorphic and occurs through blastema formation. Segmental regeneration has been gained and lost during annelid evolution, as seen in oligochaetes, where head regeneration has been lost three separate times.
Along with epimorphosis, some polychaetes like Sabella pavonina experience morphallactic regeneration. Morphallaxis involves the de-differentiation, transformation, and re-differentiation of cells to regenerate tissues. How prominent morphallactic regeneration is in oligochaetes is currently not well understood. Although relatively under-reported, it is possible that morphallaxis is a common mode of inter-segment regeneration in annelids. Following regeneration in L. variegatus, past posterior segments sometimes become anterior in the new body orientation, consistent with morphallaxis.
Following amputation, most annelids are capable of sealing their body via rapid muscular contraction. Constriction of body muscle can lead to infection prevention. In certain species, like Limnodrilus, autolysis can be seen within hours after amputation in the ectoderm and mesoderm. Amputation is also thought to cause a large migration of cells to the injury site, and these form a wound plug.
Tissue regeneration is widespread among echinoderms and has been well documented in starfish (Asteroidea), sea cucumbers (Holothuroidea), and sea urchins (Echinoidea). Appendage regeneration in echinoderms has been studied since at least the 19th century. In addition to appendages, some species can regenerate internal organs and parts of their central nervous system. In response to injury starfish can autotomize damaged appendages. Autotomy is the self-amputation of a body part, usually an appendage. Depending on severity, starfish will then go through a four-week process where the appendage will be regenerated. Some species must retain mouth cells in order to regenerate an appendage, due to the need for energy. The first organs to regenerate, in all species documented to date, are associated with the digestive tract. Thus, most of our knowledge about visceral regeneration in holothurians concerns this system.
Regeneration research using planarians began in the late 1800s and was popularized by T.H. Morgan at the beginning of the 20th century. Alejandro Sánchez Alvarado and Philip Newmark transformed planarians into a model genetic organism at the beginning of the 21st century to study the molecular mechanisms underlying regeneration in these animals. Planarians exhibit an extraordinary ability to regenerate lost body parts. For example, a planarian split lengthwise or crosswise will regenerate into two separate individuals. In one experiment, T.H. Morgan found that a piece corresponding to 1/279th of a planarian, or a fragment with as few as 10,000 cells, can successfully regenerate into a new worm within one to two weeks. After amputation, stump cells form a blastema made of neoblasts, pluripotent cells found throughout the planarian body. New tissue grows from neoblasts, with neoblasts comprising between 20 and 30% of all planarian cells. Recent work has confirmed that neoblasts are totipotent, since one single neoblast can regenerate an entire irradiated animal that has otherwise been rendered incapable of regeneration. In order to prevent starvation, a planarian will use its own cells for energy; this phenomenon is known as de-growth.
Limb regeneration in the axolotl and newt has been extensively studied and researched. Urodele amphibians, such as salamanders and newts, display the highest regenerative ability among tetrapods. As such, they can fully regenerate their limbs, tail, jaws, and retina via epimorphic regeneration, leading to functional replacement with new tissue. Salamander limb regeneration occurs in two main steps. First, the local cells dedifferentiate at the wound site into progenitor cells to form a blastema. Second, the blastemal cells undergo proliferation, patterning, differentiation and growth using genetic mechanisms similar to those deployed during embryonic development. Ultimately, blastemal cells generate all the cells for the new structure.
After amputation, the epidermis migrates to cover the stump in 1–2 hours, forming a structure called the wound epithelium (WE). Epidermal cells continue to migrate over the WE, resulting in a thickened, specialized signaling center called the apical epithelial cap (AEC). Over the next several days there are changes in the underlying stump tissues that result in the formation of a blastema (a mass of dedifferentiated proliferating cells). As the blastema forms, pattern formation genes – such as HoxA and HoxD – are activated as they were when the limb was formed in the embryo. The positional identity of the distal tip of the limb (i.e. the autopod, which is the hand or foot) is formed first in the blastema. Intermediate positional identities between the stump and the distal tip are then filled in through a process called intercalation. Motor neurons, muscle, and blood vessels grow with the regenerated limb and reestablish the connections that were present prior to amputation. The time that this entire process takes varies according to the age of the animal, ranging from about a month to around three months in the adult, after which the limb becomes fully functional. Researchers at the Australian Regenerative Medicine Institute at Monash University have published findings that when macrophages, which eat up material debris, were removed, salamanders lost their ability to regenerate and formed scar tissue instead.
Despite the historically small number of researchers studying limb regeneration, remarkable progress has been made recently in establishing the neotenous amphibian the axolotl (Ambystoma mexicanum) as a model genetic organism. This progress has been facilitated by advances in genomics, bioinformatics, and somatic cell transgenesis in other fields that have created the opportunity to investigate the mechanisms of important biological properties, such as limb regeneration, in the axolotl. The Ambystoma Genetic Stock Center (AGSC) is a self-sustaining breeding colony of the axolotl supported by the National Science Foundation as a Living Stock Collection. Located at the University of Kentucky, the AGSC is dedicated to supplying genetically well-characterized axolotl embryos, larvae, and adults to laboratories throughout the United States and abroad. An NIH-funded NCRR grant has led to the establishment of the Ambystoma EST database and the Salamander Genome Project (SGP), which have produced the first amphibian gene map and several annotated molecular databases, as well as the creation of a research community web portal.
Anurans can only regenerate their limbs during embryonic and larval development. Once the limb skeleton has developed, regeneration does not occur (Xenopus can grow a cartilaginous spike after amputation). Reactive oxygen species (ROS) appear to be required for a regeneration response in anuran larvae. ROS production is essential to activate the Wnt signaling pathway, which has been associated with regeneration in other systems.
Hydra is a genus of freshwater polyp in the phylum Cnidaria whose highly proliferative stem cells give them the ability to regenerate their entire body. Any fragment larger than a few hundred epithelial cells that is isolated from the body has the ability to regenerate into a smaller version of itself. The high proportion of stem cells in the hydra supports its efficient regenerative ability.
Regeneration among hydra occurs as foot regeneration, arising from the basal part of the body, and head regeneration, arising from the apical region. Tissues cut from the gastric region retain polarity, which allows them to distinguish between regenerating a head at the apical end and a foot at the basal end, so that both regions are present in the newly regenerated organism. Head regeneration requires complex reconstruction of the area, while foot regeneration is much simpler, similar to tissue repair. In both foot and head regeneration, however, there are two distinct molecular cascades that occur once the tissue is wounded: an early injury response and a subsequent, signal-driven pathway in the regenerating tissue that leads to differentiation. The early injury response includes epithelial cell stretching for wound closure, the migration of interstitial progenitors towards the wound, cell death, phagocytosis of cell debris, and reconstruction of the extracellular matrix.
Regeneration in hydra has been defined as morphallaxis, the process where regeneration results from remodeling of existing material without cellular proliferation. If a hydra is cut into two pieces, the remaining severed sections form two fully functional and independent hydra, approximately the same size as the two smaller severed sections. This occurs through the exchange and rearrangement of soft tissues without the formation of new material.
Owing to a limited literature on the subject, birds are believed to have very limited regenerative abilities as adults. Some studies on roosters have suggested that birds can adequately regenerate some parts of the limbs and that, depending on the conditions in which regeneration takes place (such as the age of the animal, the inter-relationship of the injured tissue with other muscles, and the type of operation), this can involve complete regeneration of some musculoskeletal structures. Werber and Goldschmidt (1909) found that the goose and duck were capable of regenerating their beaks after partial amputation, and Sidorova (1962) observed liver regeneration via hypertrophy in roosters. Birds are also capable of regenerating the hair cells in their cochlea following noise damage or ototoxic drug damage. Despite this evidence, contemporary studies suggest reparative regeneration in avian species is limited to periods during embryonic development. An array of molecular biology techniques have been successful in manipulating cellular pathways known to contribute to spontaneous regeneration in chick embryos. For instance, removing a portion of the elbow joint in a chick embryo via window excision or slice excision and comparing joint-tissue-specific markers and cartilage markers showed that window excision allowed 10 out of 20 limbs to regenerate, with joint genes expressed as in a developing embryo. In contrast, slice excision did not allow the joint to regenerate, due to the fusion of the skeletal elements as seen by the expression of cartilage markers.
Similar to the physiological regeneration of hair in mammals, birds can regenerate their feathers in order to repair damaged feathers or to attract mates with their plumage. Typically, seasonal changes that are associated with breeding seasons will prompt a hormonal signal for birds to begin regenerating feathers. This has been experimentally induced using thyroid hormones in the Rhode Island Red Fowls.
Mammals are capable of cellular and physiological regeneration, but have generally poor reparative regenerative ability across the group. Examples of physiological regeneration in mammals include epithelial renewal (e.g., skin and intestinal tract), red blood cell replacement, antler regeneration and hair cycling. Male deer lose their antlers annually during the months of January to April and then regrow them; a deer antler is the only appendage of a mammal that can be regrown every year. While reparative regeneration is a rare phenomenon in mammals, it does occur. A well-documented example is regeneration of the digit tip distal to the nail bed. Reparative regeneration has also been observed in rabbits, pikas and African spiny mice. In 2012, researchers discovered that at least two species of African spiny mice, Acomys kempi and Acomys percivali, are capable of completely regenerating autotomically released or otherwise damaged tissue. These species can regrow hair follicles, skin, sweat glands, fur and cartilage.
Adult mammals have limited regenerative capacity compared to most vertebrate embryos/larvae, adult salamanders and fish. But the regeneration therapy approach of Robert O. Becker, using electrical stimulation, has shown promising results for rats and mammals in general.
The MRL mouse is a strain of mouse that exhibits remarkable regenerative abilities for a mammal. By comparing the differential gene expression of scarless healing MRL mice and a poorly-healing C57BL/6 mouse strain, 36 genes have been identified that are good candidates for studying how the healing process differs in MRL mice and other mice. Study of the regenerative process in these animals is aimed at discovering how to duplicate them in humans, such as deactivation of the p21 gene.
The regenerative ability of MRL mice does not, however, protect them against myocardial infarction; heart regeneration in adult mammals (neocardiogenesis) is limited, because heart muscle cells are nearly all terminally differentiated. MRL mice show the same amount of cardiac injury and scar formation as normal mice after a heart attack. However, recent studies provide evidence that this may not always be the case, and that MRL mice can regenerate after heart damage.
The regrowth of lost tissues or organs in the human body is being researched. Some tissues such as skin regrow quite readily; others have been thought to have little or no capacity for regeneration, but ongoing research suggests that there is some hope for a variety of tissues and organs. Human organs that have been regenerated include the bladder, vagina and the penis.
Like all metazoans, humans are capable of physiological regeneration (i.e. the replacement of cells during homeostatic maintenance, which does not necessitate injury). For example, the regeneration of red blood cells via erythropoiesis occurs through the maturation of erythrocytes from hematopoietic stem cells in the bone marrow, their subsequent circulation for around 90 days in the blood stream, and their eventual cell death in the spleen. Another example of physiological regeneration is the sloughing and rebuilding of a functional endometrium during each menstrual cycle in females in response to varying levels of circulating estrogen and progesterone.
However, humans are limited in their capacity for reparative regeneration, which occurs in response to injury. One of the most studied regenerative responses in humans is the hypertrophy of the liver following liver injury. For example, the original mass of the liver is re-established in direct proportion to the amount of liver removed following partial hepatectomy, which indicates that signals from the body regulate liver mass precisely, both positively and negatively, until the desired mass is reached. This response is considered cellular regeneration (a form of compensatory hypertrophy) in which the function and mass of the liver are regenerated through the proliferation of existing mature hepatic cells (mainly hepatocytes), but the exact morphology of the liver is not regained. This process is driven by growth factor and cytokine regulated pathways.
Adult neurogenesis is also a form of cellular regeneration. For example, hippocampal neuron renewal occurs in normal adult humans at an annual turnover rate of 1.75% of neurons. Cardiac myocyte renewal has been found to occur in normal adult humans, and at a higher rate in adults following acute heart injury such as infarction. Even in adult myocardium following infarction, proliferation is only found in around 1% of myocytes around the area of injury, which is not enough to restore function of cardiac muscle. However, this may be an important target for regenerative medicine as it implies that regeneration of cardiomyocytes, and consequently of myocardium, can be induced.
Another example of reparative regeneration in humans is fingertip regeneration, which occurs after phalange amputation distal to the nail bed (especially in children) and rib regeneration, which occurs following osteotomy for scoliosis treatment (though usually regeneration is only partial and may take up to 1 year).
The ability and degree of regeneration in reptiles differs among the various species, but the most notable and well-studied occurrence is tail regeneration in lizards. In addition to lizards, regeneration has been observed in the tails and maxillary bones of crocodiles, and adult neurogenesis has also been noted. Tail regeneration has never been observed in snakes. Lizards possess the highest regenerative capacity as a group. Following autotomous tail loss, epimorphic regeneration of a new tail proceeds through a blastema-mediated process that results in a functionally and morphologically similar structure. Tail regeneration in lizards presents an interesting case study for the evolution of regenerative abilities, as regenerative ability varies greatly among the 3,300 extant species.
Studies have shown that some chondrichthyans can regenerate rhodopsin by cellular regeneration, regenerate organs with the involvement of micro RNAs, replace teeth by physiological regeneration, and repair skin by reparative regeneration. Rhodopsin regeneration has been studied in skates and rays. After complete photo-bleaching, rhodopsin can completely regenerate within 2 hours in the retina. White bamboo sharks can regenerate at least two-thirds of their liver, and this has been linked to three micro RNAs: xtr-miR-125b, fru-miR-204, and hsa-miR-142-3p_R-. In one study two-thirds of the liver was removed, and within 24 hours more than half of the liver had undergone hypertrophy. Leopard sharks routinely replace their teeth every 9–12 days; this is an example of physiological regeneration, which is possible because shark teeth are not attached to a bone but instead develop within a bony cavity. It has been estimated that the average shark loses about 30,000 to 40,000 teeth in a lifetime. Some sharks can regenerate scales and even skin following damage. Within two weeks of skin wounding, mucus is secreted into the wound, and this initiates the healing process. One study showed that the majority of the wounded area was regenerated within 4 months, but the regenerated area also showed a high degree of variability.
https://en.wikipedia.org/wiki/Regeneration_(biology)
This manual contains a procedure for constructing a soil profile and describing it according to morphological features. The technique of soil description is based on soil horizons; the thickness of the soil and of its horizons; coloring; soil moisture; mechanical composition; structure and texture; and new formations and inclusions.
This field study includes an instructional video featuring real students conducting the ecological field techniques in nature. Each video illustrates the primary instructional outcomes and the major steps in accomplishing the task, including reporting the results.
When conducting complex field studies with students, an introduction to the soils of the studied area is an integral part of environmental research as well as of environmental education. Correctly organized soil studies will enable young researchers to understand the origin and evolution (developmental process) of ecosystems within the area, and even to assess the developmental prospects of its vegetation, water regime and fauna.
In field conditions, soils are described and identified according to their appearance, i.e. their morphological features. A soil can be identified by its morphological features in exactly the same way as we identify minerals, plants or animals. That is why it is especially important to know how to describe a soil type, recording its morphological features in the field.
Based on morphological features, it is possible to judge the direction and the extent
of the soil-forming process and, more specifically, to classify soil types. However, due
to the difficulties of soil classification and, moreover, differences in systems of soil
classification in different countries, this manual does not cover identification
(determination) of soil types.
The goal of this lesson is to introduce students to the soils of their region by having them identify soil types according to their morphological features. One of the most important and meaningful parts of this study is not simply the description of different soil horizons, but an attempt to reveal functional parts in the described soil profile.
In this manual we present a general diagram of soil division into functional zones and
corresponding typology of soil horizons. Based on the descriptions according to a unified
procedure presented in the manual, we will be able to identify, classify and compare soils
that are described anywhere on earth.
Procedure for placing a soil profile
It is widely accepted in soil science to dig special pits called soil profiles
(test soil pits) in order to describe soil types, study their morphological features,
define boundaries between different soil types and collect samples for analyses. Any soil
study starts not with the digging of a pit but with selection of pit location.
Selecting the location of a soil profile
In order to choose the right place for the soil profile, the area should be thoroughly examined with respect to relief and vegetation. If the relief is flat, the pit is dug in a central, most representative part. On a slope, pits are dug in its top, middle and bottom parts. When studying a river valley, pits should be dug in the floodplain (alluvial flat), on the terraces and on the watersheds.
When conducting complex environmental research of an area, soil profiles should be placed one in each main type of plant association. For a single lesson, the profile should be placed in the one most typical plant association of the region. The profile should be dug in the most representative place of the studied territory. It should not be placed near roads or ditches, or in any location untypical for...
This was only the first page of the manual; you can see the full version in the Ecological Field Studies 4CD Set:
It is possible to purchase the complete set of 40 seasonal Ecological Field Study Materials (videos in mpg and manuals in pdf format) in an attractive 4 compact disc set. These compact discs are compatible with Mac and PC computers. The teacher background information and manuals can be printed out for easy reference. The videos are suitable for individual student or whole-class instruction. To purchase the complete 4CD set, write a request to the authors (in a free form).
Some of these manuals can also be purchased as applications for Android devices.
Ecological Field Studies Demo Disk:
We also have a free and interesting demonstration disk that explains our ecological field studies approach. The demo disk has short excerpts from all the seasonal field study videos as well as sample text from all the teacher manuals. The disk has an entertaining automatic walk-through which describes the field study approach and explains how field studies meet education standards.
You can also download the Demo Disc from ecosystema.ru/eng/eftm/CD_Demo.iso. This is a hybrid (PC and Mac) virtual CD-ROM image (one 563 MB file, "CD_Demo.iso"). You can write this image to a CD and use it in your computer in the ordinary way. You can also use virtual CD-ROM drive emulator software to play the disk directly from your hard disk.
Other Ecological Field Studies Instructive Materials:
http://www.ecosystema.ru/eng/eftm/manuals/a04.htm
To begin the lesson, you can use the Recursive Rules Overhead to ask students to predict the next term in each sequence and write a recursive rule.
- 5, 15, 45, 135, …
- 4, 12, 36, 108, …
- 1/7, 3/7, 9/7, 27/7, …
- 3, 6, 12, 24, …
- 5, 10, 20, 40, 80, …
- 2.4, 4.8, 9.6, 19.2, …
- 4, -12, 36, -108, …
- -7, 14, -28, 56, …
- 7, -14, 28, -56, …
Recursive Rules Overhead
To trigger students' memories, you may want to remind them that a recursive rule starts with A_n. Assess students by circulating around the room and monitoring student progress. Look for students finding different recursion rules; a short verification sketch follows the answer list below.
- Questions 1-3 can be expressed with any of the following:
  - A_n = 2⋅A_(n-1) + 3⋅A_(n-2)
  - A_n = A_(n-1) + 6⋅A_(n-2)
  - A_n = 3⋅A_(n-1)
- Questions 4-6 all have the same recursive rule, either of the following:
  - A_n = A_(n-1) + 2⋅A_(n-2)
  - A_n = 2⋅A_(n-1)
- Questions 8-9 can be expressed by either of the following:
  - A_n = -A_(n-1) + 2⋅A_(n-2)
  - A_n = -2⋅A_(n-1)
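A quick way to verify that the alternative rules above generate identical terms is to compute them. The sketch below is an illustration added to this write-up, not part of the original lesson; the coefficients come straight from the answers above.

```python
def two_term(a1, a2, c1, c2, n):
    """n terms of A_k = c1*A_(k-1) + c2*A_(k-2), starting from seeds a1, a2."""
    terms = [a1, a2]
    while len(terms) < n:
        terms.append(c1 * terms[-1] + c2 * terms[-2])
    return terms

# Question 1 starts 5, 15; all three candidate rules continue 45, 135, 405, ...
print(two_term(5, 15, 2, 3, 6))  # A_k = 2*A_(k-1) + 3*A_(k-2)
print(two_term(5, 15, 1, 6, 6))  # A_k = A_(k-1) + 6*A_(k-2)
print(two_term(5, 15, 3, 0, 6))  # A_k = 3*A_(k-1), written with c2 = 0
```

Seeds 3 and 6 with the coefficients for questions 4-6, or seeds -7 and 14 with those for questions 8-9, behave the same way.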
Once students have finished, ask them to write down and explain any
patterns they notice or any conjectures they have. Give them a few
minutes for this. Provide prompts as needed, such as, "I noticed that…"
or "One interesting thing I saw was…" Have students share their
findings with the class.
Students may notice that even though the patterns in questions 1-3 start with different values, they have the same recursive rule (similarly, so do questions 4-6, as do questions 8-9). If students do not see this, be sure to bring it to their attention, and ask them if they see this happening anywhere besides questions 1-3. Students may also notice the common ratio for each sequence. Unlike problems from the previous lessons, the common ratio is exact (rather than approximate) in these problems.
Explain to students that in question 1, to get 15, you do 5 ⋅ 3, and to get 45, you do 15 ⋅ 3, which is the same as 5 ⋅ 3 ⋅ 3. Ask them to show two or three ways to get 135. One possibility that should emerge is 5 ⋅ 3 ⋅ 3 ⋅ 3. Show that the terms from question 1 can be written in exponential form as 5⋅3^0, 5⋅3^1, 5⋅3^2, 5⋅3^3, ….
You may need to remind students that multiplying by 3^0 is the same as multiplying by 1. A way to help students grasp this is to ask: How many times did you multiply 5 by 3 to get 45? [2.] Then ask: How many times did you multiply 5 by 3 to get 15? [1.] Finally ask: How many times did you multiply 5 by 3 to get 5? [0, as in 5⋅3^0.]
Ask students to find the exponential form for the other examples. If time is an issue, students can skip questions 3, 6, and 9. Be sure students use parentheses on questions 7 and 8. Again, circulate while students do this to monitor their progress. Answer questions and address any misconceptions they may have. When finished, students should be able to represent an exponential sequence using a recursive rule and using exponents.
Ask students how the recursive rule relates to the terms with exponents.
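To make the relationship concrete, the terms can be computed both ways, by repeated multiplication and by the closed form a ⋅ r^(n-1). This sketch is an added illustration rather than lesson material; it also shows why the parentheses around a negative ratio matter.

```python
def recursive_terms(a, r, n):
    """Terms built from the recursive rule A_1 = a, A_k = r * A_(k-1)."""
    terms = [a]
    for _ in range(n - 1):
        terms.append(r * terms[-1])
    return terms

def closed_form(a, r, n):
    """The same terms from the exponential rule A_k = a * r**(k - 1)."""
    return [a * r ** (k - 1) for k in range(1, n + 1)]

# Question 1: 5, 15, 45, 135, ...
assert recursive_terms(5, 3, 5) == closed_form(5, 3, 5)

# Question 7: 4, -12, 36, -108, ... with ratio (-3). Parentheses matter:
# 4 * (-3)**2 is 36, while 4 * -3**2 evaluates to -36.
assert recursive_terms(4, -3, 5) == closed_form(4, -3, 5)
print(closed_form(4, -3, 5))  # [4, -12, 36, -108, 324]
```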
To check for understanding, present the following to students:
- For the sequence 3, 12, 48, 192, …, ask students to:
- Find the next term.
- Write a recursive rule.
- Express each term with exponents.
- Give the recursive rule A_n = -5⋅A_(n-1), where the first term is 3. Ask students to:
- Write out the first 4 terms in the sequence.
- Express each term using exponents.
- If students are still having trouble, ask them what their questions are and model an additional example like each of the previous ones.
functions. To get students to attempt this without direct instruction,
display the overhead below and ask students to complete the table
Recursive and Exponential Rules Overhead
One possible answer is Number of Ways = 2 ⋅ 6^(n-1).
The calculator gives Number of Ways = 1/3 ⋅ 6^n. Students should recognize that 1/3 is the same as 2/6. Further, students should realize that 2/6 ⋅ 6^n is equivalent to 2⋅6^(n-1).
You may need to convince your students of this with the following example:
2/7 ⋅ 7^5 = 2/7 ⋅ 7 ⋅ 7 ⋅ 7 ⋅ 7 ⋅ 7,
which is the same as 2⋅7⋅7⋅7⋅7, since one of the 7s was divided by 7 to give 1.
Thus, 2/7 ⋅ 7^5 = 2 ⋅ 7^4.
Do this with 2/6 ⋅ 6^5 if students need further convincing of reducing the exponent by 1.
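For anyone who wants a numerical confirmation of this equivalence, the short check below (an illustration added here, not part of the lesson) compares the calculator's form with the students' form:

```python
import math

for n in range(1, 11):
    calculator_form = (1 / 3) * 6 ** n   # the regression form the calculator reports
    student_form = 2 * 6 ** (n - 1)      # the equivalent form students derive
    assert math.isclose(calculator_form, student_form)
print("both forms agree for n = 1 through 10")
```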
Ask students to write rules using exponents to find the 5th and nth terms in questions 1-9 from the Recursive Rules Overhead.
- Benjamin, A. T. and J. J. Quinn. 2003. Proofs That Really Count: The Art of Combinatorial Proof. Dolciani Mathematical Expositions, Volume 27. Mathematical Association of America.
- Wilf, H. S. 2006. Generatingfunctionology. A. K. Peters, Ltd. http://www.math.upenn.edu/~wilf/DownldGF.html
To assess students, give the following (a short script for checking the expected table values appears after this list):
- Complete the table:

| Train Length (n) | 1 | 2 | 3 | 4 | 5 | 6 |
| Number of Ways   |   |   |   |   |   |   |
- Write the number of ways to make a train of length 7 using exponents. Do the same for a train of length 8. Extend your table from #1 to check your answer.
- How would you find the number of ways to make a train of length 100? Of length 2007?
- Write a rule for the number of ways to make a train of length n.
- What rule do you get on the calculator? How does this compare to your answers in #3?
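For teachers who want to check the expected table values quickly, here is a small helper sketch. It is an addition for illustration, not part of the original assessment, and it assumes the same rule as the overhead discussion above (Number of Ways = 2 ⋅ 6^(n-1)); if your class's train problem uses a different rule, substitute its coefficients.

```python
def ways(n, a=2, r=6):
    """Closed form a * r**(n - 1).

    Defaults assume the overhead rule Number of Ways = 2 * 6**(n - 1);
    swap in your own coefficients if the assessment uses a different train set.
    """
    return a * r ** (n - 1)

print([ways(n) for n in range(1, 7)])  # [2, 12, 72, 432, 2592, 15552]
```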
Have students experiment with different starting values for the recursive rule from lesson 1: A_n = A_(n-1) + A_(n-2).
- They should pick any two numbers that they wish to start with and find the next eight terms in the recursion.
- Repeat these several times with numbers of their own choosing.
- Ask them to record any conjectures they have.
When finished, specifically ask them if they notice anything about the common ratio. Does it behave similarly to numbers 1-3, 4-6 and 8-9 from this lesson? What happens to the common ratio when different starting values are used in lesson 2?
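To show concretely what happens to the common ratio in this extension, the successive quotients can be computed. This is an added illustration: for the rule A_n = A_(n-1) + A_(n-2), the quotient of consecutive terms approaches the golden ratio (about 1.618) for essentially any pair of positive starting values.

```python
def ratios(a1, a2, n=10):
    """Successive quotients A_(k+1) / A_k for the rule A_k = A_(k-1) + A_(k-2)."""
    terms = [a1, a2]
    while len(terms) < n:
        terms.append(terms[-1] + terms[-2])
    return [terms[k + 1] / terms[k] for k in range(n - 1)]

print(ratios(1, 1))  # classic Fibonacci seeds
print(ratios(4, 7))  # arbitrary seeds; same limit, about 1.6180339887
```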
Questions for Students
1. Can different sequences have the same recursive rule?
[Yes, but they may have different initial values.]
2. What is a number raised to the 0th power?
[1; any nonzero number raised to the 0th power equals 1.]
3. How can you write a rule using exponents for the nth number in a sequence?
[If the sequence is geometric, then the nth term involves multiplying the first term by some constant (n - 1) times.]
4. How are the recursive rule and the exponential rule related?
[An exponential rule has the form A_n = a ⋅ b^(n-1). When written recursively, the same sequence can be described as A_1 = a and A_n = b ⋅ A_(n-1). For example, the sequence 3, 12, 48, …, can be described by the exponential rule A_n = 3 ⋅ 4^(n-1) and by the recursive rule A_n = 4 ⋅ A_(n-1), where A_1 = 3.]
- What did you learn by preparing for this lesson? Did you have any new mathematical or pedagogical insights?
- What did students know already about recursion and exponential growth?
- How does this lesson relate to exponential growth?
- Did the students exceed your expectations in some areas and not meet them in others?
- What were some of the ways that the students illustrated that they were actively engaged in the learning process?
- Did you find it necessary to make adjustments while teaching the lesson? If so, what adjustments, and were these adjustments effective?

http://illuminations.nctm.org/Lesson.aspx?id=2718
Spinous cells, or prickle cells, are keratin-producing epidermal cells owing their prickly appearance to their numerous intercellular connections. They make up the stratum spinosum (prickly layer) of the epidermis and provide a continuous net-like layer of protection for underlying tissue. They are susceptible to mutations caused by sunlight and can become malignant.
Spinous cells are found in the superficial layers of the skin. They are found in the stratum spinosum (prickly layer, spinosum layer), which lies above the stratum basale (basal layer) and below the stratum granulosum (granular layer) of the epidermis. The spinous cells are arranged several layers thick to form a net-like covering.
Spinous cells originate through mitosis in the basal layer (also known as the germinative layer). They are pushed upward into the stratum spinosum by the continuous formation of new cells in the basal layer. They reach the outmost layer of the skin as flattened dead flaking skin cells we shed daily. The journey from origin to shed takes 25 to 45 days.
Spinous cells serve “as a physical and biological barrier to the environment, preventing penetration by irritants and allergens and loss of water while maintaining internal homeostasis.” They accomplish this in two ways. First, they are keratinocytes (keratin cells) whose primary function is to produce keratin, a strong structural protein. The keratin accumulates within each spinous cell as it moves upward through the epidermal layers, until the cell is almost completely filled with hardening keratin (keratinisation). Second, the cells are bound together across their cytoplasm by keratin filaments that form cell-to-cell connections (desmosomes).
Spinocellular carcinoma (prickle cell carcinoma) is relatively common in people over age 60 with fair skin and a history of long-term sun exposure. It is not as commonly known as other skin cancers because it is less likely to metastasize, but it can be just as deadly if left untreated.
https://en.wikipedia.org/wiki/Prickle_cell
How did the earliest birds take wing? Did they fall from trees and learn to flap their forelimbs to avoid crashing? Or did they run along the ground and pump their "arms" to get aloft?
The answer is buried 150 million years in the past, but a new University of California, Berkeley, study provides a new piece of evidence – birds have an innate ability to maneuver in midair, a talent that could have helped their ancestors learn to fly rather than fall from a perch.
The study looked at how baby birds, in this case chukar partridges, pheasant-like game birds from Eurasia, react when they fall upside down.
The researchers, Dennis Evangelista, now a postdoctoral researcher at the University of North Carolina, Chapel Hill, and Robert Dudley, UC Berkeley professor of integrative biology, found that even ungainly, day-old baby birds successfully use their flapping wings to right themselves when they fall from a nest, a skill that improves with age until they become coordinated and graceful flyers.
“From day one, post-hatching, 25 percent of these birds can basically roll in midair and land on their feet when you drop them,” said Dudley, who also is affiliated with the Smithsonian Tropical Research Institute in Balboa, Panama. “This suggests that even rudimentary wings can serve a very useful aerodynamic purpose.”
Flapping and rolling
The nestlings right themselves by pumping their wings asymmetrically to flip or roll. By nine days after hatching, 100 percent of the birds in the study had developed coordinated or symmetric flapping, plus body pitch control to right themselves.
“These abilities develop very quickly after hatching, and occur before other previously described uses of the wings, such as for weight support during wing-assisted incline running,” said Evangelista, who emphasized that no chukar chicks were injured in the process. “The results highlight the importance of maneuvering and control in development and evolution of flight in birds.”
The researchers’ study appeared Aug. 27 in the online journal Biology Letters, published by the Royal Society.
Dudley has argued for a decade that midair maneuverability preceded the development of flapping flight and allowed the ancestors of today’s birds to effectively use their forelimbs as rudimentary wings. The new study shows that aerial righting using uncoordinated, asymmetric wing flapping is a very early development.
Righting behavior probably evolved because “nobody wants to be upside down, and it’s particularly dangerous if you’re falling in midair,” Dudley said. “But once animals without wings have this innate aerial righting behavior, when wings came along it became easier, quicker and more efficient.”
Dudley noted that some scientists hypothesize that true powered flight originated in the theropod dinosaurs, the ancestors to birds, when they used symmetric wing flapping while running up an incline, a behavior known as wing-assisted incline running, or WAIR. WAIR proponents argue that the wings assist running by providing lift, like the spoiler on a race car, and that the ability to steer or maneuver is absent early in evolution.
Falling, gliding and flying
Such activity has never been regularly observed in nature, however, and Dudley favors the scenario that flight developed in tree-dwelling animals falling and eventually evolving the ability to glide and fly. He has documented many ways that animals in the wild, from lizards and lemurs to ants, use various parts of their bodies to avoid hard landings on the ground. Practically every animal that has been tested is able to turn upright, and a great many, even ones that do not look like fliers, have some ability to steer or maneuver in the air.
Contrary to WAIR, maneuvering is very important at all stages of flight evolution and must have been present early, Evangelista said. Seeing it develop first in very young chicks indirectly supports this idea.
“Symmetric flapping while running is certainly one possible context in which rudimentary wings could have been used, but it kicks in rather late in development relative to asymmetric flapping,” Dudley added. “This experiment illustrates that there is a much broader range of aerodynamic capacity available for animals with these tiny, tiny wings than has been previously realized.”
The researchers also tested the young chicks to see if they flapped their wings while running up an incline. None did.
Three former UC Berkeley undergraduates – Sharlene Cam, Tony Huynh and Igor Krivitskiy – worked with Evangelista and Dudley through the Undergraduate Research Apprentice Program (URAP) and are coauthors on the Biology Letters paper. Evangelista was supported by a National Science Foundation Integrative Graduate Education and Research Traineeship (IGERT #DGE-0903711) grant and by grants from the Berkeley Sigma Xi chapter and the national Sigma Xi.

http://scienceblog.com/74140/flapping-baby-birds-offer-clues-origin-flight/
List of proposed amendments to the United States Constitution
This list contains proposed amendments to the United States Constitution. Article Five of the United States Constitution provides for two methods for proposing and two methods for the ratification of an amendment. An amendment may be proposed by a two-thirds vote of both the House of Representatives and the Senate or by a national convention called by Congress at the request of two-thirds of the state legislatures. The latter procedure has never been used. Upon adoption by the Congress or a national convention, an amendment must then be ratified by three-fourths of the state legislatures or by special state ratifying conventions in three-fourths of the states. The decision of which ratification method will be used for any given amendment is Congress' alone to make. Only for the 21st amendment was the latter procedure invoked and followed.
Collectively, members of the House and Senate typically propose around 200 amendments during each two-year term of Congress. Most, however, never get out of the congressional committees in which they were proposed, and only a fraction of those that do receive enough support to win congressional approval to actually go through the constitutional ratification process.
Only 33 such proposals have been adopted by Congress since 1789 and presented to the states for ratification, and of these, only 27 have been ratified. The framers of the Constitution, recognizing the difference between regular legislation and constitutional matters, intended that it be difficult to change the Constitution; but not so difficult as to render it an inflexible instrument of government, as the amendment mechanism in the Articles of Confederation, which required a unanimous vote of thirteen states for ratification, had proven to be. Therefore, a less stringent process for amending the Constitution was established in Article V.
The framers of the Constitution included a proviso at the end of Article V shielding three clauses in the new frame of government from being amended. They are: Article I, Section 9, Clause 1, concerning the migration and importation of slaves; Article I, Section 9, Clause 4, concerning Congress' taxing power; and, Article I, Section 3, Clause 1, which provides for equal representation of the states in the Senate. These are the only textually entrenched provisions of the Constitution. The shield protecting the first two entrenched clauses was absolute but of limited duration; it was in force only until 1808. The shield protecting the third entrenched clause, though less absolute than that covering the others, is practically permanent; it will be in force until there is unanimous agreement among the states favoring a change.
Beginning in the early 20th century, Congress has usually, but not always, stipulated that an amendment must be ratified by the required number of states within seven years from the date of its submission to the states in order to become part of the Constitution. Congress' authority to set ratification deadline was affirmed by the United States Supreme Court in Coleman v. Miller, 307 U.S. 433 (1939).
Amending the United States Constitution is a two-step process: proposals to amend it must be properly adopted and ratified before becoming operative.
A proposed amendment may be adopted and sent to the states for ratification by either:
- The United States Congress, whenever a two-thirds majority in both the Senate and the House of Representatives deem it necessary;
- A national convention, called by Congress for this purpose, on the application of the legislatures of two thirds (presently 34) of the states.
To become part of the Constitution, an adopted amendment must be ratified by either (as determined by Congress):
- The legislatures of three-fourths (presently 38) of the states, within the stipulated time period—if any;
- State ratifying conventions in three-fourths (presently 38) of the states, within the stipulated time period—if any.
Upon being properly ratified, an amendment becomes an operative addition to the Constitution.
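For readers checking the parenthetical counts above against the current 50 states, the thresholds are simple ceilings of the two fractions; a minimal sketch, not part of the original article:

```python
import math

STATES = 50

# Two thirds of the state legislatures can require a proposing convention.
print(math.ceil(2 / 3 * STATES))   # 34

# Three fourths of the states must ratify an amendment for it to take effect.
print(math.ceil(3 / 4 * STATES))   # 38
```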
Amendments approved by Congress and sent to the states for ratification
Twenty-seven Constitutional Amendments have been ratified since the Constitution was put into operation on March 4, 1789. The first ten amendments were adopted and ratified simultaneously and are known collectively as the Bill of Rights.
- 1st Amendment (December 15, 1791)
- Prohibits the making of any law respecting an establishment of religion, impeding the free exercise of religion, abridging the freedom of speech, infringing on the freedom of the press, interfering with the right to peaceably assemble or prohibiting the petitioning for a governmental redress of grievances.
- 2nd Amendment (December 15, 1791)
- Protects the right to keep and bear arms.
- 3rd Amendment (December 15, 1791)
- Prohibits the forced quartering of soldiers during peacetime.
- 4th Amendment (December 15, 1791)
- Prohibits unreasonable searches and seizures and sets out requirements for search warrants based on probable cause.
- 5th Amendment (December 15, 1791)
- Sets out rules for indictment by grand jury and eminent domain, protects the right to due process, and prohibits self-incrimination and double jeopardy.
- 6th Amendment (December 15, 1791)
- Protects the right to a speedy public trial by jury, to notification of criminal accusations, to confronting the accuser, and to obtaining witnesses and retaining counsel.
- 7th Amendment (December 15, 1791)
- Provides for the right to a jury trial in certain civil cases, according to common law.
- 8th Amendment (December 15, 1791)
- Prohibits excessive fines and excessive bail, as well as cruel and unusual punishment.
- 9th Amendment (December 15, 1791)
- Protects rights not enumerated in the constitution.
- 10th Amendment (December 15, 1791)
- Limits the powers of the federal government to those delegated to it by the Constitution.
- 11th Amendment (February 7, 1795)
- Makes states immune from suits from out-of-state citizens and foreigners not living within the state borders; lays the foundation for sovereign immunity.
- 12th Amendment (June 15, 1804)
- Revises presidential election procedures.
- 13th Amendment (December 6, 1865)
- Abolishes slavery and involuntary servitude, except as punishment for a crime.
- 14th Amendment (July 9, 1868)
- Defines citizenship, contains the Privileges or Immunities Clause, the Due Process Clause and the Equal Protection Clause, and deals with post-Civil War issues.
- 15th Amendment (February 3, 1870)
- Prohibits the denial of the right to vote based on race, color, or previous condition of servitude.
- 16th Amendment (February 3, 1913)
- Permits the federal government to collect income tax.
- 17th Amendment (April 8, 1913)
- Establishes the direct election of United States Senators by popular vote.
- 18th Amendment (January 16, 1919)
- Prohibited the manufacture, sale and transportation of alcoholic beverages within the United States. (Repealed December 5, 1933, via the 21st Amendment.)
- 19th Amendment (August 18, 1920)
- Prohibits the denial of the right to vote based on sex.
- 20th Amendment (January 23, 1933)
- Changes the date on which the terms of the President and Vice President (January 20) and Senators and Representatives (January 3) end and begin.
(Sections 1 and 2 took effect October 15, 1933, as stipulated by Section 5; therefore, new terms of senators and representatives began January 3, 1934 and the new terms of the President and Vice President began on January 20, 1937.)
- 21st Amendment (December 5, 1933)
- Repeals the 18th Amendment and prohibits the transportation or importation into the United States of alcohol for delivery or use in violation of applicable laws.
- 22nd Amendment (February 27, 1951)
- Limits the number of times that a person can be elected president to twice, and the number of times a person who has served more than two years of a term to which someone else was elected to once.
- 23rd Amendment (March 29, 1961)
- Grants the District of Columbia electors in the Electoral College, their number being no more than that of the least populous state.
- 24th Amendment (January 23, 1964)
- Prohibits the revocation of voting rights for the non-payment of a poll tax.
- 25th Amendment (February 10, 1967)
- Addresses succession to the Presidency and establishes procedures both for filling a vacancy in the office of the Vice President, as well as responding to Presidential disabilities.
- 26th Amendment (July 1, 1971)
- Prohibits the denial of the right of US citizens, eighteen years of age or older, to vote on account of age.
- 27th Amendment (May 7, 1992)
- Delays laws affecting Congressional salary from taking effect until after the next election of representatives.
Six amendments adopted by Congress and sent to the states have not been ratified by the required number of states. Four of these, including one of the twelve Bill of Rights amendments, are still technically open and pending. The other two amendments are closed and no longer pending, one by terms set within the Congressional Resolution proposing it (†) and the other by terms set within the body of the amendment (‡).
- Congressional Apportionment Amendment (pending since September 25, 1789; ratified by 11 states)
- Would strictly regulate the size of congressional districts for representation in the House of Representatives.
- Titles of Nobility Amendment (pending since May 1, 1810; ratified by 12 states)
- Would strip citizenship from any United States citizen who accepts a title of nobility from a foreign country.
- Corwin Amendment (pending since March 2, 1861; ratified by 3 states)
- Would make "domestic institutions" (which in 1861 implicitly meant slavery) of the states impervious to the constitutional amendment procedures enshrined within Article Five of the United States Constitution and immune to abolition or interference even by the most compelling Congressional and popular majorities.
- Child Labor Amendment (pending since June 2, 1924; ratified by 28 states)
- Would empower the federal government to regulate child labor.
- Equal Rights Amendment (Ratification period, March 22, 1972 to March 22, 1979/June 30, 1982, amendment failed (†); ratified by 35 states)
- Would have prohibited deprivation of equality of rights (discrimination) by the federal or state governments on account of sex.
- District of Columbia Voting Rights Amendment (Ratification period, August 22, 1978 to August 22, 1985, Amendment failed(‡); ratified by 16 states)
- Would have granted the District of Columbia full representation in the United States Congress as if it were a state, repealed the 23rd Amendment and granted the District full representation in the Electoral College system in addition to full participation in the process by which the Constitution is amended.
Proposed amendments not approved by Congress
Approximately 11,539 measures have been proposed to amend the Constitution from 1789 through January 2, 2013. The following amendments, while introduced by a member of Congress, either died in committee or did not receive a two-thirds vote in both houses of Congress and were therefore not sent to the states for ratification.
Over 1,300 resolutions containing over 1,800 proposals to amend the Constitution were submitted to Congress during the first century after its adoption. Some prominent proposals included:
- Blaine Amendment, proposed in 1875, would have banned public funds from going to religious purposes, in order to prevent Catholics from taking advantage of such funds. Though it failed to pass, many states adopted such provisions.
- Christian Amendment, proposed first in February 1863, would have added acknowledgment of the Christian God in the Preamble to the Constitution. Similar amendments were proposed in 1874, 1896 and 1910 with none passing. The last attempt in 1954 did not come to a vote.
- The Crittenden Compromise, a joint resolution that included six constitutional amendments that would protect slavery. Both the House of Representatives and the Senate rejected it in 1861 and Abraham Lincoln was elected on a platform that opposed the expansion of slavery. The South's reaction to the rejection paved the way for the secession of the Confederate states and the American Civil War.
- Anti-Miscegenation Amendment was proposed by Representative Seaborn Roddenbery, a Democrat from Georgia, in 1912 to forbid interracial marriages nationwide. Similar amendments were proposed by Congressman Andrew King, a Missourian Democrat, in 1871 and by Senator Coleman Blease, a South Carolinian Democrat, in 1928. None was passed by Congress.
- Anti-Polygamy Amendment, proposed by Representative Frederick Gillett, a Massachusetts Republican, on January 24, 1914, and supported by former U.S. Senator from Utah and anti-Mormon activist, Frank J. Cannon, and by the National Reform Association.
- Bricker Amendment, proposed in 1951 by Ohio Senator John W. Bricker, would have limited the federal government's treaty-making power. Opposed by President Dwight Eisenhower, it failed twice to reach the threshold of two-thirds of voting members necessary for passage, the first time by eight votes and the second time by a single vote.
- Death Penalty Abolition Amendment was proposed in 1990, 1992, 1993, and 1995 by Representative Henry González to prohibit the imposition of capital punishment "by any State, Territory, or other jurisdiction within the United States". The amendment was referred to the House Subcommittee on the Constitution, but never made it out of committee.
- Flag Desecration Amendment was first proposed in 1968 to give Congress the power to make acts such as flag burning illegal. During each term of Congress from 1995 to 2005, the proposed amendment was passed by the House of Representatives, but never by the Senate, coming closest during voting on June 27, 2006, with 66 in support and 34 opposed (one vote short).
- Human Life Amendment, first proposed in 1973, would overturn the Roe v. Wade court ruling. A total of 330 proposals using varying texts have been proposed with almost all dying in committee. The only version that reached a formal floor vote, the Hatch-Eagleton Amendment, was rejected by 18 votes in the Senate on June 28, 1983.
- Ludlow Amendment, proposed by Representative Louis Ludlow in 1937, would have required a national referendum to confirm any congressional declaration of war except in cases of invasion, sharply limiting America's ability to become involved in war.
- A balanced budget amendment, in which Congress and the President are forced to balance the budget every year, has been introduced many times.
- School Prayer Amendment proposed on April 9, 2003, to establish that "The people retain the right to pray and to recognize their religious beliefs, heritage, and traditions on public property, including schools."
- God in the Pledge of Allegiance – declaring that it is not an establishment of religion for teachers to lead students in reciting the Pledge of Allegiance (with the words "one Nation under God"), proposed on February 27, 2003, by Oklahoma Representative Frank Lucas.
- Every Vote Counts Amendment – proposed by Congressman Gene Green on September 14, 2004. It would abolish the electoral college.
- Continuity of Government Amendment – after a Senate hearing in 2004 regarding the need for an amendment to ensure continuity of government in the event that many members of Congress become incapacitated, Senator John Cornyn introduced an amendment to allow Congress to temporarily replace members after at least a quarter of either chamber is incapacitated.
- Equal Opportunity to Govern Amendment – proposed by Senator Orrin Hatch. It would allow naturalized citizens with at least twenty years' citizenship to become president.
- Seventeenth Amendment repeal – proposed in 2004 by Georgia Senator Zell Miller. It would reinstate the appointment of Senators by state legislatures as originally required by Article One, Section Three, Clauses One and Three.
- The Federal Marriage Amendment has been introduced in the United States Congress four times: in 2003, 2004, 2005/2006 and 2008 by multiple members of Congress (with support from then-President George W. Bush). It would define marriage and prohibit same-sex marriage, even at the state level.
- Twenty-second Amendment repeal – proposed as early as 1989. Various congressmen, including Rep. Barney Frank, Rep. Steny Hoyer, Rep. José Serrano, Rep. Howard Berman, and Sen. Harry Reid, have introduced legislation, but each resolution died before making it out of its respective committee. The amendment limits the president to two elected terms in office, plus up to two years served while succeeding a sitting President. Most recently, on January 4, 2013, Rep. José Serrano once again introduced H.J.Res. 15 proposing an amendment to repeal the Twenty-second Amendment, as he has done every two years since 1997.
- On January 16, 2009, Senator David Vitter of Louisiana proposed an amendment which would have denied US citizenship to anyone born in the US unless at least one parent were a US citizen, a permanent resident, or in the armed forces.
- On February 25, 2009, Senator Lisa Murkowski, because she believed the District of Columbia House Voting Rights Act of 2009 would be unconstitutional if adopted, proposed a Constitutional amendment that would provide a Representative to the District of Columbia.
- On November 11, 2009, Senator Jim DeMint proposed term limits for the U.S. Congress, under which senators would be limited to two terms, for a total of 12 years, and representatives to three terms, for a total of six years.
- On November 15, 2011, Representative James P. McGovern introduced the People's Rights Amendment, a proposal to limit the Constitution's protections only to the rights of natural persons, and not corporations. This amendment would overturn the United States Supreme Court decision in Citizens United v. Federal Election Commission.
- On December 8, 2011, Senator Bernie Sanders filed the Saving American Democracy Amendment, which would state that corporations are not entitled to the same constitutional rights as people. It would also ban corporate campaign donations to candidates, and give Congress and the states broad authority to regulate spending in elections. This amendment would overturn the United States Supreme Court decision in Citizens United v. Federal Election Commission.
- Rep. Jesse Jackson, Jr. backed the Right to Vote Amendment, a proposal to explicitly guarantee the right to vote for all legal U.S. citizens and empower Congress to protect this right; he introduced a resolution for the amendment in each Congress from the 107th through the 112th, all of which died in committee. On May 13, 2013, Reps. Mark Pocan and Keith Ellison re-introduced the bill.
| https://en.wikipedia.org/wiki/List_of_proposed_amendments_to_the_United_States_Constitution |
4.15625 | The study of cognitive development and intelligence has provided a model for studying other aspects of psychological development. For example, researchers have applied the methods and theories originally devised to study children’s minds to aspects of children’s personality and social development. This exercise requires you to identify the approach to intelligence that seems to have been most influential in research on each topic below. Present reasons for each of your choices, and also indicate which approach appears to have been most widely used in the works on personality and social development. Note that this exercise differs from previous ones in that you do not have to identify a best answer; on the contrary, you should develop arguments for each item.
a) The constructivist approach to education is an example of the individual differences approach. The key idea is that students actively construct their own knowledge and understanding, with guidance from their teacher. Hence, students' interests and the specific ways that they learn have to be identified so that they can be matched with appropriate teaching strategies. | http://www.chegg.com/homework-help/study-cognitive-development-intelligence-provided-model-stud-chapter-10-problem-1mcq-solution-9780073370217-exc |
4.59375 |
Types of Examples: Brief, Extended, and Hypothetical
There are many types of examples that a presenter can use to help an audience better understand a topic and the key points of a presentation. These include specific situations, problems, or stories designed to help illustrate a principle, method, or phenomenon. They are useful because they can make an abstract concept more concrete for an audience by providing a specific case. There are three main types of examples: brief, extended, and hypothetical.
Brief examples are used to further illustrate a point that may not be immediately obvious to all audience members but is not so complex that it requires a lengthier example. Brief examples can be used by the presenter as an aside or on their own. For instance, a presenter may use a brief example in a presentation on politics when explaining the Electoral College. Since many people are familiar with how the Electoral College works, the presenter may simply mention that the Electoral College is based on population and give a brief example of how it is used to determine an election. In this situation it would not be necessary for the presenter to go into a lengthy explanation of the process of the Electoral College, since many people are already familiar with it.
Extended examples are used when a presenter is discussing a more complicated topic that the audience may be unfamiliar with. In an extended example, a speaker may want to use a chart, graph, or other visual aid to help the audience understand it. An instance in which an extended example could be used is a presentation in which a speaker explains how the "time value of money" principle works in finance. Since this is a concept that people unfamiliar with finance may not immediately understand, the speaker will want to use an equation and other visual aids to help the audience understand the principle. An extended example will likely take more time to present than a brief example and will cover a more complex topic.
A hypothetical example is a fictional example that can be used when a speaker is explaining a complicated topic that makes the most sense when it is put into more realistic or relatable terms. For instance, if a presenter is discussing statistical probability, instead of explaining probability in terms of equations, it may make more sense for the presenter to make up a hypothetical example. This could be a story about a girl, Annie, picking 10 pieces of candy from a bag of 50 pieces of candy in which half are blue and half are red and then determining Annie's probability of pulling out 10 total pieces of red candy. A hypothetical example helps the audience to better visualize a topic and relate to the point of the presentation more effectively.
In short: a brief example is used to further illustrate a point that is not obvious but not very complex; a hypothetical example is used when a fictional example can help put the topic into more realistic or relatable terms and help the audience better visualize the topic and relate to the point of the presentation; and an extended example is used when a presenter is discussing a complicated topic with which the audience is unfamiliar. | https://www.boundless.com/communications/textbooks/boundless-communications-textbook/supporting-your-ideas-9/using-examples-46/types-of-examples-brief-extended-and-hypothetical-191-4191/ |
4.09375 | In the Neotropics, some 230 threatened bird species (about half of those that occur in the region) have been extirpated from significant parts of their range. On average, about a third of their total ranges has been lost. Particularly high densities occur in the Atlantic Forests of Brazil owing to extensive destruction of lowland evergreen forest driven by logging, agriculture and urbanisation.
In the Neotropics, some 230 threatened bird species (c.50% of those that occur in the region) have been extirpated from significant parts of their range. On average, c.30% of their total ranges has been lost, varying from <100 km2 (c.40 species) to >20,000 km2 (c.70 species). This analysis is based on a review of areas or sites where species were recorded historically but not recently, or where habitat loss or other threats seem certain to have resulted in their disappearance (analysis of data held in BirdLife’s World Bird Database).
Species in some places have been harder hit than in others. For example, in Cuba, 11 threatened bird species have lost substantial parts of their range across the island owing to widespread habitat destruction and degradation of dry forests, scrub and wetlands. In Argentina, ranges have shrunk significantly for 20 threatened species. These include many that rely on seasonally wet grasslands, which are no longer suitable owing to cattle ranching and drainage. In Brazil, 76 threatened bird species are affected, particularly in the Atlantic Forests, where there has been extensive destruction of lowland evergreen forest owing to logging, agriculture and urbanisation. In some areas 21 threatened birds have disappeared: the highest recorded density of extirpations of threatened birds in the world. Losses of range are inevitably associated with a reduction in the total numbers of individuals and hence an increasing risk of extinction. For example, in the Neotropics there are 17 species on the brink of extinction (categorised as Critically Endangered) that have lost over 99% of their former ranges. Declines in range, and hence in population size, are by no means confined to the Neotropical region. In total, 462 globally threatened bird species (38%) worldwide have been identified as at risk for this reason (BirdLife International 2004).
BirdLife International (2004) Threatened birds of the world 2004. CD-ROM. Cambridge, UK: BirdLife International.
| http://www.birdlife.org/datazone/sowb/casestudy/105 |
4.6875 | Electing the President: What Makes for a Great President?
Teachers' Edition: Grades 3-6 (Lesson Plans)
This topic is supported by a variety of lessons over a two-week period, with each lesson building on the previous one.
Common Core Standards Correlation:
Key Ideas and Details:
RH.6-8.1. RH.6-8.2. RH.6-8.3.
Craft and Structure:
RH.6-8.4. RH.6-8.5. RH.6-8.6.
Integration of Knowledge and Ideas:
Day One and Two: Qualities of Leadership in a President
A. Introduction (Bell Ringer):
1) Create a Semantic Map (a web of ideas) for the phrase Qualities of a Leader and write the class responses on the board.
2) With this Semantic Map, have students identify why many Americans believe George Washington was the best president in history.
3) Move to a general review discussion of the HNN Backgrounder, and explain the roles of the president (chief diplomat, chief executive, and commander-in-chief) and the president's responsibilities to the American people.
Essential Question: What are the qualities that make an effective president of the United States in the 21st century?
B. What does the Constitution assert are the powers of the president?
Have students work in pairs, or groups of three, to read Article 2 of the Constitution, and then create an informational poster on the powers of the President of the United States as understood in the United States Constitution.
Have students present their posters, and then discuss the following:
1) To what extent are the values of leadership from the class list exemplified in the U.S. Constitution?
2) Where do the values from our class list, which we expect the president to uphold, come from?
3) Why doesn’t the Constitution provide these specific values for presidential service?
4) How do we know if a presidential nominee has these qualities?
For a single-day activity, teachers could present their own poster outlining the information from Article 2 and all its sections, or show the video "The Qualities and Qualifications of the President."
Day Three, Four and Five: The Electoral College (OPTIONAL SUPPLEMENT)
A. How does the American system of electing a president compare and contrast with becoming a prime minister in a parliamentary system?
Homework prep: Have students read and/or research how the Electoral College and a parliamentary government election each work, and make a diagram of each for their notes. For the Electoral College, refer to the University of Missouri's "How the Electoral College Works"; for a parliamentary prime minister, refer to the History Learning Site.
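(A quick numeric reference, supplied here rather than taken from the lesson's own materials: the Electoral College has 538 electors in total, 435 matching House seats plus 100 senators plus 3 for the District of Columbia, so a majority of 270 electoral votes wins. Each state casts votes equal to its House delegation plus its two senators, so a state with 8 representatives casts 10 electoral votes.)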
Break the class into eight groups. Evenly assign the following topics to the groups: assess either the strength of the Electoral College, the strength of the parliamentary system, the weakness of the Electoral College, or the weakness of the parliamentary system. Then have the groups share out their responses -- put responses on the board, creating a class chart.
1) Which system would better guarantee that the best candidate for society wins? Explain. (The students must weigh the pros and cons of each system.)
2) Which system best serves the interests of the majority?
B. How have presidential election conflicts between the popular vote and the Electoral College developed?
In class, have students actively read the Introduction to the Electoral College (see attachment) and as a group answer questions 1 – 4
Show the video Election: Presidents and the Constitution
Discuss the conflict between the popular vote and the Electoral College, as exemplified in the "Corrupt Bargain" of 1824 and the presidential elections of 1876 and 2000.
Refer back to question #5 on the handout: Do you favor replacing the Electoral College with direct popular election of the president? Why or why not?
C. Should the Electoral College be abolished or modified?
Students will debate the question. Based on the arguments below, split the class in two and allow students 10 minutes to prepare and formulate their arguments with examples.
This exercise is an informal debate, which argues the question: Should the Electoral College be replaced with direct popular election of the president?
Arguments for the Electoral College
Arguments for Direct Popular Vote
Enrichment: Students can write a letter to their senator or congressperson either supporting direct popular voting or the Electoral College
Day Six, Seven and Eight: Rate the Presidents
A. What are the qualities of past presidents that are still important in considering today’s president?
1) Assign each student two past presidents from Washington to George W. Bush and assign them to research the achievements and failures of the president’s term(s) in office (you may want to split up FDR’s presidency because of its length and breadth of content). Consider their profession and social status in society as a contributing force in shaping their policies.
1. How technologically developed was society?
2. How did people live on a day-to-day basis that would influence political expectations?
3. What were the general manners and mores of that time’s society?
4. How diverse was the political landscape?
2) Students will present a chart that illustrates their assessment, and includes a well-developed paragraph for each president that justifies their assessment of the president’s term of office (EXCELLENT; VERY GOOD; SATISFACTORY; BELOW SATISFACTORY; FAILURE; DIED TOO EARLY TO ASSESS.)
3) After the presentations have finished, discuss the following:
1. Is there a “type” of president we hold as a standard?
2. To what extent are the values we sought in previous presidents the same as today’s needs/values? Explain.
3. How has society changed that the expectations we have of our president changed?
4. Do we need great presidents or can we get by ok with so-so presidents? Does the president need to be the smartest person in the room?
What are the qualities that make an effective president of the United States in the 21st century?
Have students create a poster campaign of the candidate they support for president in the 2012 election.
Have students create a campaign commercial for the Presidential Election that illustrates the qualities of leadership they believe their candidate has.
Enrichment Beyond the Classroom:
| http://historynewsnetwork.org/article/148237 |
4.03125 |
Tribes from the California-Intermountain region lived in Arizona, Nevada, Utah, and California. Some of the tribes that lived there were the Miwok, Pomo, Shoshone, and Paiute. The region contains two different environments: the desert, with extreme heat and cold and little rainfall, and the mountain ranges, with a mild climate and many redwood trees. One of the region's geographical features is the Sierra Nevada mountain range. The Pacific Ocean also provided food and transportation.
The California tribes got their food by hunting and by fishing with harpoons. They ate deer, bear, and fish. They also gathered fruits and vegetables, going from spot to spot picking and gathering, and they stored these foods to have in the winter. Their tools and weapons for hunting were made out of bone and carved wood, and they also made spoons and other utensils out of wood and bone. Finally, they caught birds with nets. That is how the California tribes got their food.
In winter, skins and furs were used for clothes, which were both practical and fashionable. Bark was used to make the men's breechcloth, a leather-like garment worn between the legs and tied at the waist. These are the clothes and materials they used.
Homes of the California region were made out of natural materials such as wood, bark, grass, and earth. To make a thatched home, the builders first dug a shallow hole. The frame of the house was built by sticking long willow branches in the ground around the edge of the hole and leaning them toward the center. Walls were formed by weaving vines and twigs through the willow branches. The willow frame was covered with tule bundles, leaving a small hole at the top to let out smoke. In the winter, mud was plastered on the outside to make the house warmer and more waterproof. The doorway opening was low and faced away from the direction of the wind. That is how the California tribes made their homes.
Villages of the California tribes had a chief, who held responsibility for the whole village. The chief and some of the other men who lived there had more than one wife. The chief was also in charge if one of his people was homeless; he would have to give that person a home to live in. When boys and girls were old enough, their marriages were arranged, and a daughter's husband was picked out by her mother and father.
Native Americans used many different types of tools for hunting and for eating. Tools used to hunt included bows and arrows and harpoons; tools used to eat included spoons and forks, which were made from deer antlers and other animal bones. They also used tools to cut things and to make boats and paddles. Sometimes different tribes would trade tools, beads, baskets, animal hides, and bows and arrows. These are the different tools that the tribes used.
Native Americans from the California region used nature to make art. They carved soft soapstone, and they made women's aprons by weaving bark. They made drums by stretching deerskin over a frame, and they made jewelry out of clam shells. They also had whistles and flutes made out of wood, shells, animal hides, and bones. Some of them traded shells like money, and feathers were used to decorate baskets in the California region.
Thatched homes were made of easy-to-find resources like branches, sticks, and straw, which covered a simple frame.
| http://kohlertribes.wikispaces.com/CA-Intermountain+Region?responseToken=099dfdbc51044036ae2bb30fe46aa9d5f |
4.03125 | citizen, member of a state, native or naturalized, who owes allegiance to the government of the state and is entitled to certain rights. Citizens may be said to enjoy the most privileged form of nationality; they are at the furthest extreme from nonnational residents of a state (see alien), but they may also be distinguished from nationals with subject or servile status (e.g., slaves or serfs; see serf, slavery). (It should be noted, however, that in Great Britain and some other constitutional monarchies a citizen is called a subject.)
The term citizen originally designated the inhabitant of a town. In ancient Greece property owners in the city-states were citizens and, as such, might vote and were subject to taxation and military service. Citizenship in the Roman Empire was at first limited to the residents of the city of Rome and was then extended in A.D. 212 to all free inhabitants of the empire. Under feudalism in Europe the concept of national citizenship disappeared. In time, however, city dwellers purchased the immunity of their cities from feudal dues, thereby achieving a privileged position and a power in local government; these rights were akin to those of citizenship and supplied much of the content of later legislation respecting citizenship.
Modern concepts of national citizenship were first developed during the American and French revolutions. Today each country determines what class of persons are its citizens. In some countries citizenship is determined according to the jus sanguinis [Lat., = law of blood], whereby a legitimate child takes its citizenship from its father and an illegitimate child from its mother. In some countries the jus soli [Lat., = law of the soil] governs, and citizenship is determined by place of birth. These divergent systems may lead to conflicts that often result in dual nationality or loss of citizenship (statelessness).
Although the Constitution of the United States, as written in 1787, uses the word citizen and empowers Congress to enact uniform naturalization laws, the term was not defined until the adoption (1868) of the Fourteenth Amendment, which gave citizenship to former black slaves. As this amendment indicates, the United States generally follows the jus soli. However, Congress has also recognized, subject to strict rules, the principle of jus sanguinis so that children born of American parents abroad are citizens during their minority and can retain this citizenship at majority if they meet certain conditions. In addition, in 2000, Congress granted automatic citizenship to most minor children of American parents who were adopted from abroad; previously such adopted children needed to be naturalized. Until the 1940s the United States recognized several classes of nationals who were not citizens, e.g., Filipinos and Puerto Ricans. Today, however, all U.S. nationals are citizens. The United States recognizes the right of voluntary expatriation, and in 1967 the Supreme Court ruled that citizenship can be lost only if freely and expressly renounced; Congress does not have the power to take it away.
| http://www.infoplease.com/encyclopedia/history/citizen.html |
4.03125 | geometry[circle] - define a circle
circle(c, [A, B, C], n, 'centername'=m)
circle(c, [A, B], n, 'centername'=m)
circle(c, [A, rad], n, 'centername'=m)
circle(c, eqn, n, 'centername'=m)
c - the name of the circle
A, B, C - points
rad - a number which is the radius of the circle
eqn - the algebraic representation of the circle (i.e., a polynomial or an equation)
n - (optional) list of two names representing the names of the horizontal-axis and vertical-axis
'centername'=m - (optional) m is a name of the center of the circle to be created
A circle is the set of all points in a plane that have the same distance from the center.
A circle c can be defined as follows:
from three points A, B, C. The input is a list of three points.
from the two endpoints of a diameter of the circle c. The input is a list of two points.
from the center of c and its radius. The input is a list of two elements where the first element is a point, the second element is a number.
from its internal representation eqn. The input is an equation or a polynomial. If the optional argument n is not given:
if the two environment variables _EnvHorizontalName and _EnvVerticalName are assigned two names, these two names will be used as the names of the horizontal-axis and vertical-axis respectively.
if not, Maple will prompt for input of the names of the axes.
To access the information relating to a circle c, use the following function calls:
form(c) - returns the form of the geometric object (i.e., circle2d if c is a circle).
center(c) - returns the name of the center of c.
radius(c) - returns the radius of c.
Equation(c) - returns the equation that represents the circle c.
HorizontalName(c) - returns the name of the horizontal-axis; or FAIL if the axis is not assigned a name.
VerticalName(c) - returns the name of the vertical-axis; or FAIL if the axis is not assigned a name.
detail(c) - returns a detailed description of the given circle c.
The command with(geometry,circle) allows the use of the abbreviated form of this command.
define circle c1 from three distinct points:
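The Maple input for this example did not survive extraction; the commands below are a reconstruction, not the original page's code. The point coordinates (0, 0), (2, 0) and (1, 2) are assumptions chosen so that the resulting circle matches the detail output shown next:
> with(geometry):
> _EnvHorizontalName := m: _EnvVerticalName := n:  # axis names used in the equation below
> point(A, 0, 0), point(B, 2, 0), point(C, 1, 2):  # assumed coordinates
> circle(c1, [A, B, C], 'centername' = O1);
> detail(c1);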
name of the object: c1
form of the object: circle2d
name of the center: O1
coordinates of the center: [1, 3/4]
radius of the circle: 5/4
equation of the circle: m^2 + n^2 - 2*m - (3/2)*n = 0
define circle c2 (which is the same as c1) from two end points of a diameter
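The original input is missing here as well; any pair of diametrically opposite points of c1 will do. The pair below is an assumption: (0, 0) and (2, 3/2) both lie on c1 and their midpoint is the center (1, 3/4):
> point(E1, 0, 0), point(E2, 2, 3/2):  # assumed endpoints of a diameter of c1
> circle(c2, [E1, E2], 'centername' = O2);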
define circle c3 (which is the same as c1) from the center of the circle and its radius
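A sketch using the center and radius read off from detail(c1) above; the names ctr and O3 are illustrative, not from the original:
> point(ctr, 1, 3/4):  # center of c1
> circle(c3, [ctr, 5/4], 'centername' = O3);  # radius 5/4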
define circle c4 (which is the same as c1) from its algebraic representation
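A sketch using the equation from detail(c1). Because _EnvHorizontalName and _EnvVerticalName were assigned above, the axis-name argument may be omitted:
> circle(c4, m^2 + n^2 - 2*m - (3/2)*n = 0);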
geometry[Apollonius], geometry[area], geometry[AreOrthogonal], geometry[AreTangent], geometry[CircleOfSimilitude], geometry[draw], geometry[FindAngle], geometry[HorizontalName], geometry[intersection], geometry[IsOnCircle], geometry[objects], geometry[Pole], geometry[powerpc], geometry[RadicalAxis], geometry[RadicalCenter], geometry[randpoint], geometry[similitude], geometry[TangentLine], geometry[tangentpc], geometry[VerticalName]
| http://www.maplesoft.com/support/help/Maple/view.aspx?path=geometry/circle |
4.1875 | |This article does not cite any sources. (November 2012)|
Subtelomeres are segments of DNA between telomeric caps and chromatin.
Telomeres are specialized protein–DNA constructs present at the ends of eukaryotic chromosomes, which protect them from degradation and end-to-end chromosomal fusion. Introductory biology courses often describe telomeres as a type of chromosomal aglet. Most vertebrate telomeric DNA consists of long (TTAGGG)n repeats of variable length, often around 3–20 kb. Subtelomeres are segments of DNA between telomeric caps and chromatin. Each chromosome has two subtelomeres immediately adjacent to the long (TTAGGG)n repeats. Subtelomeres are considered to be the most distal (farthest from the centromere) region of unique DNA on a chromosome, and they are unusually dynamic and variable mosaics of multichromosomal blocks of sequence. The subtelomeres of species as diverse as humans, Plasmodium falciparum, Drosophila melanogaster and Saccharomyces cerevisiae are structurally similar in that they are composed of various repeated elements, but the extent of the subtelomeres and the sequence of the elements vary greatly among organisms.

In yeast (S. cerevisiae), subtelomeres are composed of two domains: the proximal and the distal (telomeric) domain. The two domains differ in sequence content and in their extent of homology to other chromosome ends, and they are often separated by a stretch of degenerate telomere repeats (TTAGGG) and an element called 'core X', which is found at all chromosome ends and contains an autonomously replicating sequence (ARS) and an ABF1 binding site. The proximal domain is composed of variable interchromosomal duplications (<1–30 kb); this region can contain genes such as Pho, Mel and Mal, as well as open reading frames (ORFs). The distal domain is composed of 0–4 tandem copies of the highly conserved Y' element, which contains other ORFs; the number and chromosomal distribution of Y′ elements vary among yeast strains. Between the core X and the Y' element, or between the core X and the TTAGGG sequence, there is often a set of four 'subtelomeric repeat elements' (STRs): STR-A, STR-B, STR-C and STR-D, which consist of multiple copies of the vertebrate telomeric motif TTAGGG.

This two-domain structure is remarkably similar to the subtelomere structure of human chromosomes 20p, 4q and 18p, in which proximal and distal subtelomeric domains are separated by a stretch of degenerate TTAGGG repeats; but the picture that emerges from studies of the subtelomeres of other human chromosomes indicates that the two-domain model does not apply universally.
This structure of repeated sequences is responsible for frequent duplication events (which create new genes) and recombination events, which generate combinatorial diversity. These peculiar properties are mechanisms that generate diversity at the scale of the individual and therefore contribute to the adaptation of organisms to their environments. For example, in Plasmodium falciparum, during the interphase of the erythrocytic stage, the chromosome extremities are gathered at the periphery of the cell nucleus, where they undergo frequent deletion and telomere position effect (TPE). These events, together with expansion and deletion of subtelomeric repeats, give rise to chromosome size polymorphisms, so subtelomeres are under both epigenetic and genetic control. Thanks to these properties of subtelomeres, Plasmodium falciparum evades host immunity by varying the antigenic and adhesive character of infected erythrocytes (see Subtelomeric transcripts).
Variation of subtelomeres
Variation in subtelomeric regions is mostly variation in the STRs, due to recombination of large-scale stretches delimited by (TTAGGG)n-like repeated sequences, which play an important role in recombination and transcription. Haplotype (DNA sequence variant) and length differences are therefore observed between individuals.
Subtelomeric transcripts are pseudogenes (transcribed genes producing RNA sequences that are not translated into protein) and gene families. In humans, they code for olfactory receptors, immunoglobulin heavy chains, and zinc-finger proteins. In other species, parasites such as Plasmodium and Trypanosoma brucei have developed sophisticated evasion mechanisms to adapt to the hostile environment posed by the host, such as exposing variable surface antigens to escape the immune system. Genes coding for surface antigens in these organisms are located in subtelomeric regions, and it has been speculated that this preferred location facilitates gene switching and expression, and the generation of new variants. For example, the genes belonging to the var family in Plasmodium falciparum (the agent of malaria) code for PfEMP1 (Plasmodium falciparum erythrocyte membrane protein 1), a major virulence factor of erythrocytic stages; var genes are mostly localized in subtelomeric regions. Antigenic variation is orchestrated by epigenetic factors, including monoallelic var transcription at separate spatial domains at the nuclear periphery (nuclear pore), differential histone marks on otherwise identical var genes, and var silencing mediated by telomeric heterochromatin. Other factors, such as non-coding RNA produced in subtelomeric regions adjacent to or within var genes, may contribute to antigenic variation as well. In Trypanosoma brucei (the agent of sleeping sickness), variable surface glycoprotein (VSG) antigenic variation is a relevant mechanism used by the parasite to evade the host immune system. VSG expression is exclusively subtelomeric and occurs either by in situ activation of a silent VSG gene or by a DNA rearrangement that inserts an internal silent copy of a VSG gene into an active telomeric expression site. In contrast to Plasmodium falciparum, in Trypanosoma brucei antigenic variation is orchestrated by both epigenetic and genetic factors. In Pneumocystis jirovecii, the major surface glycoprotein (MSG) gene family causes antigenic variation. MSG genes are arranged like interchangeable boxes at chromosome ends, and only the MSG gene at the unique locus UCS (upstream conserved sequence) is transcribed. Different MSG genes can occupy the expression site (UCS), suggesting that recombination can take a gene from a pool of silent donors and install it at the expression site, possibly via crossovers, activating transcription of a new MSG gene and changing the surface antigen of Pneumocystis jirovecii. Switching at the expression site is probably facilitated by the subtelomeric locations of expressed and silent MSG genes. A second subtelomeric gene family, MSR, is not strictly regulated at the transcriptional level, but may contribute to phenotypic diversity. Antigenic variation in P. jirovecii is thus dominated by genetic regulation.
Loss of telomeric DNA through repeated cycles of cell division is associated with senescence or somatic cell aging. In contrast, germ line and cancer cells possess a telomerase enzyme which prevents telomere degradation and maintains telomere integrity, causing these types of cells to be very long-lived.
In humans, the role of subtelomere disorders is demonstrated in facioscapulohumeral muscular dystrophy (FSHD), Alzheimer's disease, and certain syndromic diseases involving malformation and mental retardation. For example, FSHD is associated with a deletion in the subtelomeric region of chromosome 4q. A series of 10 to >100 copies of a 3.3-kb repeat is located in the normal 4q subtelomere, but FSHD patients have only 1–10 repeat units. This deletion is thought to cause disease owing to a position effect that influences the transcription of nearby genes, rather than through the loss of the repeat array itself.
Analysis of subtelomeres
Subtelomere analysis, especially the sequencing and profiling of patient subtelomeres, is difficult because of the repetitive sequences, the length of the stretches involved, and the lack of databases on the topic.
| https://en.wikipedia.org/wiki/Subtelomeric |
4.15625 | More than four-fifths of the single points of light we observe in the night sky are actually two or more stars orbiting together. The most common of the multiple star systems are binary stars, systems of only two stars together. These pairs come in an array of configurations that help scientists to classify stars, and could have impacts on the development of life. Some people even think that the sun is part of a binary system.
Binary stars are two stars orbiting a common center of mass. The brighter star is officially classified as the primary star, while the dimmer of the two is the secondary (classified as A and B respectively). In cases where the stars are of equal brightness, the designation given by the discoverer is respected.
Binary pairs can be classified based on their orbit. Wide binaries are stars that have orbits which keep them spread apart from one another. These stars evolve separately, with very little impact from their companions. Such systems may once have contained a third star that booted the distant companion outward and was itself eventually ejected.
Close binaries, on the other hand, evolve nearby, able to transfer their mass from one to the other. The primaries of some close binaries consume the material from their companion, sometimes exerting a gravitational force strong enough to pull the smaller star in completely.
The pairs can also be classified based on how they are observed, a system which has overlapping categories. Visual binaries are two stars with a wide enough separation that both can be viewed through a telescope, or even with a pair of binoculars. Five to 10 percent of visible stars are visual binaries.
Spectroscopic binaries appear close even when viewed through a telescope. Scientists must measure the wavelengths of the light the stars emit and determine their binary nature based on features of those measurements.
Eclipsing binaries are two stars whose orbits are at an angle so that, from Earth, one passes in front of the other, causing an eclipse. This feature is based on the line of sight rather than any particular feature of the pair.
Astrometric binaries are stars that seem to dance around an empty space; that is, their companions cannot be identified but only inferred. Such a companion may be too dim to be seen, or could be hidden in the glare from the primary star.
Stars referred to as double stars are two that appear close together in the sky visually, but are not necessarily anywhere near one another in space.
Discovery and evolution
The first binary stars seen were visual binaries. In 1617, at the request of a fellow scientist, Galileo Galilei turned his telescope toward the second star from the end of the handle of the Big Dipper, discovering that one star seemed to be two; ultimately it turned out to be six. In 1802, Sir William Herschel, who cataloged about 700 pairs of stars, first used the term "binary" in reference to these double stars.
Stars travel around the galaxy, and sometimes a massive star captures a passing one, creating a new binary pair. But this is a rare event. More commonly, the envelope of gas and dust that collapses in on itself to form a star splits and forms two or more stars instead. These stars evolve together, though not necessarily identically.
How a pair of stars evolve depends on their distance from each other. Wide binaries have very little effect on each other, and so they often evolve much like single stars. Close binaries, however, impact each other's evolution, with mass transfers changing the composition of the stars. If one star in a close binary system explodes in a supernova or sheds its outer layers and forms a pulsar, often the companion is destroyed. If it survives, it continues to orbit the newly formed body, perhaps passing on more of its material.
Binary star systems provide the best means for scientists to determine the mass of a star. As the pair pulls on one another, astronomers can calculate its size, and from there determine characteristics such as temperature and radius. These factors help characterize single main sequence stars in the universe.
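The article does not spell the relation out; the standard form, which follows from Kepler's third law (a textbook result rather than anything specific to this article), is M1 + M2 = a^3/P^2, where the combined mass is measured in solar masses, a is the orbital semi-major axis in astronomical units, and P is the orbital period in years. For example, a binary with a = 10 AU and P = 20 years has a combined mass of 1,000/400 = 2.5 solar masses.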
Stars in multiple systems can have a direct impact on life. A host of planets have already been found orbiting multiple stars. The orbit of these stars can affect the evolution of life, which needs a relatively stable system to develop in. Though binary and multiple systems appear initially daunting, given that one or more stars are constantly moving closer and farther from the planets and changing the amount of light, heat, and radiation they receive, systems such as wide binaries or close binaries could actually produce conditions where life could eventually evolve.
Is the sun a binary star?
In the 1980s, scientists suggested the presence of Nemesis, a second star — either a brown dwarf, dim red dwarf or white dwarf — in the sun's system as a reason behind the periodic mass extinctions that occurred in Earth's history, which some paleontologists suggest have occurred in 26-million-year cycles, though the cyclical nature is under debate.
In 2010, NASA's Wide-field Infrared Survey Explorer (WISE) began searching for brown dwarfs, though it isn't searching specifically for one in the solar system. But if a companion exists, WISE should turn it up. Neither WISE nor the Two Micron All Sky Survey has turned up signs of a companion, and on NASA's "Ask an Astrobiologist", David Morrison, Astrobiology Senior Scientist, states that such an object would have been clearly detected by these sensitive telescopes. | http://www.space.com/22509-binary-stars.html |
4 | The Boreal forest is very important in the world’s ecosystem. The forest stretches over 3,000 miles and is home to trees and wetlands that moderate our climate and purify the world’s air. According to the Canadian Boreal Initiative, it is one of the world’s largest intact forest ecosystems. It is the home of thousands of species of animal and insects, but one may be losing its habitat — the caribou.
The caribou or reindeer has already seen subspecies go extinct.
A memo was sent to Canada’s environment minister Peter Kent telling him that there is an elevated risk of the caribou disappearing after losing its home in the regions of the Boreal that surrounds oil sands developments in Alberta. Developers were meant to restore the habitat but the memo, signed by a deputy minister in the department, indicates that it could take decades to restore the area. By the time it happens, seven different types of caribou may have disappeared completely.
The Postmedia report indicates that the memo was sent to Kent as part of preparations for a lawsuit begun by environmental groups who claimed that the government was not enforcing the Species at Risk Act. Since the suit began in 2011, the government introduced Bill C-38 which further cut protection programs.
The memo, which was found through Access to Information, indicates that Kent opted to ignore signs that the Alberta caribou were at risk, since there was no danger of extinction on a national scale.
The big question is, if the caribou is losing its home in Alberta’s Boreal, then what will be next? The federal government is said to be reviewing the Species at Risk Act in an effort to strengthen it, but their environmental record does not leave a lot of room for optimism.
| http://www.care2.com/causes/oil-sands-kicking-caribou-out-of-their-forest-habitat.html |
4.09375 | Suppose that the entrance to a dog house is a square, with a height of inches and a width of inches. Because your puppy has grown, you need to increase both the height and the width by 4 inches. What would be the area of the resulting entrance to the dog house? If the height had been increased by 4 inches while the width had been decreased by 4 inches, what would the area have been then? In this Concept, you'll learn about special products of polynomials so that you'll know what to do in situations like these.
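In symbols, using x for the side length (the original variable did not survive extraction, and the square entrance's height and width are the same unknown number of inches): the first question asks for the area (x+4)(x+4), and the second asks for (x+4)(x-4). These are exactly the two special products developed below.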
When we multiply two linear (degree 1) binomials, we create a quadratic (degree 2) polynomial with four terms. The middle terms are like terms, so we can combine them and simplify to get a quadratic, or second-degree, trinomial (a polynomial with three terms). In this Concept, we will talk about some special products of binomials.
Finding the Square of a Binomial
A special binomial product is the square of a binomial. Consider the multiplication (x+4)(x+4). We are multiplying the same expression by itself, which means that we are squaring the expression. This means that: (x+4)(x+4) = (x+4)^2 = x^2 + 4x + 4x + 16 = x^2 + 8x + 16.
This follows the general pattern of the following rule.
Square of a Binomial: (a+b)^2 = a^2 + 2ab + b^2, and (a-b)^2 = a^2 - 2ab + b^2.
Stay aware of the common mistake (a+b)^2 = a^2 + b^2. To see why this is wrong, try substituting numbers for a and b into the equation (for example, a = 2 and b = 3 give (2+3)^2 = 25 on the left but 2^2 + 3^2 = 13 on the right), and you will see that it is not a true statement. The middle term, 2ab, is needed to make the equation work.
Simplify by multiplying out a square of a binomial.
Solution: Use the square of a binomial formula, substituting the given values of a and b, and simplify.
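Because the specific expression in this example was lost, the following instance is supplied as an illustration rather than as the original problem: (x+7)^2 = x^2 + 2(x)(7) + 7^2 = x^2 + 14x + 49.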
Finding the Product of Binomials Using Sum and Difference Patterns
Another special binomial product is the product of a sum and a difference of terms. For example, let's multiply the binomials (a+b) and (a-b) term by term: (a+b)(a-b) = a^2 - ab + ab - b^2.
Notice that the middle terms are opposites of each other, so they cancel out when we collect like terms. This always happens when we multiply a sum and difference of the same terms.
When multiplying a sum and difference of the same two terms, the middle terms cancel out. We get the square of the first term minus the square of the second term. You should remember this formula.
Sum and Difference Formula: (a+b)(a-b) = a^2 - b^2
Multiply binomials of the form (a+b)(a-b) and simplify.
Solution: Use the sum and difference formula above, substituting the given values of a and b, and multiply.
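The original expression here was also lost; an illustrative instance worked the same way: (2x+3)(2x-3) = (2x)^2 - 3^2 = 4x^2 - 9.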
Solving Real-World Problems Using Special Products of Polynomials
Let’s now see how special products of polynomials apply to geometry problems and to mental arithmetic. Look at the following example.
Find the area of a square with side length a+b, where each side is split into lengths a and b.
Solution: The area is (a+b)^2 = a^2 + 2ab + b^2. The four pieces of the square have areas a^2, ab, ab, and b^2, which add up to the same total. Notice that this gives a visual explanation of the square of a binomial product.
The next example shows how to use the special products in doing fast mental calculations.
Find the products of the following numbers without using a calculator.
Solution: The key to these mental “tricks” is to rewrite each number as a sum or difference of numbers you know how to square easily.
(a) Rewrite each number as a sum or difference centered on a convenient round number.
In order to get the hang of the patterns involved in special products, apply the distributive property to see what will happen: (a+b)(a-b) = a^2 - ab + ab - b^2.
Notice how the two middle terms canceled each other out. This always happens, which is where we get the sum and difference product. Compare the answer above to that from using the sum and difference product directly: (a+b)(a-b) = a^2 - b^2.
The two answers are the same. You can use the sum and difference product as a shortcut, so you don't always have to go through the whole process of multiplying out using the distributive property.
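The original numbers did not survive extraction, so the following pair is illustrative only: 43 × 57 = (50 - 7)(50 + 7) = 50^2 - 7^2 = 2500 - 49 = 2451, and 99^2 = (100 - 1)^2 = 100^2 - 2(100)(1) + 1^2 = 9801.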
Sample explanations for some of the practice exercises below are available by viewing the following video. Note that there is not always a match between the number of the practice exercise in the video and the number of the practice exercise listed in the following exercise set. However, the practice exercise is the same in both. CK-12 Basic Algebra: Special Products of Binomials (10:36)
Use the special product for squaring binomials to multiply these expressions.
Use the special product of a sum and difference to multiply these expressions.
Find the area of the red square in the following figure. It is the lower right shaded box.
Multiply the following numbers using the special products.
| http://www.ck12.org/algebra/Special-Products-of-Polynomials/lesson/Special-Products-of-Polynomials/ |
4.03125 | Details about Loose Leaf Introduction to Chemistry:
1 Matter and Energy
2 Atoms, Ions, and the Periodic Table
3 Chemical Compounds
4 Chemical Composition
5 Chemical Reactions and Equations
6 Quantities in Chemical Reactions
7 Electron Structure of the Atom
8 Chemical Bonding
9 The Gaseous State
10 The Liquid and Solid States
11 Solutions
12 Reaction Rates and Chemical Equilibrium
13 Acids and Bases
14 Oxidation-Reduction Reactions
15 Nuclear Chemistry
16 Organic Chemistry
17 Biochemistry
Appendix A Useful Reference Information
Appendix B Toolboxes
Appendix C Answers to Practice Problems
Appendix D Answers to Selected Questions and Problems
Rent Loose Leaf Introduction to Chemistry 2nd edition today, or search our site for other textbooks by Rich Bauer. Every textbook comes with a 21-day "Any Reason" guarantee. Published by McGraw-Hill Science/Engineering/Math. | http://www.chegg.com/textbooks/loose-leaf-introduction-to-chemistry-2nd-edition-9780077371531-0077371534 |
4.25 | Square kilometre array
Rainer Beck (2010), Scholarpedia, 5(3):9321. doi:10.4249/scholarpedia.9321, revision #152608.
Understanding the evolution of the Universe, galaxies and stars requires looking back in time as far as possible. But the radiation from distant objects is incredibly weak and its detection needs huge collecting areas. The increased sensitivity provided by such a collecting area will reveal new classes of cosmic objects, distant and nearby, which are too faint or too short-lived to have been detected so far. One of these huge telescopes is the Square Kilometre Array (SKA) in the radio wavelength part of the electromagnetic spectrum, planned for a dual-site construction between 2018 and 2025 in South Africa and Australia. Radio waves carry signals emitted from gas clouds even before the formation of the first stars. The SKA will also constrain fundamental physics on gravitation and magnetism. It will conduct astro-biological observations, potentially including the detection of life elsewhere in the Universe via its radio signals.
Radio waves provide a number of advantages: unlike optical waves, they are not absorbed by interstellar dust and they mostly do not suffer from distortions in the atmosphere, except for the shortest wavelengths of a few mm and below. The radio window for ground-based observations spans frequencies from about 10 MHz (30 m wavelength), below which the Earth's ionosphere blocks cosmic radio waves, to frequencies between 10 GHz (3 cm) and 1 THz (0.3 mm), depending on height above sea-level and water content of the troposphere.
Radio waves emerge from objects widely different from the well-known sources of light. Observations at radio wavelengths led to the modern view of the Universe: discovery of the cosmic microwave background (CMB, see below), the first notion of non-thermal emission from charged particles in magnetic fields, discovery of quasars, pulsars, masers and extrasolar planets. Some of the most spectacular objects in the Universe are radio sources whose radiation is emitted from hot gas and charged particles around black holes (quasars) and in the magnetospheres around neutron stars (pulsars), the remnants of supernova explosions. Cold gas in galaxies, invisible in the optical range, can be radio-bright when emitting in specific radio spectral lines. Radio waves tell us that the Universe consists not only of stars, gas and dark matter, but is also permeated by superfast "cosmic ray" particles and magnetic fields which emit synchrotron emission over a wide (continuous) frequency range in the radio, while they escape detection in most other spectral ranges.
Radio astronomy is another window to the Universe where known objects look different and new objects shine. The radio window allows us to look deep into space and hence deep into the past, and we can observe how the gas, fast particles and magnetic fields have developed over time. Scientists worldwide are extremely excited about the possibilities offered by the SKA.
Key Science Projects
The large investment in the SKA requires convincing justification. Apart from the expected technological spin-offs, five main science questions ("Key Science Projects") drive the SKA (see the SKA homepage and Further Reading below for details):
- Probing the dark ages
The SKA will use the emission of neutral hydrogen to observe the most distant objects in the Universe. The strongest line emission of hydrogen is in the radio range at a frequency of 1.4 GHz (21 cm wavelength) which corresponds to the energy difference of the hyperfine transition when the spin of the electron flips with respect to that of the proton.
According to present-day cosmological models, the Universe became transparent about 380,000 years after the big bang (at a redshift of about 1100). The radiation released at that time is now prominent in the radio range as the Cosmic Microwave Background (CMB) (Durrer 2008), measured in great detail by NASA’s WMAP satellite and, since 2009, by ESA’s PLANCK satellite. Matter (mostly hydrogen) remained neutral and smoothly distributed over the next billion years, called the dark ages, until the first stars and black holes formed, followed by the formation of galaxies. The energy output from the first energetic stars and the jets launched near young black holes (quasars) started to heat the neutral gas, forming bubbles of ionized gas as structure emerged. This is called the Epoch of Reionization (see Fan et al. 2006 for a review). The signatures from this exciting transition phase should still be observable with the help of the radio line of hydrogen, though extremely redshifted by a factor of about 10 when arriving at our telescopes today (Fig. 1). The lowest SKA frequency will allow us to detect hydrogen at redshifts of up to 20, well into the dark ages, to search for the transition from a neutral to an ionized Universe, and hence provide a critical test of our present-day cosmological model.
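To see why the lowest SKA frequency matters here, note that the observed frequency of the 21 cm line is simply the rest frequency divided by (1 + z). A minimal sketch of that arithmetic (the redshift values are illustrative):

    # Observed frequency of the redshifted 21 cm hydrogen line: f_obs = f_rest / (1 + z)
    F_REST_MHZ = 1420.4  # rest frequency of the HI hyperfine transition

    for z in (0, 1.5, 10, 20, 27):
        f_obs = F_REST_MHZ / (1 + z)
        print(f"z = {z:>4}: 21 cm line observed at {f_obs:7.1f} MHz")
    # z = 20 puts the line at ~68 MHz, and the lowest SKA band (~50 MHz)
    # corresponds to z of roughly 27, which is why SKA-Low can probe the dark ages.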
- Galaxy evolution, cosmology, and dark energy
The expansion of the Universe is currently accelerating, a poorly understood phenomenon, for which a multitude of possible explanations have been proposed: Einstein's cosmological constant, a time-dependent energy called quintessence, topological defects, the effects of "other" Universes and many more. Since the correct answer is not known, physicists and astronomers named the phenomenon dark energy (see also Frieman et al. 2008 for a review).
One important method of distinguishing between these various explanations is to compare the distribution of galaxies at different epochs in the evolution of the Universe to the distribution of matter at the time when the Cosmic Microwave Background (CMB, see above) was formed, about 380,000 years after the Big Bang. Small distortions ("ripples") in the distribution of matter, called baryon acoustic oscillations, should persist from the era of CMB formation until today. Tracking if and how these ripples change in size and spacing over cosmic time can then tell us if one of the existing models for dark energy is correct or if a new idea is needed.
The SKA will use the hydrogen emission from galaxies to measure the properties of dark energy. The strongest line emission of hydrogen is in the radio range at a frequency of 1.4 GHz (21 cm wavelength), but redshifted to lower frequencies/longer wavelengths for distant galaxies. A deep all-sky SKA survey will detect hydrogen emission from galaxies out to redshifts of about 1.5, at a distance of about 9 billion light years, or at a time when the Universe was about 4.7 billion years old. The galaxy observations will be “sliced” in different redshift (time) intervals and hence reveal a comprehensive picture of the Universe's history.
The same data set will give us unique new information about the evolution of galaxies. How was the hydrogen gas concentrated to form galaxies, how fast was it transformed into stars, and how much gas did galaxies acquire during their lifetimes from intergalactic space and by merging with other galaxies? Present-day telescopes have difficulty in detecting intergalactic hydrogen clouds with no star formation activity and distant dwarf galaxies, but these sorts of radio sources will be easily detectable by the SKA. The hydrogen survey will simultaneously give us the synchrotron radiation intensity of all galaxies, which is a measure of their star-formation rate and magnetic field strength.
- Tests of General Relativity and detection of gravitational waves with pulsars and black holes
The radio-astronomical discovery of pulsars and the indirect detection of gravitational waves from a pulsar-star binary system were rewarded with two Nobel prizes for physics. Pulsars are precise clocks and can be used for further experiments in fundamental physics and astrophysics. Einstein’s Theory of General Relativity has precisely predicted the outcome of every test experiment so far. However, no tests in the strong gravitational field around black holes have yet been made. The SKA will search for a radio pulsar orbiting around a black hole (Fig. 2), the remnants from the supernova explosions of two massive stars in a binary system, measure time delays in extremely curved space with much higher precision than with laboratory experiments and hence probe the limits of General Relativity (Lorimer & Kramer 2004).
Regular high-precision observations with the SKA of a network of pulsars with periods of milliseconds open the way to detect gravitational waves with wavelengths of many light years, as expected for example from two massive black holes orbiting each other with a period of a few years as a result of galaxy mergers in the early Universe. When such a gravitational wave passes by the Earth, the nearby space-time changes slightly at a frequency of a few nHz (about 1 oscillation per 30 years). The wave can be detected as apparent systematic delays and advances of the pulsar clocks in particular directions relative to the wave propagation on the sky.
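The quoted frequency is just the inverse of the orbital period; a quick sketch of the conversion (the periods are chosen for illustration):

    # Gravitational-wave frequency corresponding to a given binary orbital period.
    SECONDS_PER_YEAR = 365.25 * 24 * 3600

    for period_years in (1, 10, 30):
        f_nhz = 1e9 / (period_years * SECONDS_PER_YEAR)
        print(f"period {period_years:>2} yr -> f = {f_nhz:5.2f} nHz")
    # A ~30-year oscillation corresponds to ~1 nHz, far below the bands of
    # ground-based detectors and accessible only through pulsar timing arrays.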
We expect that more than 20,000 new pulsars will be detected with the SKA, compared to about 2000 known today. Almost all pulsars in the Milky Way (Fig. 3) and several hundred bright pulsars in nearby galaxies will become observable.
- Origin and evolution of cosmic magnetism
Electromagnetism is one of the fundamental forces, but little is known about its role in the Universe. Large-scale electric fields induce electric currents and are unstable, whereas magnetic fields can exist over long times because, mysteriously, single magnetic charges (monopoles) are missing in the Universe. Data suggest that all interstellar and probably intergalactic space is permeated by magnetic fields, but these are extremely hard to observe. Radio waves provide two tools: synchrotron radiation emitted by cosmic-ray electrons spiraling around magnetic field lines at almost the speed of light, and Faraday rotation of the polarization plane when a polarized (synchrotron) radio wave passes through a medium with magnetic fields and thermal electrons. Both methods have been applied to reveal the large-scale magnetic fields in our Milky Way, nearby spiral galaxies (Fig. 4), and in galaxy clusters, which are probably amplified and maintained by dynamo action, but little is known about magnetic fields in the intergalactic medium (Wielebinski & Beck 2005). Furthermore, the origin and evolution of magnetic fields are still unknown. The first "seed" fields may originate in the very young Universe or may have been ejected from the first quasars, stars, or supernovae.
The SKA will measure the Faraday rotation towards several tens of millions of polarized background sources (mostly quasars), allowing us to derive the magnetic field structures and strengths of the intervening objects, such as the Milky Way, distant spiral galaxies, clusters of galaxies, and intergalactic space.
- The cradle of life
The presence of life on other planets is a fundamental issue for astronomy and biology. The SKA will contribute to this question in several ways. Firstly, it will be able to detect the thermal radio emission from centimeter-sized "pebbles" in protoplanetary systems (Fig. 5), which are thought to be the first step in assembling Earth-like planets. The SKA will allow us to detect a protoplanet separated from the central star by spacings of the order of the Sun-Earth separation out to distances of about 3000 light years.
Biomolecules are observable in the radio range, for example, "cold sugar" glycolaldehyde (CH2OHCHO) which has several lines between 13 and 22 GHz. Prebiotic chemistry - the formation of the molecular building blocks necessary for the creation of life - occurs in interstellar clouds long before that cloud collapses to form a new solar system with planets.
Finally, the SETI (Search for Extra-Terrestrial Intelligence) project (see Tarter 2001 for a review) will use the SKA to find hints of technological activities. Ionospheric radar experiments similar to those on Earth will be detectable out to several thousand light years, and Arecibo-type radar beams, like those that we use to map our neighbor planets in the solar system, out to as far as a few tens of thousands of light years. SETI will also search for such artificial signals superimposed onto natural signals from other objects.
Core science drivers
From the five Key Science Projects (see above) two major science goals have been identified that drive the technical specifications for the first phase (SKA1):
- Origins: Understanding the history and role of neutral hydrogen in the Universe from the dark ages to the present-day
- Fundamental Physics: Detecting and timing binary pulsars and spin-stable millisecond pulsars in order to test theories of gravity.
Exploration of the Unknown
While the experiments described above are exciting science, the history of science tells us that many of the greatest discoveries happen unexpectedly and reveal objects which are completely different from those which had been envisaged during the planning phase of a new-generation telescope. For example, the serendipitous discovery of pulsars was made with a low-frequency telescope at Cambridge/UK that had been designed to measure the effects of the ionized interplanetary medium on radio waves. The unique sensitivity of the SKA will certainly reveal new classes of cosmic objects which are totally beyond our present imagination. We are looking forward to such surprises.
Similar to present-day radio interferometers, like the Very Large Array (USA), the Westerbork Synthesis Radio Telescope (Netherlands), the Australia Telescope Compact Array and the Allen Telescope Array (USA), the SKA will consist of many antennas which are spread over a large area. The resolving power is proportional to the frequency and to the largest baseline between the outermost antennas and hence is much higher than for single dish telescopes. The signals are combined in a central computer (correlator). While the radio images from present-day interferometric telescopes are generally produced offline at the observer's institute, the enormous data rates of the SKA will demand online image production with automatic software pipelines.
With a collecting area of about one square kilometer, the SKA will be about ten times more sensitive than the largest single dish telescope (305 m diameter) at Arecibo (Puerto Rico), and fifty times more sensitive than the currently most powerful interferometer, the Jansky Very Large Array (VLA, at Socorro/USA). The SKA will continuously cover most of the frequency range accessible from the ground, from 50 MHz to 3 GHz (corresponding to wavelengths of 6 m down to 10 cm) in the first phase, later to be extended to at least 14 GHz (2 cm). The third major improvement is the enormously wide field of view, ranging from about 40 square degrees at 50 MHz to about 18 square degrees at 1.4 GHz. The speed of surveying a large part of the sky, particularly at the lower frequencies, will hence be ten thousand to a million times faster than what is possible today.
To meet these ambitious specifications and keep the cost to a level the international community can support, planning and construction of the SKA requires many technological innovations such as light and low-cost antennas, detector arrays with a wide field of view, low-noise amplifiers, high-capacity data transfer, high-speed parallel-processing computers and high-capacity data storage units. The realization needs multifold innovative solutions which will soon find their way into general communication technology.
The frequency range spanning more than two decades cannot be realized with one single antenna design, so this will be achieved with a combination of two types of antennas for the low and mid-frequency ranges:
1. SKA-Low: An aperture array of simple dipole antennas with wide spacings (a "sparse aperture array") for the low-frequency range (about 50-350 MHz) (Fig. 6). This is a software telescope with no moving parts, steered solely by electronic phase delays. It has a large field of view and can observe towards several directions simultaneously.
2. SKA-Mid: An array of several thousand parabolic dishes of 15 meters diameter each for the medium frequency range (about 350 MHz - 14 GHz), each equipped with wide-bandwidth single-pixel "feeds" (Fig. 7). The surface accuracy of these dishes will allow a later receiver upgrade to higher frequencies. The central region will contain about 50% of the total collecting area and comprise (1) separate core stations of 5 km diameter each for the dish antennas and the two types of aperture arrays (Fig. 6), (2) the mid-region out to about 180 km radius from the core with dish and aperture array antennas aggregated into "stations" distributed on a spiral arm pattern, and (3) "remote" stations with about 20 dish antennas each out to distances of at least 3000 km and located on continuations of the spiral arm pattern. The overall extent of the array determines the angular resolution, which will be about 0.03 seconds of arc at 350 MHz and 0.001 seconds of arc at 10 GHz.
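The quoted angular resolutions follow, to order of magnitude, from the diffraction limit theta ~ lambda/B, with B the longest baseline. A rough sketch (taking the minimum 3000 km baseline mentioned above, so the numbers agree with the quoted ones only to within a factor of about two):

    import math

    C = 3.0e8            # speed of light, m/s
    BASELINE_M = 3.0e6   # ~3000 km maximum baseline

    def resolution_arcsec(freq_hz, baseline_m=BASELINE_M):
        """Diffraction-limited angular resolution, theta ~ lambda / B, in arcseconds."""
        wavelength = C / freq_hz
        return math.degrees(wavelength / baseline_m) * 3600

    for f in (350e6, 1.4e9, 10e9):
        print(f"{f / 1e9:5.2f} GHz -> ~{resolution_arcsec(f):.4f} arcsec")
    # ~0.06" at 350 MHz and ~0.002" at 10 GHz with a 3000 km baseline.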
Dense aperture arrays comprise up to millions of receiving elements in planar arrays on the ground, which can be phased together to point in any direction on the sky. Due to the large reception pattern of the basic elements, the field of view can be up to 250 square degrees. Dense aperture arrays have been the subject of a European Commission-funded design study named SKA Design Study (SKADS), which has resulted in a prototype array of 140 square meters area (EMBRACE).
The technology of dense phased-array feeds (PAF) can also be adapted to the focal plane of parabolic dishes. Such a "radio camera" is composed of many elements (pixels) which are controlled and combined electronically. This allows the dishes to observe over a far wider field of view than when using a classical single-pixel feed. Prototypes of such wide-field cameras are presently being constructed in Australia (ASKAP), the Netherlands (APERTIF) and Canada (AFAD). The first of the 36 dish antennas (each 12 meters in size) of ASKAP in Western Australia have already been equipped with PAF prototypes.
As an "Advanced Instrumentation Programme" for the full SKA (SKA2), two additional technologies for substantially enhancing the field of view in the medium frequency range (about 1-2 GHz) are under rapid development: aperture arrays with dense spacings, and phased array feeds for the parabolic dishes, as pioneered for ASKAP. The AAMID consortium, working on the Mid Frequency Aperture Array (MFAA), aims to demonstrate the feasibility, competitiveness and cost-effectiveness of this technology for SKA2. The key advantage of AAs (Aperture Arrays) is the capability of realising a very large FoV (Field of View) and sensitivity, which results in an unsurpassed survey speed. Furthermore, AAs are capable of generating multiple independent FoVs, enhancing the efficiency of the system. The consortium is currently preparing for the System Requirements Review, planned for April 2016. Also, the preparations for a science-capable demonstrator, to be located on the South African SKA site, have started.
Technical developments around the world are being coordinated by the SKA Science and Engineering Committee and its executive arm, the SKA Project Office. The technical work itself is funded from national and regional sources, and is being carried out via a series of verification programs. The global coordination was supported by funds from the European Commission under a program called PrepSKA, the Preparatory Phase for SKA , whose primary goals were to provide a costed system design and an implementation plan for the telescope by 2012.
Precursor and Pathfinder Telescopes
A number of telescopes provide examples of low frequency arrays, such as the European LOFAR (Low Frequency Array) telescope, with its core in the Netherlands, the MWA (Murchison Widefield Array) in Australia, PAPER (Precision Array for Probing the Epoch of Reionization) in South Africa, and the LWA (Long Wavelength Array) in the USA. All these long wavelength telescopes are software telescopes steered by electronic phase delays ("phased aperture array").
The first LOFAR stations saw "first light" in 2007 in the frequency band 10-80 MHz and in 2009 in the frequency band 110-240 MHz (Fig. 8). Full operation of LOFAR, with 38 stations in the Netherlands and 8 stations in other European countries, started in 2013.
An array of dishes with a single-pixel feed is operating in the USA (Allen Telescope Array, ATA).
The MeerKAT array of dishes with single-pixel feeds is under development in South Africa. The first 12 m prototype dish of the array was completed in 2009. Array Release 1 of the MeerKAT system, with six dishes, will be ready for early science by July 2016, and a set of 16 dishes is expected to be up and running by July 2016.
The Australian SKA Pathfinder (ASKAP) telescope is CSIRO's innovative new radio telescope. ASKAP is equipped with innovative Phased Array Feed (PAF) receivers designed and built by CSIRO. PAF receivers provide multi-pixel images of the sky, allowing the telescope to survey large areas of the sky quickly. This 'radio camera' is a vast improvement on existing radio telescopes. In 2015, the ASKAP Commissioning and Early Science team produced many images of the radio sky using the 6-antenna test array equipped with Mk I phased array feed (PAF) receivers. This array, known as BETA (Boolardy Engineering Test Array), has been an invaluable test platform, providing vital insights into the intricate workings of a new telescope, testing the effectiveness of the PAF receivers, and giving an early indication of what the full ASKAP telescope will be able to achieve. Starting in 2016, early science observations will be carried out using ASKAP-12 (an array of twelve ASKAP antennas fitted with Mk II PAFs) in parallel to the deployment of additional receivers that will build out the telescope to the full 36-antenna array.
To summarize the various international activities: ASKAP, MWA and MeerKAT are SKA Precursor telescopes and are located on the two candidate sites, Australia and South Africa, respectively. SKA Pathfinder telescopes develop technology or science projects related to the SKA, such as LOFAR, EMBRACE, APERTIF, ATA, LWA, the Arecibo dish and the VLA dish array. SKA Design Studies include the SKA Design Study (SKADS, Europe), the SKA Program (Canada) and the Technology Development Project (TDP, USA).
To obtain radio images, the data from all stations have to be transmitted to a central computer and processed online. Compared to LOFAR with a data rate of about 300 Gigabits per second and a central processing power of 27 Tflops, the SKA will produce much more data and need much more processing power - by a factor of at least one hundred. Following "Moore’s law" of increasing computing power, a processor with sufficient power should be available by the end of this decade. The energy consumption for the computers and cooling will be tens of MegaWatts.
Timeline and site
In 2011, the SKA Organisation was founded, with presently ten members (Australia, Canada, China, India, Italy, the Netherlands, New Zealand, South Africa, Sweden and the United Kingdom). The German government decided to cease membership by 2015.
In 2012, the Members of the SKA Organisation agreed on a dual site solution for the SKA with two candidate sites fulfilling the scientific and logistical requirements: Southern Africa, with a core in the Karoo desert, and in Western Australia.
Construction of the SKA is planned to start in 2018. In the first phase (until 2020), about 10% of the SKA, with two frequency bands for SKA-Mid, will be erected (SKA Phase 1, SKA1); SKA Phase 2 (SKA2), with full sensitivity and full frequency coverage, is to follow by about 2025. The cost of SKA1 is 650 million €, to be shared among the countries of the worldwide collaboration.
Building of SKA1
In March 2015 the SKA Board adopted the following components as the updated SKA1 Baseline Design to be built within the cost cap of €650M:
• SKA1-Mid in South Africa, incorporating MeerKAT. ~200 SKA1 dishes (including the 64 MeerKAT dishes) should be constructed with a target of delivering baseline lengths of 120-150km. Receiver bands 2 (0.95-1.76 GHz), 5 (4.6-13.8 GHz) and 1 (0.35-1.05 GHz) should be constructed for all SKA1-Mid dishes, with their priority order as written. Capability to form and process pulsar search beams should be delivered.
• SKA1-Low in Australia. ~130,000 low-frequency dipoles should be deployed. The array should cover the frequency range 50-350 MHz with baseline lengths of ~80km. The inclusion of a pulsar search capability for SKA1-Low should be actively explored.
• The Board approves funding, with Australia’s agreement, for the operations of ASKAP as an integral component of SKA1. This would enable ASKAP to provide SKA1 with an early survey capability and also serve as a platform for the development of next-generation PAFs. (The originally planned SKA1-Survey in Australia should be deferred.)
In addition, an SKA Phased Array Feed (PAF) development programme will be initiated as part of a broader Advanced Instrumentation Programme.
Further reading on the SKA
The Square Kilometre Array, download from: http://www.skatelescope.org/wp-content/uploads/2011/03/SKA-Brochure.pdf
C. Carilli and S. Rawlings: Science with the Square Kilometre Array, New Astronomy Reviews, vol. 48, Elsevier, Amsterdam (2004)
P.E. Dewdney, P.J. Hall, R.T. Schilizzi and T.J.L.W. Lazio: The Square Kilometre Array, Proceedings of the IEEE, 97, 1482-1496 (2009)
P. Hall: The SKA: an Engineering Perspective, Experimental Astronomy, vol. 17, Springer, Berlin (2005)
J. Lazio, M. Kramer and B. Gaensler: Tuning in to the Universe, Sky & Telescope 7/2008, p.20
Advancing Astrophysics with the SKA (AASKA14): ~170 contributions published in Proceedings of Science (PoS) (2015)
B.F. Burke, F. Graham-Smith: An Introduction to Radio Astronomy, 3rd ed., Cambridge University Press (2009)
R. Durrer: The Cosmic Microwave Background, Cambridge University Press (2008)
X. Fan, C.L. Carilli and B. Keating: Observational Constraints on Cosmic Reionization, Annual Reviews in Astronomy & Astrophysics, 44, 415-462 (2006)
J.A. Frieman, M.S. Turner and D. Huterer: Dark Energy and the Accelerating Universe, Annual Reviews in Astronomy & Astrophysics, 46, 385-432 (2008)
D.R. Lorimer and M. Kramer: Handbook of Pulsar Astronomy, Cambridge University Press (2004)
J. Tarter: The Search for Extraterrestrial Intelligence (SETI), Annual Reviews in Astronomy & Astrophysics, 39, 511-548 (2001)
R. Wielebinski and R. Beck (eds.): Cosmic Magnetic Fields, Springer, Berlin (2005)
T.L. Wilson, K. Rohlfs and S. Hüttemeister: Tools of Radio Astronomy, 5th ed., Springer, Berlin (2009) | http://www.scholarpedia.org/article/Square_kilometre_array |
4.09375 | Block size (cryptography)
In modern cryptography, symmetric key ciphers are generally divided into stream ciphers and block ciphers. Block ciphers operate on a fixed length string of bits. The length of this bit string is the block size. Both the input (plaintext) and output (ciphertext) are the same length; the output cannot be shorter than the input — this follows logically from the Pigeonhole principle and the fact that the cipher must be reversible — and it is undesirable for the output to be longer than the input.
Until the announcement of NIST's AES contest, the majority of block ciphers followed the example of the DES in using a block size of 64 bits (8 bytes). However the Birthday paradox tells us that after accumulating a number of blocks equal to the square root of the total number possible, there will be an approximately 50% chance of two or more being the same, which would start to leak information about the message contents. Thus even when used with a proper encryption mode (e.g. CBC or OFB), only 2^32 x 8 B = 32 GB of data can be safely sent under one key. In practice a greater margin of security is desired, restricting a single key to the encryption of much less data - say a few hundred megabytes. That once seemed like a fair amount of data, but today it is easily exceeded. If the cipher mode does not properly randomise the input, the limit is even lower.
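A minimal sketch of that birthday-bound arithmetic, covering both the 64-bit blocks discussed here and the 128-bit AES blocks introduced next (plain Python, no library assumptions):

    def birthday_limit_bytes(block_bits):
        """Data volume at which colliding blocks become likely (~50% chance):
        2**(block_bits/2) blocks, each block_bits/8 bytes long."""
        blocks = 2 ** (block_bits // 2)
        return blocks * (block_bits // 8)

    for bits in (64, 128):
        print(f"{bits:3d}-bit blocks: {birthday_limit_bytes(bits):.3e} bytes")
    # 64-bit blocks:  2^32 * 8 B  = 32 GB   (the limit discussed above)
    # 128-bit blocks: 2^64 * 16 B = 256 exabytes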
Consequently, AES candidates were required to support a block length of 128 bits (16 bytes). This should be acceptable for up to 2^64 x 16 B = 256 exabytes of data, and should suffice for quite a few years to come. The winner of the AES contest, Rijndael, supports block and key sizes of 128, 192, and 256 bits, but in AES the block size is always 128 bits. The extra block sizes were not adopted by the AES standard. | https://en.wikipedia.org/wiki/Block_size_(cryptography)
4.0625 | Glorious Revolution in Scotland
The Glorious Revolution in Scotland was part of a wider change of regime, known as the Glorious Revolution or Revolution of 1688, in the British kingdoms of the Stuart monarchy in 1688–89. It began in England and saw the removal of the Catholic James VII of Scotland and II of England from the thrones of England, Scotland and Ireland and his replacement with his Protestant daughter Mary and her husband William of Orange.
After the Restoration of the monarchy in 1660 in the person of Charles II, Scotland was ruled from London through a series of commissioners. The reintroduction of episcopacy led to divisions in the church as some Presbyterians began to attend separate conventicles. The Catholicism of Charles's heir, James, Duke of Albany and of York, alienated some support, but he built up a following among some of the Highland clans. After his accession in 1685 attempts at invasion by his opponents failed, but the birth of an heir, Prince James, prompted English politicians to call for support from William of Orange, and after a major invasion from the Netherlands, James fled to France. Scotland had little option but to accept a change of monarch and a Presbyterian-dominated convention offered the crown of Scotland to William and Mary. Episcopacy was abolished and the Whigs became dominant in politics. There were a series of Jacobite risings between 1689 and 1746 in favour of James and his heirs. As a result of the Revolution, Scotland was drawn into major international wars and ultimately into full union with England in 1707.
In 1638 the Scots had rebelled against the religious policies of Charles I, established a national Covenant and abolished episcopacy. During the 1650s Scotland had been militarily defeated, occupied and for a short time annexed to the English Commonwealth, under the leadership of Oliver Cromwell. The Restoration of the monarchy in England in 1660 meant a parallel restoration in Scotland as a fait accompli, with the Scots in a very weak bargaining position. In the event Scotland regained its system of law, parliament and kirk, but also the Committee of the Articles (through which the crown controlled parliamentary business) and bishops. They also had a king in Charles II who did not visit the country and ruled largely without reference to Parliament through a series of commissioners. These began with John Middleton and ended with the king's brother and heir, James, Duke of York (known in Scotland as the Duke of Albany), who effectively ran a small Scottish court at Holyrood Palace.
Church ministers were forced to accept the restoration of episcopacy or lose their livings. Up to a third, at least 270, of the ministry refused. Many ministers chose voluntarily to abandon their own parishes rather than wait to be forced out by the government. Most of the vacancies occurred in the south-west of Scotland, an area particularly strong in its Covenanting sympathies. Abandoning the official church, many of the people here began to attend illegal field assemblies led by excluded ministers, known as conventicles. They became known after one of their leaders as the Cameronians. Official attempts to suppress these led to a rising in 1679, defeated by James, Duke of Monmouth, the King's illegitimate son, at the Battle of Bothwell Bridge. In the early 1680s a more intense phase of persecution began, in what was later to be known in Protestant historiography as "the Killing Time", with dissenters summarily executed by the dragoons of John Graham, Laird of Claverhouse, or sentenced to transportation or death by Sir George Mackenzie, the Lord Advocate.
In England, the Exclusion crisis of 1678–81 divided political society into Whigs (given their name after the Scottish Whigamores), who attempted, unsuccessfully, to exclude the openly Catholic Duke of Albany from the succession, and the Tories, who opposed them. Similar divisions began to emerge in Scottish political life, but there was little organised opposition to the succession and James' rights as heir received explicit recognition when the Scottish Parliament passed a Succession Act in 1681. Charles and James acted against Archibald Campbell, 9th Earl of Argyll, whose feudal rights in the south-west Highlands made him one of the most powerful figures in the kingdom. His rights were eroded in favour of other families and James may have been consciously building up his own following in the region. Argyll was eventually tried and fled to the Dutch court, which became the focus of both Scottish and English political dissidents and exiles. These included Scottish peer Lord George Melville, who was implicated in the Rye House Plot, an alleged attempt to assassinate Charles and James in 1683.
Deposition of James VII
Charles died in 1685 and his brother succeeded him as James VII of Scotland (and II of England). James put Catholics in key positions in the government and even attendance at a conventicle was made punishable by death. He disregarded parliament, purged the Council and forced through religious toleration for Roman Catholics, alienating his Protestant subjects. The failure of an invasion, led by the Earl of Argyll and timed to co-ordinate with the Duke of Monmouth's rebellion in England, demonstrated the strength of the regime. Argyll was unable to raise a sufficient force to threaten the regime and was soon captured and executed.
It was believed that the king would be succeeded by his daughter Mary, a Protestant and the wife of William of Orange, Stadtholder of the main provinces of the Netherlands, but when in 1688 James produced a male heir, James Francis Edward Stuart, it was clear that his policies would outlive him. An invitation by seven leading Englishmen prompted William to launch an invasion, landing in England with 14,000 men on 5 November. In Edinburgh there were rumours of Orange plots and on 10 December the Lord Chancellor of Scotland, the Earl of Perth, quit the capital for Drummond Castle, planning an abortive escape to Ireland (he was later captured as he embarked for France). As rioters approached Holyrood Abbey they were fired on by soldiers, resulting in some deaths. The city guard was called out, but the Abbey was stormed by a large mob. The Catholic furnishings, placed there when it was restored as a chapel for James, were torn down and the tombs of the Stuart kings desecrated. A crowd of students burnt the Pope in effigy and took down the heads of executed Covenanters that were hanging above the city gates.
The crisis was resolved when James fled from England on 23 December, leading to the almost bloodless revolution. Although there had been no significant Scottish involvement in the coup, most members of the Scottish Privy Council went to London to offer their services to William. As a result the Revolution in Scotland was not carried out by opponents of the existing regime, but by its agents, who were keen to preserve their offices. I. B. Cowen described these as "reluctant revolutionaries". In contrast Tim Harris argues that there was a lack of popular support for James' regime and that William's political support grew as the crisis unfolded in a similar way to England. On 7 January 1689 members of the Scottish Privy Council asked William to take over the responsibilities of government in Scotland.
Convention of Estates
William called a Scottish Convention, which convened on 14 March in Edinburgh. Initially William's supporters did not have a clear advantage and the Marquis of Hamilton, chosen by William to represent him, only gained the presidency over the Marquess of Atholl, who was associated with James, by a narrow margin. The faction that supported James, including many Episcopalians and led by figures including John Paterson, the Archbishop of Glasgow, were divided by James' previous attempts to achieve tolerance for Roman Catholics. A letter from James, received on 16 March, contained a threat to punish all who rebelled against him and declared the assembly illegal. This resulted in his followers abandoning the Convention, leaving the Williamites largely unopposed. William's supporters had control of the burgh, but Edinburgh Castle, with its formidable arsenal, was held by the Catholic Earl of Gordon. With Dundee raising troops in the Highlands, the convention met in a highly charged political atmosphere, behind closed doors and guarded by some 1,000 Cameronians.
On 4 April, with only five dissenting votes, the Convention formulated two documents, the Claim of Right and the Articles of Grievances. These suggested that James had forfeited the crown by his actions (in contrast to England, which relied on the legal fiction of an abdication) and offered it to William and Mary. On 11 May they accepted the Crown of Scotland as co-regents, as William II and Mary II. The principles of the two documents were that no Roman Catholic could hold the crown or any other office, that the royal prerogative could not override the law, that parliament should meet frequently and that it should be able to debate freely (that is without the interference of the Committee of Articles) and that there could only be taxation with the consent of parliament. They also condemned episcopacy as an "insupportable grievance and trouble to this nation". A proposal for union between the kingdoms was discussed, but dropped because of opposition from the English parliament. As in England, the convention was then converted into a regular parliament on 5 June 1689.
In the view of the Convention, William had accepted the crown on the basis of the articles and the claim, but he did not agree to this, arguing that he was only constrained by his oath to uphold "true religion" and to maintain a balance between "lawes and constitutiones receaved in this realm" and the "just privileges of the Crown" in Scotland, none of which were clearly defined. Neither did William accept the Scottish Parliament's interpretation of its constitutional position as the primary political institution in the kingdom, leading to a series of disputes between the Parliament in Edinburgh and the government in London.
Leading figures in the parliament included political rivals Melville, who had returned from exile, and a former servant of James VII's regime, John Dalrymple, 1st Earl of Stair. In 1689 Melville was made Earl of Melville and sole Secretary of State over Scotland and Stair was made Lord Advocate. In 1691 Stair was appointed as the joint Secretary of State. The first session of parliament deteriorated into a stalemate over the constitutional position. Although William had been able to appoint ministers, parliament withheld taxation and refused to accept his right to nominate to judicial offices, meaning that the law courts remained closed. The parliament passed a series of acts, but William refused to give royal assent. The two major issues of contention were episcopacy and the committee of articles. The result was the emergence of an organised opposition, known as "the Club". With most of its support among the shire members, it had a theoretical 75 of the 125 parliamentary votes. The court conceded over the issue of episcopacy in July 1689, but continued to resist over the Committee of the Articles. Soon after the news of the defeat of Williamite forces at the Battle of Killiecrankie, parliament was prorogued on 2 August.
Against the background of James VII's invasion of Ireland, the possibility of an Irish invasion of Scotland and continued pockets of resistance in the Highlands, parliament met again in April 1690. The stalemate was broken by the discovery of the Montgomery Plot. Sir James Montgomery had been a major supporter of William's cause in the Convention, but had been frustrated when he was only offered a minor office in the government. He entered into secret negotiations with extreme Presbyterians, Episcopalian magnates and Jacobites. The plot involved part of the Club and some conservative magnates, including the Duke of Queensbury. In the resulting panic Melville conceded over the Committee of the Articles, which was agreed on 8 May. A series of agreements were then made between the court and parliament, with an act abolishing episcopacy and a grant of supply for the king, both agreed on 7 June. The constitutional settlement that emerged in parliament during the 1689 and 1690 sessions was less radical than that arrived at in 1641 as William and Mary retained important prerogative powers, particularly the right to summon, prorogue and dissolve parliament, allowing William to keep the same parliament until his death in 1702, but parliament had made considerable gains towards independence and would now be much more difficult to manage from the court. On 19 June the parliament exercised its new found independence by passing an act that abolished lay patronage in the kirk, by which local landholders or heritors had the right to appoint ministers to their parishes.
The General Assembly of the kirk did not meet until November 1690. In the months between the fall of the Stuart regime and the convention, there were a series of "rabblings" by which bands of Cameronians ejected over 200 conformist and Episcopalian ministers from their livings. As a result only 180 ministers and elders attended, all from south of the River Tay, where Presbyterian sympathies were strongest. In the second half of 1690, 182 ministers were deprived for refusing to say prayers for William and Mary, turning the restoration of Presbyterianism into a militant purge. Two commissions were created, one for south and one for north of the Tay. Over the next 25 years they would remove almost two-thirds of all ministers. The General Assembly of 1692 refused to reinstate even those Episcopalian ministers who pledged to accept Presbyterianism. As a result many presbyteries were left with few or no parish clergy. However, the king was more tolerant than the kirk tended to be and issued two acts of indulgence in 1693 and 1695, allowing those who accepted him as king to return to the church. Around a hundred clergy took advantage of the offer. All but the hardened Jacobites would be given toleration in 1707, leaving only a small remnant of Jacobite Episcopalians. The final settlement was closer to the position of 1592 than the more radical position of 1649 and, despite frequent statements that the kirk was independent of the state, the relationship remained ambiguous. Although lay patronage was in theory abolished, heritors and elders still had the right to nominate candidates for their parishes, who could then be "called" by the congregation.
Although William's supporters dominated the government and parliament, there remained a significant following for James, particularly in the Highlands. His cause, which became known as Jacobitism, from the Latin (Jacobus) for James, led to a series of risings. An initial Jacobite military attempt was led by John Graham, now Viscount Dundee. His forces, almost all Highlanders, defeated William's forces at the Battle of Killiecrankie in 1689, but they took heavy losses and Dundee was slain in the fighting. Without his leadership the Jacobite army was soon defeated at the Battle of Dunkeld, by a newly raised government regiment of Cameronians. The last land forces of the Jacobites were defeated at the Battle of Cromdale in Strathspey on 1 May 1690 and Gordon surrendered Edinburgh Castle on 17 June. The complete defeat of James's cause in Ireland by forces under William at the Battle of Aughrim (1691), ended the first phase of the Jacobite military effort.
Massacre of Glencoe
In the aftermath of the Jacobite defeat, on 13 February 1692, in an incident known as the Massacre of Glencoe, 38 members of the Clan MacDonald of Glencoe were killed by members of the Earl of Argyll's Regiment of Foot, who had accepted their hospitality, on the grounds that they had not been prompt in pledging allegiance to the new monarchs. Another forty women and children died of exposure after their homes were burned. The brutality of the incident was embarrassing for the new government and after a subsequent inquiry Dalrymple, who had ordered the massacre, was forced to resign. The massacre helped create greater sympathy for the Stuart cause and may have contributed to later support for Jacobite risings.
The Glorious Revolution led to the dominance of the Presbyterians in the Church of Scotland and of the Whigs in politics. The Whig dominance continued into the mid-eighteenth century, but the Revolution decisively determined the future structure of the kirk. In the short term the removal of so many Episcopalian ministers probably made the impact of the famines of the seven ill years more severe, as they were not able to operate the system of parish poor relief. The revolution also provided a political and dynastic dimension to cultural and religious divisions, particularly between the largely Episcopalian Highlands and the more Presbyterian Lowlands. This helped to make the Scottish Highlands the main focus of Jacobite resistance to the Williamite regime, resulting in a series of military adventures, of which the most threatening were those of 1715 and 1745. The revolution also led to Scotland's involvement in large scale European wars from 1689–96 and 1702–13, resulting in heavy demands in men and taxation. It led ultimately to the Acts of Union that created the Kingdom of Great Britain, as the danger of a divided succession between Scotland and England drove the need for a lasting resolution.
- J. D. Mackie, B. Lenman and G. Parker, A History of Scotland (London: Penguin, 1991), ISBN 0140136495, pp. 204–6.
- Mackie et al, History of Scotland, pp. 225–6.
- Mackie et al, History of Scotland, pp. 241–5.
- Mackie et al, History of Scotland, p. 239.
- Mackie et al, History of Scotland, pp. 231–4.
- R. Mitchison, A History of Scotland (London: Routledge, 3rd edn., 2002), ISBN 0415278805, p. 253.
- Mackie et al, History of Scotland, p. 238.
- Mackie et al, History of Scotland, p. 241.
- P. Langford, The Eighteenth Century, 1688–1815 (Oxford: Oxford University Press, 1976), p. 47.
- Mitchison, Lordship to Patronage, p. 113.
- K. Zickermann, Across the German Sea: Early Modern Scottish Connections with the Wider Elbe-Weser Region (Brill, 2013), ISBN 9004249583, p. 202.
- M. Lynch, Scotland: A New History (London: Pimlico, 1992), ISBN 0712698930, p. 297.
- Lynch, Scotland: A New History, p. 300.
- C. Jackson, Restoration Scotland, 1660–1690: Royalist Politics, Religion and Ideas (Boydell Press, 2003), ISBN 0851159303, p. 191.
- T. Harris, Revolution: The Great Crisis of the British Monarchy 1685–1720 (London: Penguin, 2006), ISBN 0141016523, pp. 380–390.
- R. H. Fritze and W. B. Robison, eds, Historical Dictionary of Stuart England, 1603–1689 (London: Greenwood Publishing Group, 1996), ISBN 0313283915, p. 97.
- Jackson, Restoration Scotland, pp. 210–211
- Mitchison, Lordship to Patronage, pp. 118–19.
- Lynch, Scotland: A New History, p. 302.
- Lynch, Scotland: A New History, pp. 302–3.
- Lynch, Scotland: A New History, p. 305.
- Mackie et al, History of Scotland, pp. 252–3.
- J. L. Roberts, Clan, King, and Covenant: History of the Highland Clans from the Civil War to the Glencoe Massacre (Edinburgh: Edinburgh University Press, 2000), ISBN 0748613935, p. 214.
- Lynch, Scotland: A New History, p. 303.
- D. J. Patrick, "Unconventional procedure: Scottish electoral politics after the revolution", in K. M. Brown and A. J. Mann, eds, Parliament and Politics in Scotland, 1567–1707 (Edinburgh, 2005), pp. 208–44.
- Lynch, Scotland: A New History, p. 304.
- Mackie et al, History of Scotland, pp. 283–4.
- I. B. Cowen, "Church and state reformed?: the Glorious Revolution of 1688–9 in Scotland", in J. I. Israel, ed., The Anglo-Dutch Moment: Essays on the Glorious Revolution and Its World Impact (Cambridge: Cambridge University Press, 2003), ISBN 0521544068, p. 165.
- M. Pittock, Jacobitism (St. Martin's Press, 1998), ISBN 0312213069, p. 45.
- Mackie et al, History of Scotland, pp. 287–8.
- Lynch, Scotland: A New History, pp. 305–6.
- Mackie et al, History of Scotland, pp. 282–4.
- K. J. Cullen, Famine in Scotland: The "Ill Years" of the 1690s (Edinburgh University Press, 2010), ISBN 0748638873, p. 105.
- Mitchison, Lordship to Patronage, pp. 120–3.
- Mitchison, Lordship to Patronage, p. 129.
- Cowen, I. B., "Church and state reformed?: the Glorious Revolution of 1688–9 in Scotland", in J. I. Israel, ed., The Anglo-Dutch Moment: Essays on the Glorious Revolution and Its World Impact (Cambridge: Cambridge University Press, 2003), ISBN 0521544068
- Cullen, K. J., Famine in Scotland: The "Ill Years" of the 1690s (Edinburgh University Press, 2010), ISBN 0748638873.
- Fritze, R. H. and Robison, W. B., eds, Historical Dictionary of Stuart England, 1603–1689 (London: Greenwood Publishing Group, 1996), ISBN 0313283915.
- Harris, T., Revolution: The Great Crisis of the British Monarchy 1685–1720 (London: Penguin, 2006), ISBN 0141016523.
- Jackson, C., Restoration Scotland, 1660–1690: Royalist Politics, Religion and Ideas (Boydell Press, 2003), ISBN 0851159303.
- Langford, P., The Eighteenth Century, 1688–1815 (Oxford: Oxford University Press, 1976).
- Lynch, M., Scotland: A New History (London: Pimlico, 1992), ISBN 0712698930.
- Mackie, J. D., Lenman, B. and Parker, G., A History of Scotland (London: Penguin, 1991), ISBN 0140136495.
- Mitchison, R., A History of Scotland (London: Routledge, 3rd edn., 2002), ISBN 0415278805.
- Patrick, D. J., "Unconventional procedure: Scottish electoral politics after the revolution", in K. M. Brown and A. J. Mann, eds, Parliament and Politics in Scotland, 1567–1707 (Edinburgh, 2005).
- Pittock, M., Jacobitism (St. Martin's Press, 1998), ISBN 0312213069
- Roberts, J. L., Clan, King, and Covenant: History of the Highland Clans from the Civil War to the Glencoe Massacre (Edinburgh: Edinburgh University Press, 2000), ISBN 0748613935.
- Zickermann, K., Across the German Sea: Early Modern Scottish Connections with the Wider Elbe-Weser Region (Brill, 2013), ISBN 9004249583 | https://en.wikipedia.org/wiki/Glorious_Revolution_in_Scotland |
4.0625 | In the rainforests of South America, scientists have discovered a new genus and three new species of katydid with the highest ultrasonic calling songs ever recorded in the animal kingdom.
Katydids (bushcrickets) are insects known for their acoustic communication, with the male producing sound by rubbing its wings together (stridulation) to attract distant females for mating. But these newly discovered insects turn ultrasonic calling all the way up to 11 on the dial - males reach a frequency of a startling 150 kHz. For comparison, the calling frequencies used by most katydids range between 5 kHz and 30 kHz, while the nominal human hearing range ends at around 20 kHz.
Thus, the new genus has been named Supersonus. But that rapid wing movement comes at a price - reduced wing size means they can't fly. Speculation is that the adoption of extreme ultrasonic frequencies might play a role in avoiding predators, such as bats, and of course there is the possibility that evolution just got drunk and went on a random walk again. Bats can detect their prey’s movements using echolocation but can also eavesdrop and detect the calls of singing animals like katydids and frogs. Rainforest katydids have learned to avoid bats by reducing the time spent singing and by evolving an ear that can detect the ultrasonic echolocation calls of the bats. Although some bats can detect 150 kHz, calls at such extreme ultrasonic frequencies degrade faster with distance, so a flying bat will find it harder to hear the signal.
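One way to appreciate how extreme 150 kHz is: the corresponding wavelength in air is only a couple of millimetres. A quick conversion (taking the speed of sound in air as roughly 343 m/s at 20 °C):

    # Wavelength of a call in air: lambda = v / f
    SPEED_OF_SOUND = 343.0  # m/s in air at ~20 C

    for f_khz in (5, 30, 150):
        wavelength_mm = SPEED_OF_SOUND / (f_khz * 1e3) * 1e3
        print(f"{f_khz:>3} kHz -> wavelength {wavelength_mm:5.1f} mm")
    # 150 kHz corresponds to ~2.3 mm -- the "ultra-short-wavelength calls" of the
    # paper's title -- and such short wavelengths attenuate quickly with distance.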
Female Supersonus.
Dr. Fernando Montealegre-Z from the University of Lincoln said, “To call distant females, male katydids produce songs by ‘stridulation’ where one wing (the scraper) rubs against a row of ‘teeth’ on the other wing. The scraper is next to a vibrating drum that acts like a speaker. The forewings and drums are unusually reduced in size in the Supersonus species, yet they still manage to be highly ultrasonic and very loud.
“Using a combination of state-of-the-art technologies, we found that Supersonus creates a ‘closed box’ with its right wing in order to radiate sound. Human-made loud speakers also use this system to radiate sound. Large speakers radiate low frequencies, while small speakers emit high frequencies. So, these reduced wings are responsible for tuning their calling songs at such high frequencies.”
Dr. James Windmill, from the Centre of Ultrasonic Engineering, University of Strathclyde, added, “These insects can produce, and hear, loud ultrasonic calls in air. Understanding how nature’s systems do this can give us inspiration for our engineered ultrasonics.”
Citation: Sarria-S F. A., Morris G. K., Jackson J., Windmill J.F.C,&Montealegre-Z F. 2014. Shrinking wings for ultrasonic pitch production: hyperintense ultra-short-wavelength calls in a new genus of neotropical katydids (Orthoptera: Tettigoniidae). PLoS One June 5 2014 DOI:10.1371/journal.pone.0098708
| http://www.science20.com/news_articles/and_the_new_winner_for_the_animal_kingdoms_highestpitch_love_call_is-138064
4.0625 | Boiling
Boiling is the process by which liquids are heated beyond their "boiling point" and undergo the change from the liquid phase to the gaseous phase. Boiling is the converse process of condensation, in which an element or molecule in its gaseous phase is converted to a liquid.
An element or compound will boil after passing its respective boiling point, but there are more aspects to this process than just temperature. The system must be open to the atmosphere. If the system is completely enclosed, the liquid will instead reach its critical point. After a liquid has been heated past its boiling point, the molecules will start to behave differently than in the liquid phase. They will take on a less organized structure, and the molecules will begin to enter the atmosphere in the form of a gas.
At the boiling point, the liquid begins its transition to the gaseous phase, and will actually hold its temperature at this point. For example, when water reaches 100 degrees centigrade, it will begin to boil into its vapor phase, and the liquid that has yet to boil will hold constant at 100 degrees. The water will be unable to change temperature until it has completely shifted from the liquid phase to the gaseous phase. A distinction must also be made here between boiling and evaporation. Evaporation is a similar process in which the liquid continuously changes to the gaseous phase, though it is a process that usually occurs at equilibrium, and at nowhere near the rate of boiling. More information can be found at enthalpy of vaporization.
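The size of that temperature plateau can be estimated from the latent heat of vaporization. A back-of-the-envelope sketch (standard handbook values for water; the 2 kW kettle power is an arbitrary assumption for illustration):

    # Energy to heat 1 kg of water to the boil versus energy to boil it all away.
    SPECIFIC_HEAT = 4186       # J/(kg*K) for liquid water
    LATENT_HEAT_VAP = 2257e3   # J/kg, enthalpy of vaporization at 100 C

    mass, power = 1.0, 2000.0  # kg of water; W (illustrative kettle)

    q_heat = mass * SPECIFIC_HEAT * (100 - 20)   # warm from 20 C to 100 C
    q_boil = mass * LATENT_HEAT_VAP              # convert all of it to vapor

    print(f"Heating 20->100 C: {q_heat / 1e3:5.0f} kJ (~{q_heat / power / 60:.1f} min)")
    print(f"Boiling it all:    {q_boil / 1e3:5.0f} kJ (~{q_boil / power / 60:.1f} min)")
    # Vaporizing takes ~7x the energy of heating to the boil, which is why the
    # temperature sits at 100 C for so long while the water boils away.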
The temperature at which a liquid will begin its transformation into a gas is dependent upon the pressure it is under. For instance, water boils at 100 degrees centigrade under a pressure of 1 atmosphere. If the pressure is lowered, the boiling point is lowered as well. This also means that an increase in pressure will increase the boiling point of the liquid. Since pressure can also be related to altitude, the farther away from sea level a system is, the lower the boiling point will be for any liquid within said system. This is to say that the boiling point of water is 100 degrees in San Francisco, while it will be several degrees lower at Lake Tahoe, where the altitude is significantly greater and the atmospheric pressure on the system is correspondingly lower.
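The pressure dependence can be made quantitative with the Clausius-Clapeyron relation. A rough sketch (the enthalpy of vaporization of water, ~40.7 kJ/mol, is treated as constant, and 0.81 atm is an approximate value for Lake Tahoe's elevation, so this is only an estimate):

    import math

    R = 8.314        # J/(mol*K)
    DH_VAP = 40.7e3  # J/mol for water (assumed constant -- an approximation)
    T1, P1 = 373.15, 1.00  # sea-level boiling point (K) and pressure (atm)

    def boiling_point_k(p_atm):
        """Clausius-Clapeyron: 1/T2 = 1/T1 - (R / dH_vap) * ln(P2 / P1)."""
        return 1.0 / (1.0 / T1 - (R / DH_VAP) * math.log(p_atm / P1))

    for label, p in (("sea level (San Francisco)", 1.00), ("Lake Tahoe (~1900 m)", 0.81)):
        print(f"{label:26s}: {boiling_point_k(p) - 273.15:5.1f} C at {p:.2f} atm")
    # ~94 C at ~0.81 atm: several degrees below the 100 C sea-level value.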
Water has been a convenient example so far, but it does not represent the full range of liquid-to-gas transitions. Nitrogen, for example, is a gas at room temperature and is most prevalent on Earth in this phase, because nitrogen has a boiling point of roughly -196 degrees C. Only at temperatures below this will nitrogen be found in its liquid phase, and during the transition it holds a constant temperature as it changes phases. This property of keeping a fixed temperature until all of the liquid has finished boiling allows liquid nitrogen to be used in industrial processes as well as in some small-scale home applications.
Say you find yourself with some cream, sugar, vanilla flavoring, an industrial-sized vat, and a hunger for ice cream, all in your home. However, you don't have the ice or salt to help you in your task of making said ice cream. You remember reading this wiki, and the bit about liquid nitrogen, and run down to the local chemical supply store to pick up a small amount of liquid nitrogen.
After obtaining all of these ingredients, you are ready to make the ice cream. Just combine the three main ingredients in your vat, grab a large wooden spoon to stir, and slowly start adding the liquid nitrogen. Make sure to stir constantly and add the nitrogen slowly. In under five minutes you should have some completely edible ice cream. Make sure protective gear is worn at all times. This whole process is possible because the nitrogen remains at -196 degrees during the entire time that it is boiling, and it draws heat from its surroundings, which in this case are the ice cream ingredients. Other elements normally in the gas phase at room temperature have similar properties.
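As a rough sanity check (my own estimate, with assumed numbers), nitrogen's heat of vaporization, about 199 kJ/kg, sets how much liquid nitrogen the recipe consumes:

```python
# Rough estimate of liquid nitrogen boiled off while chilling an ice-cream base.
# Assumed values: N2 heat of vaporization ~199 kJ/kg (a standard figure);
# base specific heat ~3.2 kJ/(kg K) is a guess for a cream and sugar mix.
L_N2 = 199e3    # J/kg
C_BASE = 3.2e3  # J/(kg K)

def nitrogen_needed(base_kg: float, delta_t_kelvin: float) -> float:
    """Kilograms of liquid nitrogen vaporized to cool the base by delta_t_kelvin."""
    return base_kg * C_BASE * delta_t_kelvin / L_N2

# Cooling 2 kg of base from 20 C to -5 C:
print(f"{nitrogen_needed(2.0, 25.0):.2f} kg of liquid nitrogen")  # ~0.80 kg
```

This ignores the extra cooling provided by the cold nitrogen gas as it warms, so the true requirement is somewhat less.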
In quick summary, note that all of the elements found in the gaseous phase at standard temperature and pressure have boiling points well below 0 degrees centigrade. | http://chemwiki.ucdavis.edu/Physical_Chemistry/Physical_Properties_of_Matter/States_of_Matter/Phase_Transitions/Boiling
4.0625 | The Cassini Division, a gap in the rings of Saturn, is caused by gravitational pull from Saturn’s moon Mimas. The moon’s gravity affects the tiny particles that make up the rings, creating what looks like empty space. Other divisions in Saturn’s rings are the result of similar interactions with the planet’s moons.
The Cassini Division, at about 3,000 miles or 4,800 kilometers wide, is the largest of several gaps in the rings of Saturn. The area is not completely empty, but the scarcity of ring material makes it look like a dark, empty space.
The Cassini Division marks the division between what scientists call Saturn's A and B rings. Other divisions, created by the gravitational pull of other moons, exist farther out from the planet and are not as wide.
Scientists are not sure how or when Saturn got its rings, according to NASA. One theory states that comets, asteroids and meteoroids collided with Saturn’s moons, and they all shattered into pieces. The broken pieces then spread out and formed rings around Saturn. Another theory states that the rings formed from materials that were left over when Saturn formed.
According to NASA, scientists have many theories about Saturn's rings, but there is no proven explanation for the rings. The rings are made of debris trapped in the gravitational pull of Saturn, and there are multiple theories as to where this debris originated.
Giovanni Domenico Cassini was a 17th century astronomer and the first of the Cassini family of astronomers. He discovered four of the moons of Saturn and the makeup of the planet's rings.
Because weight is calculated based on gravitational pull, it is impractical to determine the weight of a planet. For example, an object weighing 500 pounds on Earth weighs 465 pounds on Saturn because of the planets' different gravitational pulls. Thus, planets are compared by calculating their mass, which remains constant. | http://www.ask.com/science/causes-cassini-division-saturn-s-rings-bddc82bfa64bf281
4.125 | The Warren Court refers to the Supreme Court of the United States between 1953 and 1969, when Earl Warren served as Chief Justice. Warren's predecessor, Fred M. Vinson (b. 1890), had died on September 8, 1953, after 2,633 days in this position.
Warren led a liberal majority that used judicial power in dramatic fashion, to the consternation of conservative opponents. The Warren Court expanded civil rights, civil liberties, judicial power, and federal power in ways that reshaped American law.
The court was both applauded and criticized for bringing an end to racial segregation in the United States, incorporating the Bill of Rights (i.e. including it in the 14th Amendment Due Process clause), and ending officially sanctioned voluntary prayer in public schools. The period is recognized as a high point in judicial power that has receded ever since, but with a substantial continuing impact.
Prominent members of the Court during the Warren era besides the Chief Justice included Justices William J. Brennan, Jr., William O. Douglas, Hugo Black, Felix Frankfurter, and John Marshall Harlan II.
One of the primary factors in Warren's leadership was his political background, having served two and a half terms as Governor of California (1943–1953) and experience as the Republican candidate for vice president in 1948 (as running mate of Thomas E. Dewey). Warren brought a strong belief in the remedial power of law. According to historian Bernard Schwartz, Warren's view of the law was pragmatic, seeing it as an instrument for obtaining equity and fairness. Schwartz argues that Warren's approach was most effective "when the political institutions had defaulted on their responsibility to try to address problems such as segregation and reapportionment and cases where the constitutional rights of defendants were abused."
A related component of Warren's leadership was his focus on broad ethical principles, rather than narrower interpretative structures. Describing the latter as "conventional reasoning patterns," Professor Mark Tushnet suggests Warren often disregarded these in groundbreaking cases such as Brown v. Board of Education, Reynolds v. Sims and Miranda v. Arizona, where such traditional sources of precedent were stacked against him. Tushnet suggests Warren's principles "were philosophical, political, and intuitive, not legal in the conventional technical sense."
Warren's leadership was characterized by remarkable consensus on the court, particularly in some of the most controversial cases. These included Brown v. Board of Education, Gideon v. Wainwright, and Cooper v. Aaron, which were unanimously decided, as well as Abington School District v. Schempp and Engel v. Vitale, each striking down religious recitations in schools with only one dissent. In an unusual action, the decision in Cooper was personally signed by all nine justices, with the three new members of the Court adding that they supported and would have joined the Court's decision in Brown v. Board.
Fallon says that, "Some thrilled to the approach of the Warren Court. Many law professors were perplexed, often sympathetic to the Court's results but skeptical of the soundness of its constitutional reasoning. And some of course were horrified."
Professor John Hart Ely in his book Democracy and Distrust famously characterized the Warren Court as a "Carolene Products Court." This referred to the famous Footnote Four in United States v. Carolene Products in which the Supreme Court had suggested that heightened judicial scrutiny might be appropriate in three types of cases:
- those where a law was challenged as a deprivation of a specifically enumerated right (such as a challenge to a law because it denies "freedom of speech," a phrase specifically included in the Bill of Rights);
- those where a challenged law made it more difficult to achieve change through normal political processes;
- and those where a law impinged on the rights of "discrete and insular minorities."
The Warren Court's doctrine may be seen as proceeding aggressively in these general areas: its aggressive reading of the first eight amendments in the Bill of Rights (as "incorporated" against the states by the Fourteenth Amendment); its commitment to unblocking the channels of political change ("one-man, one-vote"), and its vigorous protection of the rights of racial minority groups. The Warren Court, while in many cases taking a broad view of individual rights, generally declined to read the Due Process Clause of the Fourteenth Amendment broadly, outside of the incorporation context (see Ferguson v. Skrupa, but see also Griswold v. Connecticut). The Warren Court's decisions were also strongly nationalist in thrust, as the Court read Congress's power under the Commerce Clause quite broadly and often expressed an unwillingness to allow constitutional rights to vary from state to state (as was explicitly manifested in Cooper v. Aaron).
Professor Rebecca Zietlow argues that the Warren Court brought an expansion in the "rights of belonging," which she characterizes as "rights that promote an inclusive vision of who belongs to the national community and facilitate equal membership in that community." Zietlow notes that both critics and supporters of the Warren Court attribute to it this shift, whether as a matter of imposing its countermajoritarian will or as protecting the rights of minorities. Zietlow also challenges the notion of the Warren Court as "activist," noting that even at its height the Warren Court only invalidated 17 acts of Congress between 1962 and 1969, as compared to the more "conservative" Rehnquist Court which struck down 33 acts of Congress between 1995 and 2003.
Historically significant decisions
Important decisions during the Warren Court years included decisions holding segregation policies in public schools (Brown v. Board of Education) and anti-miscegenation laws unconstitutional (Loving v. Virginia); ruling that the Constitution protects a general right to privacy (Griswold v. Connecticut); ruling that states are bound by the decisions of the Supreme Court and cannot ignore them (Cooper v. Aaron); ruling that public schools cannot have official prayer (Engel v. Vitale) or mandatory Bible readings (Abington School District v. Schempp); dramatically increasing the scope of the doctrine of incorporation (Mapp v. Ohio, Miranda v. Arizona); reading an equal protection clause into the Fifth Amendment (Bolling v. Sharpe); holding that the states may not apportion a chamber of their legislatures in the manner in which the United States Senate is apportioned (Reynolds v. Sims); and holding that the Constitution requires the states to provide counsel to defendants who cannot afford it (Gideon v. Wainwright).
- Racial segregation: Brown v. Board of Education, Bolling v. Sharpe, Cooper v. Aaron, Gomillion v. Lightfoot, Griffin v. County School Board, Green v. School Board of New Kent County, Lucy v. Adams, Loving v. Virginia
- Voting, redistricting, and malapportionment: Baker v. Carr, Reynolds v. Sims, Wesberry v. Sanders
- Criminal procedure: Brady v. Maryland, Mapp v. Ohio, Miranda v. Arizona, Escobedo v. Illinois, Gideon v. Wainwright, Katz v. United States, Terry v. Ohio
- Free speech: New York Times Co. v. Sullivan, Brandenburg v. Ohio, Yates v. United States, Roth v. United States, Jacobellis v. Ohio, Memoirs v. Massachusetts, Tinker v. Des Moines School District
- Establishment Clause: Engel v. Vitale, Abington School District v. Schempp
- Free Exercise Clause: Sherbert v. Verner
- Right to privacy and reproductive rights: Griswold v. Connecticut
- Cruel and unusual punishment: Trop v. Dulles, Robinson v. California
Warren took his seat January 11, 1954, on a recess appointment by President Eisenhower; the Senate confirmed him six weeks later. Despite his lack of judicial experience, his years in the Alameda County district attorney's office and as state attorney general gave him far more knowledge of the law in practice than most other members of the Court had. Warren's greatest asset, what made him in the eyes of many of his admirers "Super Chief," was his political skill in manipulating the other justices. Over the years his ability to lead the Court, to forge majorities in support of major decisions, and to inspire liberal forces around the nation, outweighed his intellectual weaknesses. Warren realized his weakness and asked the senior associate justice, Hugo L. Black, to preside over conferences until he became accustomed to the drill. A quick study, Warren soon was in fact as well as in name the Court's chief justice.
When Warren joined the Court in 1954, all the justices had been appointed by Franklin D. Roosevelt or Truman, and all were committed New Deal liberals. They disagreed about the role that the courts should play in achieving liberal goals. The Court was split between two warring factions. Felix Frankfurter and Robert H. Jackson led one faction, which urged judicial self-restraint and insisted that courts defer to the policymaking prerogatives of the White House and Congress. Hugo Black and William O. Douglas led the opposing faction, which agreed the court should defer to Congress in matters of economic policy but felt the judicial agenda had been transformed from questions of property rights to those of individual liberties, where courts should play a more central role. Warren's belief that the judiciary must seek to do justice placed him with the latter group, although he did not have a solid majority until after Frankfurter's retirement in 1962.
Warren was a more liberal justice than anyone had anticipated. Warren was able to craft a long series of landmark decisions because he built a winning coalition. When Frankfurter retired in 1962 and President John F. Kennedy named labor union lawyer Arthur Goldberg to replace him, Warren finally had the fifth vote for his liberal majority. William J. Brennan, Jr., a liberal Democrat appointed by Eisenhower in 1956, was the intellectual leader of the faction that included Black and Douglas. Brennan complemented Warren's political skills with the strong legal skills Warren lacked. Warren and Brennan met before the regular conferences to plan out their strategy.
Brown v. Board of Education 347 U.S. 483 (1954) banned the segregation of public schools. The very first case put Warren's leadership skills to an extraordinary test. The Legal Defense Fund of the NAACP (a small legal group formed for tax reasons from the much better-known NAACP) had been waging a systematic legal fight against the "separate but equal" doctrine enunciated in Plessy v. Ferguson (1896) and finally had challenged Plessy in a series of five related cases, which had been argued before the Court in the spring of 1953. However, the justices had been unable to decide the issue and asked to rehear the case in fall 1953, with special attention to whether the Fourteenth Amendment's Equal Protection Clause prohibited the operation of separate public schools for whites and blacks.
While all but one justice personally rejected segregation, the self-restraint faction questioned whether the Constitution gave the Court the power to order its end. Warren's faction believed the Fourteenth Amendment did give the necessary authority and were pushing to go ahead. Warren, who held only a recess appointment, held his tongue until the Senate, dominated by southerners, confirmed his appointment. Warren told his colleagues after oral argument that he believed segregation violated the Constitution and that only if one considered African Americans inferior to whites could the practice be upheld. But he did not push for a vote. Instead, he talked with the justices and encouraged them to talk with each other as he sought a common ground on which all could stand. Finally he had eight votes, and the last holdout, Stanley Reed of Kentucky, agreed to join the rest. Warren drafted the basic opinion in Brown v. Board of Education (1954) and kept circulating and revising it until he had an opinion endorsed by all the members of the Court.
The unanimity Warren achieved helped speed the drive to desegregate public schools, which came about under President Richard M. Nixon. Throughout his years as Chief, Warren succeeded in keeping all decisions concerning segregation unanimous. Brown applied to schools, but soon the Court enlarged the concept to other state actions, striking down racial classification in many areas. Congress ratified the process in the Civil Rights Act of 1964 and the Voting Rights Act of 1965. Warren did compromise by agreeing to Frankfurter's demand that the Court go slowly in implementing desegregation; Warren used Frankfurter's suggestion that a 1955 decision (Brown II) include the phrase "all deliberate speed."
The Brown decision of 1954 marked, in dramatic fashion, the radical shift in the Court's--and the nation's--priorities from issues of property rights to civil liberties. Under Warren the courts became an active partner in governing the nation, although still not coequal. Warren never saw the courts as a backward-looking branch of government.
The Brown decision was a powerful moral statement. His biographer concludes, "If Warren had not been on the Court, the Brown decision might not have been unanimous and might not have generated a moral groundswell that was to contribute to the emergence of the civil rights movement of the 1960s." Warren was never a legal scholar on a par with Frankfurter or a great advocate of particular doctrines, as were Black and Douglas. Instead, he believed that in all branches of government common sense, decency, and elemental justice were decisive, not stare decisis (that is, reliance on previous Court decisions), tradition, or the text of the Constitution. He wanted results that in his opinion reflected the best American sentiments. He felt racial segregation was simply wrong, and Brown, whatever its doctrinal defects, remains a landmark decision primarily because of Warren's interpretation of the equal protection clause.
The one man, one vote cases (Baker v. Carr and Reynolds v. Sims) of 1962–1964 had the effect of ending the over-representation of rural areas in state legislatures, as well as the under-representation of suburbs. Central cities--which had long been underrepresented--were now losing population to the suburbs and were not greatly affected.
Warren's priority on fairness shaped other major decisions. In 1962, over the strong objections of Frankfurter, the Court agreed that questions regarding malapportionment in state legislatures were not political issues, and thus were not outside the Court's purview. For years underpopulated rural areas had deprived metropolitan centers of equal representation in state legislatures. In Warren's California, Los Angeles County had only one state senator. Cities had long since passed their peak, and now it was the middle-class suburbs that were underrepresented. Frankfurter insisted that the Court should avoid this "political thicket" and warned that the Court would never be able to find a clear formula to guide lower courts in the rash of lawsuits sure to follow. But Douglas found such a formula: "one man, one vote."
In the key apportionment case Reynolds v. Sims (1964) Warren delivered a civics lesson: "To the extent that a citizen's right to vote is debased, he is that much less a citizen," Warren declared. "The weight of a citizen's vote cannot be made to depend on where he lives. This is the clear and strong command of our Constitution's Equal Protection Clause." Unlike the desegregation cases, in this instance the Court ordered immediate action, and despite loud outcries from rural legislators, Congress failed to reach the two-thirds needed to pass a constitutional amendment. The states complied, reapportioning their legislatures quickly and with minimal trouble. Numerous commentators have concluded that reapportionment was the Warren Court's great "success" story.
Due process and rights of defendants (1963–66)
In Gideon v. Wainwright, 372 U.S. 335 (1963) the Court held that the Sixth Amendment required that all indigent criminal defendants receive publicly funded counsel (Florida law at that time required the assignment of free counsel to indigent defendants only in capital cases); Miranda v. Arizona, 384 U.S. 436 (1966) required that certain rights of a person interrogated while in police custody be clearly explained, including the right to an attorney (often called the "Miranda warning").
While most Americans eventually agreed that the Court's desegregation and apportionment decisions were fair and right, disagreement about the "due process revolution" continues into the 21st century. Warren took the lead in criminal justice; despite his years as a tough prosecutor, he always insisted that the police must play fair or the accused should go free. Warren was privately outraged at what he considered police abuses that ranged from warrantless searches to forced confessions.
Warren’s Court ordered lawyers for indigent defendants, in Gideon v. Wainwright (1963), and prevented prosecutors from using evidence seized in illegal searches, in Mapp v. Ohio (1961). The famous case of Miranda v. Arizona (1966) summed up Warren's philosophy. Everyone, even one accused of crimes, still enjoyed constitutionally protected rights, and the police had to respect those rights and issue a specific warning when making an arrest. Warren did not believe in coddling criminals; thus in Terry v. Ohio (1968) he gave police officers leeway to stop and frisk those they had reason to believe held weapons.
Conservatives angrily denounced the "handcuffing of the police." Violent crime and homicide rates shot up nationwide in the following years; in New York City, for example, after steady to declining trends until the early 1960s, the homicide rate doubled between 1964 and 1974, from just under 5 per 100,000 to just under 10 per 100,000. Controversy exists about the cause, with conservatives blaming the Court decisions and liberals pointing to the demographic boom and the increased urbanization and income inequality characteristic of that era. After 1992 the homicide rates fell sharply.
The Warren Court also sought to expand the scope of application of the First Amendment. The Court's decision outlawing mandatory school prayer in Engel v. Vitale (1962) brought vehement complaints by conservatives that echoed into the 21st century.
Warren worked to nationalize the Bill of Rights by applying it to the states. Moreover, in one of the landmark cases decided by the Court, Griswold v. Connecticut (1965), the Warren Court affirmed a constitutionally protected right of privacy, emanating from the Due Process Clause of the Fourteenth Amendment, also known as substantive due process. This decision was fundamental, after Warren's retirement, for the outcome of Roe v. Wade and consequent legalization of abortion.
With the exception of the desegregation decisions, few decisions were unanimous. The eminent scholar Justice John Marshall Harlan II took Frankfurter's place as the Court's spokesman for judicial self-restraint, often joined by Potter Stewart and Byron R. White. But with the appointment of Thurgood Marshall, the first black justice, and Abe Fortas (replacing Goldberg), Warren could count on six votes in most cases.
Associate justices of the Warren Court
- Hugo Black
- Stanley Forman Reed
- Felix Frankfurter
- William O. Douglas
- Robert H. Jackson
- Harold Hitz Burton
- Tom C. Clark
- Sherman Minton
- John Marshall Harlan II
- William J. Brennan, Jr.
- Charles Evans Whittaker
- Potter Stewart
- Byron White
- Arthur Goldberg
- Abe Fortas
- Thurgood Marshall
See also
- Earl Warren
- Supreme Court of the United States
- History of the Supreme Court of the United States
- Living Constitution
- United States Supreme Court cases during the Warren Court
References
- Sunstein, Cass, Breyer's Judicial Pragmatism, University of Chicago Law School, November 2005, pp. 3-4. ("To many people, the idea of judicial deference to the elected branches lost much of its theoretical appeal in the 1950s and 1960s, when the Supreme Court, under the leadership of Chief Justice Earl Warren, was invalidating school segregation (Brown v. Bd. of Educ.), protecting freedom of speech (Brandenburg v. Ohio), striking down poll taxes (Harper v. Bd. of Elections), requiring a rule of one person, one vote (Reynolds v. Sims), and protecting accused criminals against police abuse (Miranda v. Arizona).")
- Sunstein at 4 ("Is it possible to defend the Warren Court against the charge that its decisions were fatally undemocratic? The most elaborate effort came from John Hart Ely, the Warren Court's most celebrated expositor and defender, who famously argued for what he called a "representation-reinforcing" approach to judicial review. Like Thayer, Ely emphasized the central importance for democratic self-rule. But Ely famously insisted that if self-rule is really our lodestar, then unqualified judicial deference to legislatures is utterly senseless. Some rights, Ely argued, are indispensable to self-rule, and the Court legitimately protects those rights not in spite of democracy but in its name. The right to vote and the right to speak are the central examples. Courts promote democracy when they protect those rights.")
- Sunstein at 4 ("Ely went much further. He argued that some groups are at a systematic disadvantage in the democratic process, and that when courts protect 'discrete and insular minorities,' they are reinforcing democracy too.")
- Schwartz, Bernard (1996) The Warren Court: A Retrospective Oxford University Press, pg. 5. ISBN 0-19-510439-0 (preview)
- Schwartz (1996), pg. 6.
- Tushnet, Mark The Warren Court: in Historical and Political Perspective. (1996). pp 40-42.
- Introduction to Cooper v. Aaron
- Richard H. Fallon, The Dynamic Constitution: An Introduction to American Constitutional Law (2005) p 23
- Zietlow, Rebecca E. The Judicial Restraint of the Warren Court (and Why it Matters). January 23, 2007, available for download at http://papers.ssrn.com/sol3/papers.cfm?abstract_id=960144#PaperDownload
- White (1982) pp 159-61
- Michal R. Belknap, The Supreme Court under Earl Warren, 1953-1969 (2005) pp. 13-14
- In later years Eisenhower remarked several times that making Warren the Chief Justice was a mistake. He probably had the criminal cases in mind, not Brown. See David. A. Nichols, Matter of Justice: Eisenhower and the Beginning of the Civil Rights Revolution (2007) pp 91-93
- Powe (2000)
- See "Biographies: NAACP Legal Defense and Educational Fund, Inc., Teaching Judicial History, fjc.gov"
- See Smithsonian, “Separate is Not Equal: Brown v. Board of Education”
- For text see BROWN v. BOARD OF EDUCATION, 347 U.S. 483 (1954)
- Robert L. Carter, "The Warren Court and Desegregation," Michigan Law Review, Vol. 67, No. 2 (Dec., 1968), pp. 237-248 in JSTOR
- White, Earl Warren, a public life (1982) p. 208
- White, Earl Warren, a public life (1982) p. 161
- Patterson, Brown v. Board of Education: A Civil Rights Milestone and Its Troubled Legacy (2001)
- James A. Gazell, "One Man, One Vote: Its Long Germination," The Western Political Quarterly, Vol. 23, No. 3 (Sep., 1970), pp. 445-462 in JSTOR
- See REYNOLDS v. SIMS, 377 U.S. 533 (1964)
- Robert B. McKay, "Reapportionment: Success Story of the Warren Court." Michigan Law Review, Vol. 67, No. 2 (Dec., 1968), pp. 223-236 in JSTOR
- See MIRANDA v. ARIZONA, 384 U.S. 436 (1966)
- Ronald Kahn and Ken I. Kersch, eds. The Supreme Court and American Political Development (2006) online at p. 442
- Thomas Sowell, The Vision of the Anointed: Self-congratulation as a Basis for Social Policy (1995) online at p. 26-29
- See ENGEL v. VITALE, 370 U.S. 421 (1962)
- See Griswold v. Connecticut (No. 496) 151 Conn. 544, 200 A.2d 479, reversed
- Michal R. Belknap, The Supreme Court under Earl Warren, 1953-1969 (2005)
Further reading
- Atkins, Burton M. and Terry Sloope. "The 'New' Hugo Black and the Warren Court," Polity, Apr 1986, Vol. 18#4 pp 621–637; argues that in the 1960s Black moved to the right on cases involving civil liberties, civil rights, and economic liberalism.
- Ball, Howard, and Phillip Cooper. "Fighting Justices: Hugo L. Black and William O. Douglas and Supreme Court Conflict," American Journal of Legal History, Jan 1994, Vol. 38#1 pp 1–37
- Belknap, Michal, The Supreme Court Under Earl Warren, 1953-1969 (2005), 406pp excerpt and text search
- Eisler, Kim Isaac. The Last Liberal: Justice William J. Brennan, Jr. and the Decisions That Transformed America (2003)
- Hockett, Jeffrey D. "Justices Frankfurter and Black: Social Theory and Constitutional Interpretation," Political Science Quarterly, Vol. 107#3 (1992), pp. 479–499 in JSTOR
- Horwitz, Morton J. The Warren Court and the Pursuit of Justice (1999) excerpt and text search
- Lewis, Anthony. "Earl Warren" in Leon Friedman and Fred L. Israel, eds. The Justices of the United States Supreme Court: Their Lives and Major Opinions. Volume: 4. (1997) pp 1373–1400; includes all members of the Warren Court. online edition
- Marion, David E. The Jurisprudence of Justice William J. Brennan, Jr. (1997)
- Patterson, James T. Brown v. Board of Education: A Civil Rights Milestone and Its Troubled Legacy (2001) online edition
- Powe, Lucas A.. The Warren Court and American Politics (2002) excerpt and text search
- Scheiber, Harry N. Earl Warren and the Warren Court: The Legacy in American and Foreign Law (2006)
- Schwartz, Bernard. The Warren Court: A Retrospective (1996) excerpt and text search
- Schwartz, Bernard. "Chief Justice Earl Warren: Super Chief in Action." Journal of Supreme Court History 1998 (1): 112-132
- Silverstein, Mark. Constitutional Faiths: Felix Frankfurter, Hugo Black, and the Process of Judicial Decision Making (1984)
- Tushnet, Mark. The Warren Court in Historical and Political Perspective (1996) excerpt and text search
- Urofsky, Melvin I. "William O. Douglas and Felix Frankfurter: Ideology and Personality on the Supreme Court," History Teacher, Nov 1990, Vol. 24#1 pp 7–18
- Wasby, Stephen L. "Civil Rights and the Supreme Court: A Return of the Past," National Political Science Review, July 1993, Vol. 4, pp 49–60
- White, G. Edward. Earl Warren (1982), biography by a leading scholar
- The Legacy of the Warren Court, Time Magazine, 4 July 1969 | https://en.wikipedia.org/wiki/Warren_Court |
4.40625 | MEASURING Earth’s hidden structures could soon be as simple as looking at your watch – provided it’s a super-accurate atomic clock.
Such clocks are nearly good enough to deliver a detailed geoid, says Ruxandra Bondarescu at the University of Zurich, Switzerland, and her colleagues. The geoid is a model of Earth’s density variations – from the surface down to the mantle – as revealed by anomalies in the planet’s gravitational pull. Knowing the geoid’s shape can aid studies of deeply buried geological structures and show how mass is being redistributed over time, such as by the melting of polar ice sheets.
Geoid measurements use satellite readings of Earth’s gravitational field, but they are limited to a resolution of about 400 kilometres.
General relativity tells us that clocks run slightly faster above sea level due to higher gravitational potential – the potential energy an object has based on its position in a gravitational field – and slower below due to lower gravitational potential. That means an atomic clock carried around the surface could also measure the geoid, Bondarescu says. The team’s calculations show that large clocks in labs are already accurate enough to determine geoid heights to the nearest centimetre (arxiv.org/abs/1209.2889).
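To see why lab clocks already suffice, note that the fractional frequency shift between two clocks differing in height near Earth's surface is df/f = g*dh/c^2, a standard weak-field result; the sketch below is mine, not from the article:

```python
# Height resolution implied by a clock's fractional frequency accuracy,
# using the weak-field gravitational redshift df/f = g*dh/c^2.
G_SURFACE = 9.81  # m/s^2, surface gravity
C = 2.998e8       # m/s, speed of light

def height_resolution(fractional_accuracy: float) -> float:
    """Smallest height difference (m) a clock of this accuracy can resolve."""
    return fractional_accuracy * C**2 / G_SURFACE

# A clock stable to one part in 10^18, roughly the best laboratory clocks:
print(f"{height_resolution(1e-18) * 100:.1f} cm")  # about 0.9 cm
```

At that level a clock moved one centimetre upward ticks measurably faster, which is the geoid sensitivity the team describes.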
The clocks still need to be more portable, but Roger Haagmans at the European Space Agency says miniaturised versions are feasible.
| https://www.newscientist.com/article/mg21528844.200-atomic-clocks-get-a-grip-on-gravity
4.53125 | Though typically thought of as a tropical climate pattern, the influence of La Niña (the cold counterpart to El Niño) spreads as far as Antarctica, significantly slowing the melting rate of one of the continent's largest glaciers, according to a new study.
Pine Island Glacier, which makes up about 10 percent of the West Antarctic Ice Sheet, empties into the Amundsen Sea. The glacier's ice shelf (the part of it that floats atop the water and acts kind of like a doorstop to the rest of the glacier) has been thinning since at least the 1970s, when scientists first started recording its behavior. This thinning causes the glacier to flow more quickly toward the sea, and the faster flow drives the thinning of the rest of the glacier. The melting appears to originate from below, as relatively warm ocean water trickles through a gap between the base of the glacier and the land it rests on, lubricating the river of ice and pushing it seaward, where it periodically disintegrates into icebergs (a natural process known as calving).
Researchers previously believed that this disintegration has occurred steadily over time, in concert with steadily increasing average global atmospheric and oceanic temperatures. But new analyses from a team of researchers with the British Antarctic Survey shows that the glacier is more sensitive to sporadic weather and climate anomalies, such as La Niña events, than previously thought. [The Harshest Environments on Earth]
During a La Niña event, cold-water masses extend up the coast of South America and into the central equatorial Pacific. (During an El Niño event, warmer-than-average waters predominate.) Eventually, the cold water gets pulled into a water mass known as the Circumpolar Deep Water, which sweeps by the continental shelf nearby Pine Island Glacier.
Portions of the Circumpolar Deep Water seep under the glacier, but its deepest, coldest depths are barricaded by a ridge in front of the glacier. As a result, usually only warm water can seep under the glacier, since warm water rises above cold water.
But observations taken in January 2012 during a La Niña event show that a mass of cold water appears to have been thick enough to breach the ridge and keep the glacier's underbelly cool, preventing excessive melting and resulting in the lowest summer melting on record, producing about half as much meltwater as that which occurred in January 2010, the last time similar observations of the area were made.
"This enormous, and unexpected, variability contradicts the widespread view that a simple and steady ocean warming in the region is eroding the West Antarctic Ice Sheet," study co-author Pierre Dutrieux, of the British Antarctic Survey, said in a statement.
The researchers were surprised to find that the glacier was so vulnerable to these short-term climate anomalies.
"It is not so much the ocean variability, which is modest by comparison with many parts of the ocean, but the extreme sensitivity of the ice shelf to such modest changes in ocean properties that took us by surprise," study co-author Adrian Jenkins, also from the British Antarctic Survey, said in a statement.
As Pine Island Glacier melts, it contributes to global sea level rise, which could reach 10 to 16 feet (3 to 5 meters) above current levels if the entire West Antarctic Ice Sheet were to melt. The researchers say that if these La Niña events were to occur more frequently in the future, the glacial melting rate would slow substantially and the rise in sea level would slacken. However, the scientists say they have no evidence suggesting this will be the case, and they expect the glacier to continue melting and disintegrating through the rest of the century.
The study findings appeared online on Jan. 2 in the journal Science.
Related on LiveScience and MNN: | http://www.mnn.com/earth-matters/wilderness-resources/stories/antarcticas-pine-island-glacier-influenced-by-la-nina |
4.0625 | Ancient Greek and Roman influence has developed over thousands of years, and many of us today never realize how much of that past lives on. Greek architecture is one of the most evident and visible influences. Greek architecture consists of the Doric, Ionic and Corinthian styles, or orders (Scranton, 2010). Most of what we know about Greek architecture comes from the ruined shells of temples and buildings from the Hellenistic and Roman periods. The Doric style (a simpler style) is sturdy and unadorned. Doric columns have no base and rise directly from the floor of the building. A capital forms at the top of each column; it consists of two sections, the echinus and the abacus. This style was used mostly in mainland Greece and southern Italy (Wesley, 2010).
Today you can look out of your bedroom window and see Doric columns used on the porch of a house. You can also find the Doric order on many important buildings. For instance, the Doric columns of the Lincoln Memorial in Washington, DC are a great use of the Doric style. They also show that we pay homage to people of great importance much as the Greeks paid homage to their gods.
The Ionic style is the more elegant and elaborate of the two orders. Twenty-four flutes, separated by narrow vertical bands, rise from the column's tiered base. The capital is ornamented with volutes, which consist of a pair of spirals. This style was found in eastern Greece (Wesley, 2010). At present, you can come across Ionic styles as well. The Ionic style adorns the Bank of America building in downtown Chicago and, in its most famous display, the White House.
Another illustration of Greek and Roman influence is Roman law, beginning with the Laws of the Twelve Tables. These were the earliest attempt by the Romans to create a code of law, and they formed the centerpiece of the Roman constitution. In 462 BC, after years of struggle between plebeians and patricians, a plebeian tribune named Terentilius proposed that the laws be written down (Duhaime.org, 2010). “(I)n order that (the patricians) unbounded license might not last forever, he (Terentilius) would bring forward a law that five persons be appointed to draw up laws regarding the consular power, by which the consul should use that right which the people should have given him over them, not considering their own caprice and license as law.” (Duhaime.org, 2010) At that time the plebeians were not aware of the basics of Roman law, as the laws were kept secret (Duhaime.org, 2010). “The plebians were in ignorance of the Roman laws, which were a secret of the pontifices (priests) and other patricians and were administered with unfair severity against plebians.” (Duhaime.org, 2010)
These Twelve Tables are the foundation of many legal systems, including those of the United States and most other Western countries. Laws on defamation and slander, for example, have parallels there. Table Seven:
- If a person slanders another by song, the slanderer shall be clubbed to death (Duhaime.org, 2010).
Our laws have evolved in many ways since those days, but this structure has given many societies a strong foundation.
Furthermore, Greek and Roman influences are the ones we most enjoy. From our music, art, plays, and literature, they have left their mark on many of our most famous accomplishments in history. From Shakespeare to Steven Spielberg, these ancient cultures sparked some of the most unique dreams. For most of us the most well-known influence of ancient Greece is the Olympics. The Greeks invented this athletic competition. Like today’s Olympic Games, these games took place every... | http://www.studymode.com/essays/Greek-And-Roman-Influences-On-The-808036.html
4 | written by: Brooke McClendon•edited by: Emma Lloyd•updated: 7/29/2010
How does meiosis allow DNA to be divided into gametes? Learn all about this process, and the less-apparent benefits of sexual reproduction as opposed to asexual reproduction.
Sexual vs. Asexual Reproduction
Organisms can reproduce with or without sex. Bacteria, and other single-celled organisms can simply undergo cell division to reproduce. Plants can create offshoots that will eventually separate from the parent and become an independent plant. Perhaps as a child you cut a worm in half and watched in some sort of awestruck horror as the two halves began to move as if each was its own worm. These are all forms of asexual reproduction. Only one parent and one set of genetic information is involved, and each offspring is a clone of the parent. Asexual reproduction is beneficial to propagate the species in times of environmental stress, or when mates are either few or out of reach.
Sexual reproduction, on the other hand, involves the mixing of genetic information from two parents; not blending, as some traits from one parent will dominate over those of the other parent, but combining. Sexual reproduction is evolutionarily favorable as it creates opportunities for genetic variation and new combinations of genes, but comes at a greater cost. For one, the mixing of genetic information from two parents has unpredictable results. Evolution proceeds randomly. Organisms who have genes that make them well-suited for survival will reach sexual maturity and pass those genes on to their offspring. Conversely, organisms who do not have genes that make them well-suited for survival will most likely die before reaching sexual maturity, removing their genes from the pool. When the goal of a species is to survive, and the goal of the individual to reproduce, that sort of genetic unpredictability may seem too risky. It seems, however, that the risk is minimal compared to the reward. After all, in a world that constantly changes, sexual reproduction can help a species keep up.
How Does Meiosis Allow DNA to Be Divided Into Gametes?
Gametes are cells that are specialized to carry out sexual reproduction. A key difference between gametes and other cells is that they are haploid, meaning they contain one set of chromosomes. They arise from diploid cells (containing two sets of chromosomes) through the process of meiosis. Not just any diploid cell produces gametes through meiosis; diploid cells destined to become haploid gametes reside in the ovaries in females and the testes in males.
After DNA replication, homologous chromosomes (now 2 paternal and 2 maternal) pair together. Their proximity, and the large amount of shared DNA sequence, can lead to homologous recombination, also called crossing-over. This event accounts for genetic variability. The cell then undergoes not one, but two meiotic divisions, the process of which is similar to that of mitosis (chromosomes are attached to mitotic spindle and pulled to opposite poles, etc). The end result is four gametes, each containing one complete set of single chromosomes. Each gamete will contain an assortment of paternal and maternal chromosomes.
How does meiosis allow DNA to be divided into gametes? By undergoing two successive cellular divisions with only one round of DNA replication beforehand.
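To make that bookkeeping concrete, here is a toy tally (my illustration, not from the article) of chromatid counts for a human-like cell with n = 23 through one round of replication and two divisions:

```python
# Toy bookkeeping of meiosis for a cell with n = 23 (human-like numbers).
# Tracks chromatids per cell; real meiosis also shuffles and recombines
# maternal and paternal copies, which this sketch ignores.
def meiosis_tally(n: int = 23) -> int:
    chromatids = 2 * n        # diploid cell before S phase: 46 chromosomes
    chromatids *= 2           # one round of DNA replication: 92 chromatids
    per_cell_after_m1 = chromatids // 2         # meiosis I: homologs separate
    per_cell_after_m2 = per_cell_after_m1 // 2  # meiosis II: sister chromatids separate
    return per_cell_after_m2

print(meiosis_tally())  # 23: each of the four gametes ends up haploid
```

One replication followed by two divisions is exactly what halves the chromosome number, which is the arithmetic behind the answer above.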
Alberts B, Bray D, Johnson A, Lewis J, Raff M, Roberts K, Walter P. 1998. Essential Cell Biology: An Introduction to the Molecular Biology of the Cell. Garland Publishing, Inc: New York. | http://www.brighthub.com/science/genetics/articles/78312.aspx |
4.34375 | Solving quadratics by taking square root
In this tutorial you will learn about the most basic way of solving quadratic equations.
Learn how to solve quadratic equations like x^2=36 or (x-2)^2=49.
Sal shows a few examples of quadratic equations whose form allows for solving by simply taking the square root of both sides (after quickly arranging the equation).
Sal solves the equation 2x^2+3=75 by isolating x^2 and taking the square root of both sides.
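As a worked illustration of that method (my own steps, not Khan Academy's transcript), remember to keep both square roots:

```latex
\begin{aligned}
2x^2 + 3 &= 75 \\
2x^2 &= 72 && \text{subtract 3 from both sides} \\
x^2 &= 36 && \text{divide both sides by 2} \\
x &= \pm\sqrt{36} = \pm 6 && \text{take the square root of both sides}
\end{aligned}
```

Checking either root in the original equation gives 2(36) + 3 = 75, as required.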
Solve quadratic equations of the form (x+a)^2-b=0.
Sal discusses the exact order of steps in the process of solving the equation 3(x+6)^2=75.
Sal analyzes a given solution of a quadratic equation, and finds where and what was the error in that process.
Analyze the process of solving a quadratic equation by taking the square root. | https://www.khanacademy.org/math/algebra/quadratics/quadratics-square-root |
4 | The discovery in Brazil of a rock carving estimated to be over 10,000 years old could change theories about when humans first came to the Americas. Brazilian archaeologists found the petroglyph in 2009 in Lapa do Santo, an archaeological site in central Brazil about 35 miles from Belo Horizonte, the capital of Minas Gerais state. It is about 12 inches long and depicts a man with a C-shaped head, three fingers on each hand and a large phallus. A photograph of the figure, which was found about 30 millimeters below a hearth, can be seen via PLoS One.
Writing in the online scientific journal PLoS One, archaeologist Walter A. Neves says that the “petroglyph found at Lapa do Santo is the oldest, indisputable testimony of rock art in the Americas.” Tests of the sediment covering the carving have led Brazilian archaeologists to estimate that the petroglyph is older than 10,500 years and could even be as old as 12,000 years.
Neves notes that the petroglyph is invaluable evidence for learning about “the symbolic world of the first humans who settled the New World” due to the extreme rarity in the Americas of “artistic manifestations either as rock-art, ornaments, and portable art objects.” Indeed, says Neves,
These data allow us to suggest that the anthropomorphic figure is the oldest reliably dated figurative petroglyph ever found in the New World, indicating that cultural variability during the Pleistocene/Holocene boundary in South America was not restricted to stone tools and subsistence, but also encompassed the symbolic dimension.
Similar sorts of rock-art have been found in the same region, at Lapa do Ballet and Lapa das Caieiras, and also in other parts of Brazil to the northeast. Finding the petroglyph in Lapa do Santo suggests not only that humans living there had made greater cultural advances than previously thought, but that there was “cultural contact among groups as far apart as 1,600 km [about 994 miles] by the beginning of the Holocene in Eastern South America.”
According to the currently most widely accepted theory, it was 11,000 years ago that humans first crossed the Bering Strait from Siberia to Alaska, after which they gradually migrated southward. The alternate “Clovis theory” says that Clovis people from western North America were on the continent 11,500 years ago. The Lapa do Santo petroglyph provides further material evidence about how long we humans have been in the Americas; might more be found?
| http://www.care2.com/causes/oldest-rock-art-in-the-americas-found-in-brazil.html
4.21875 | Birds occur on land, sea and freshwater, and in virtually every habitat, from the lowest deserts to the highest mountains. Our knowledge of bird species can tell us a great deal about the state of the world and wider biodiversity. Patterns of bird diversity are driven by fundamental biogeographic factors, with tropical countries (especially in South America) supporting the highest species richness.
A worldwide total of over 10,000 different species of birds are recognised by BirdLife International, the majority (c.80%) occurring in continental regions, the remainder on islands. Birds occupy a huge variety of habitats and are found at the extremes of latitude and land elevation. This great diversity of land-, water- and seabird species is distributed across the world, and some of the smallest nations have rich bird faunas. Birds are important components of the world’s ecosystems, so the state of the world’s birds tells us a lot about the state of the environment.
However, the distribution of birds is uneven: the different biogeographic realms vary substantially in terms of the numbers and types of bird species they hold (see map). By far the richest is the Neotropical realm, which holds c.36% of all known landbird species. This is followed by the Afrotropical (c.21%), Indomalayan (c.18%), Australasian (c.17%), and then the Palearctic (c.10%), Nearctic (c.8%) and Oceanic (c.2%) realms. Although they have relatively few species in total, the Pacific islands in the Oceanic region are unusually rich for their size; together they hold 20 times more species per unit area than South America, the richest of the continents (Newton 2003). Country by country, the richest territories for avian diversity are Colombia, Peru, Brazil, Ecuador and Indonesia (each with more than 1,500 species), followed by Bolivia, Venezuela, China, India, the Democratic Republic of Congo, Mexico, Tanzania, Kenya and Argentina (all around 1,000 or more; BirdLife International unpublished data).
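To see what "unusually rich for their size" means in numbers, one can compare species per unit area; the sketch below uses rough, assumed figures (not BirdLife's data) purely to illustrate the calculation:

```python
# Illustrative species-density comparison with assumed round numbers.
# Real richness and area figures differ; this only shows the arithmetic.
regions = {
    "South America":   (3000, 17_840_000),  # (bird species, land area in km^2)
    "Pacific islands": (550,      550_000),
}

for name, (species, area_km2) in regions.items():
    density = species / area_km2 * 1_000_000
    print(f"{name}: {density:,.0f} species per million km^2")
# The island group comes out several times denser per unit area; Newton's
# 20x figure for the Pacific islands rests on different underlying datasets.
```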
There is much debate over what factors have been important in driving global patterns in biological diversity. The existence of big geographic differences in bird species diversity is thought to result from the differing conditions experienced over evolutionary time. Particularly influential is the variety (and area) of different habitats present. Tropical forests are especially rich in species; hence the particularly high avian diversity found in the equatorial regions. Other major influences include physical barriers such as impassable oceans and mountain ranges, climatic events such as the recent glacial cycles, biotic constraints such as natural enemies and competing species and, more recently, expanding and pervasive human impacts. The distributions of other taxa are less well-known than those of birds, but they are also determined by these fundamental biogeographic factors. This makes birds a useful starting point for mapping broad-scale patterns in species richness and endemism.
Compiled 2004, updated 2008 and 2013
BirdLife International (2013) Birds are found almost everywhere in the world, from the poles to the equator. Presented as part of the BirdLife State of the world's birds website. Available from: http://www.birdlife.org/datazone/sowb/casestudy/60. Checked: 10/02/2016
Key message: There is an extraordinary variety of birds. | http://www.birdlife.org/datazone/sowb/casestudy/60
4.03125 | Language is systematic communication by vocal symbols. It is a universal characteristic of the human species. Nothing is known of its origin, although scientists have identified a gene that clearly contributes to the human ability to use language. Scientists generally hold that it has been so long in use that the length of time writing is known to have existed (7,900 years at most) is short by comparison. Just as languages spoken now by peoples of the simplest cultures are as subtle and as intricate as those of the peoples of more complex civilizations, so the forms of languages known (or hypothetically reconstructed) from the earliest records show no trace of being more "primitive" than their modern forms.
Because language is a cultural system, individual languages may classify objects and ideas in completely different fashions. For example, the sex or age of the speaker may determine the use of certain grammatical forms or avoidance of taboo words. Many languages divide the color spectrum into completely different and unequal units of color. Terms of address may vary according to the age, sex, and status of speaker and hearer. Linguists also distinguish between registers, i.e., activities (such as a religious service or an athletic contest) with a characteristic vocabulary and level of diction.
The Columbia Electronic Encyclopedia, 6th ed. Copyright © 2012, Columbia University Press. All rights reserved. | http://www.factmonster.com/encyclopedia/society/language.html |
4.28125 | Acceleration is the rate at which an object changes its speed or velocity. The formula for finding acceleration is (Vf - Vi)/T, where Vf is the final velocity, Vi is the initial velocity and T is the time elapsed.
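A minimal sketch of that formula (my example; the entries below it are snippets from the search results the page lists):

```python
# Average acceleration from the definition a = (vf - vi) / t.
def acceleration(v_initial: float, v_final: float, elapsed_s: float) -> float:
    """Average acceleration in m/s^2, given velocities in m/s and time in s."""
    if elapsed_s <= 0:
        raise ValueError("elapsed time must be positive")
    return (v_final - v_initial) / elapsed_s

# A car going from rest to 27 m/s (about 60 mph) in 6 seconds:
a = acceleration(0.0, 27.0, 6.0)
print(f"a = {a:.1f} m/s^2")                 # 4.5 m/s^2
print(f"v after 10 s = {a * 10:.0f} m/s")   # v = v0 + a*t, as in the last snippet
```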
In physics terms, acceleration, a, is the amount by which your velocity changes in a given amount of time. Given the initial and final velocities, vi and vf, and the ...
Calculating the acceleration of a Porsche. ... I just wanted to show you that; well, one, how do you calculate acceleration, and give you a little bit of sense what it ...
The final mathematical quantity discussed in Lesson 1 is acceleration. ... This equation can be used to calculate the acceleration of the object whose motion is ...
How to Calculate Acceleration. Acceleration is the rate of change in the velocity of an object as it ...
In this lesson, we will learn how to determine the acceleration of an object if the ... The three major equations that will be useful are the equation for net force ...
Jun 13, 2011 ... Calculating the acceleration of a Porsche. More free lessons at: http://www.
You might have observed while pushing a bus it suddenly starts. Lift gives an upward push when it starts. This is what acceleration is! Here velocity changes.
This lesson describes the difference between speed, velocity and acceleration. Examples are used to help you understand the concept of acceleration...
v = v0 + at - Calculate velocity as a function of initial velocity, acceleration and time. Free online physics calculators and velocity equations in terms of constant ... | http://www.ask.com/web?q=Calculating+Acceleration&o=2603&l=dir&qsrc=3139&gc=1 |
4.40625 | About Our Infinitive and Gerund Worksheets
This is the infinitive and gerund section of Busy Teacher. You will find 127 worksheets on this topic, as well as a very useful article with some suggestions on how to explain the difference between the two to your students. This worksheet has two straightforward infinitive and gerund practice activities for your intermediate-level students. In both exercises, students have to fill in blanks with the correct form of the verb provided. You can adjust the difficulty of the activity by having students work in pairs or groups to complete the exercise; this will obviously make it easier than completing the worksheet individually. Be sure to review the answers as a class before moving on to the next activity. For more ideas, look at the other worksheets in this section.
Students will need to learn when to use the infinitive and gerund forms of a verb. This is a skill that native speakers take for granted. Using the infinitive when a gerund is required just sounds wrong and unnatural but English language learners will not hear that until they have become very familiar with the correct uses of these forms. The key here is practice, practice, practice so include a variety of exercises when studying these topics and be sure to do review exercises that include both. | http://busyteacher.org/classroom_activities-grammar/infinitive_and_gerund-worksheets/ |
4.28125 | Virginia reached its greatest extent in 1609, when King James I issued a second charter to the Virginia Company of London that defined new boundaries for the Virginia colony. The new claims extended north to what is now Maine, south to near the modern North-South Carolina border, and "from sea to sea, west and northwest" all the way to the Pacific Ocean.
Virginia's claims to the Northwest Territory were based on that charter, plus military conquest of Kaskaskia and Vincennes during the American Revolution. Those claims were upheld by colonial and then state officials until 1783.
A third charter issued by King James I in 1612 retained Virginia's rights to "all that Space and Circuit of Land lying from the Sea Coast of the Precinct aforesaid, up into the Land throughout from Sea to Sea West and North-west," plus it expanded the colony's boundaries eastward to include the recently-discovered island of Bermuda.
Virginia has been reduced in size in various ways since 1612. The Virginia Company spun off Bermuda in 1615, when the Bermuda Company obtained a separate charter from King James I for the Plantation of the Somers Isles.1
In 1624, after the Virginia Company had failed to produce a profit and the 1622 uprising demonstrated the company's inability to even defend the colony, King James I revoked the 1612 charter. The conversion of Virginia from a private business into a royal colony altered the basis for government, but the king did not alter any of Virginia's boundaries in 1624.
Despite subsequent legal confusion over the validity of outdated charters - and perhaps in part because of it - Virginia officials have cited the boundaries in the 1609 charter whenever there was an opportunity to assert control over western lands that are today part of western Pennsylvania, Ohio, Indiana, Illinois, Michigan, and Wisconsin.
What the king granted to a few well-connected friends in the Virginia Company, the king could take away. The boundaries of the colony were modified by various actions of the king, from the creation of Maryland in 1632 through the Quebec Act of 1774.
Once Virginia became an independent state, its western boundary was minimized first by a 1781 voluntary land cession of the territory past the Ohio River to the Confederation Congress. That was followed by creation of the State of Kentucky in 1792, then the splitting off of West Virginia during the 1861-65 Civil War. In 1866 the US Congress declared that Berkeley and Jefferson counties were part of West Virginia, and in 1870 the US Supreme Court ruled that action was valid.
On the eastern boundary, Virginia's claim to the seabed of the Atlantic Ocean was eliminated by a 1947 Supreme Court decision, though the US Congress restored rights to the first three miles of the Outer Continental Shelf in the 1953 Submerged Lands Act.
Virginia's Illinois County was created in 1778, based on the conquest by George Rogers Clark and citing claims dating back to the 1609 charter, but that land was ceded to the new United States in 1783 and became the Northwest Territory when the US Congress accepted the land cession in 1784.
Map Source: ESRI, ArcGIS Online
boundaries of Virginia in 1784, after ceding northwestern land claims to the national government but before the creation of Kentucky (1792) or West Virginia (1863)
When English kings created other colonies such as Maryland and Carolina, they changed the northern and southern boundaries of Virginia. Grants to other colonies included overlapping claims to western lands. The western edge of Virginia remained unclear when King George III issued the Proclamation of 1763. That proclamation barred colonial settlement in the Mississippi River watershed, where the French had relinquished their claims in the 1763 Treaty of Paris that ended the Seven Years War.
The western edge of authorized settlement was moved through negotiations with the Cherokee (the 1768 Treaty of Hard Labor, followed by the 1770 Treaty of Lochaber) and the Iroquois (the 1768 Treaty of Fort Stanwix). Those treaties opened most of the desirable land in the Ohio River watershed for colonial settlement, though the Shawnee and other nations disputed the authority of the Cherokee and Iroquois to permit colonial settlement there.
England's tacit acceptance of Virginia's western land claims ended with the 1774 Quebec Act. It transferred responsibility for granting ownership for most lands west of the Appalachian Mountains from the restless colonies such as Virginia, Maryland, Pennsylvania, and New York to the more-loyal province of Quebec.
the Proclamation of 1763 attempted to create an Indian Reserve and block colonial settlement on lands west of the Allegheny Mountains (shaded green), limiting Virginia's ability to issue legal title to lands granted to the Loyal Land Company, the Ohio Land Company, and others
Source: Library of Congress, A New map of North America from the latest discoveries (1763)
During the fifth of the Virginia "conventions" held after the colonial government collapsed and before the General Assembly was created, Virginia leaders declared independence. In the first Virginia constitution adopted by that convention in June, 1776, the new Commonwealth of Virginia blatantly ignored King George III's transfer of western lands to Quebec, a colony which stayed loyal to King George III during the American Revolution.
The Virginians were more accommodating to those other 12 colonies who joined the fight against Great Britain. In June, 1776, the Fifth Convention adopted a constitution that defined the new state's boundaries as:2
In addition to Virginia, other colonies were constrained by the Proclamation of 1763. Thomas Jefferson listed a specific complaint about the Quebec Act in the Declaration of Independence, when he justified the 13 colonies' break from King George III:3
After declaring independence, Virginia asserted in its first constitution that the state boundaries included the vast territory west of Pennsylvania/New York and north of the 36° 30' parallel (the border with North Carolina).
In 1778, Virginia troops led by George Rogers Clark captured British encampments at Kaskaskia and Vincennes in the Illinois territory. The Virginia General Assembly then created Illinois County, encompassing all the lands west of the Ohio River.
Military conquest of the northwest, extending to the Mississippi River, added a key element to Virginia's claim over the land. King George III had tried to define western boundaries for the colonies in the Quebec Act, and Virginia troops alone had been responsible for defeating the king's claim. As viewed in Virginia, what the king lost... Virginia gained.
Virginia's control over the territory between the Ohio River and the Great Lakes generated conflicts between states with competing land claims, and with states that had no such claims. If Virginia was allowed to sell its western lands and pocket the revenue, plus control elections and administer the area, then the other states would see Virginia's economic and political power grow at their expense. Whatever the reasons in the different colonies for declaring independence, in no case did the other 12 states relish the prospect of replacing domination from England with domination from Virginia.
A "congress" of the colonies started meeting formally in 1774. The rebellious colonies/states started fighting the American Revolution together starting in 1775. Virginia had been the first state to ratify the proposed Articles on Confederation in 1777. Maryland, Delaware and New Jersey delayed, demanding that states with western land claims relinquish them before creating a new national union.
Over half of the states participating in the Continental Congress (MA, CT, NY, VA, NC, SC, and GA) had charters with no fixed boundary on their western edge, or cited other authorities to justify overlapping claims to some of the same territory claimed by Virginia.
The other six states (NH, RI, NJ, PA, DE, and MD) had no justification to assert authority over western lands, but still had great interest regarding how that land would be sold, settled, and administered. Those six states feared that in the future, as population grew, the landlocked states would lose power and influence as the other states expanded.
The ultimate solution was to have all the states establish a clear territorial line on their western borders, to give land claims west of those borders to the new national government, and to create new states from that public domain. That solution limited the future growth of the seven states with land claims, while income from land sales in the new national territory could be used to pay off the national government's debts from the Revolutionary War. The new public domain also provided a way for the Congress to honor land grants promised as bounties for serving in the Continental Army.
George Rogers Clark captured Kaskaskia and Vincennes during the American Revolution, cementing Virginia's claims to the territory northwest of the Ohio River
Source: National Park Service, Fort Jefferson, 1780-1781: A Summary Of Its History
Virginia's land borders were modified between the 1632 creation of Maryland (yellow) and the 1870 Supreme Court ruling that Berkeley and Jefferson counties (red) were legally added to West Virginia by Congressional action in 1866
Source: Franklin K. Van Zandt, Boundaries of the United States and the Several States, Geological Survey Professional Paper 909 (p.144)
Legislators in the General Assembly knew that most trade in the region would be via boats going down the Ohio and Mississippi rivers to New Orleans, rather than to Virginia's ports on the Chesapeake Bay. Virginia's leaders were willing to transfer the state's claims to the national government, and did not plan to retain political control permanently over lands northwest of the Ohio River.
Affirming the claims of Maryland/Pennsylvania to the boundaries defined in charters to Calvert and Penn, plus agreeing to create new states from lands northwest of the Ohio River, would reduce interstate conflicts. Though it had been hard to capture Kaskaskia and Vincennes, it would be even harder to govern them from Richmond.
Congress encouraged Virginia's land cessions. Creating a public domain, a vast stretch of land controlled by a united national government rather than having separate states retain control over lands stretching to the Mississippi River, would tighten the bonds between Virginia and the other 12 breakaway colonies and would focus colonial efforts on winning independence from Great Britain.
Arranging the land cession involved complicated politics. After Delaware ratified the Articles of Confederation on February 1, 1779, Maryland was left as the last holdout.4
Virginia reacted to the delay by opening a land office in 1779. It started to sell western lands to raise revenue, and issued patents for land based on old military warrants issued for service in the French and Indian War.
In response, the Continental Congress called for all states to freeze their land sales:5
Virginia's General Assembly objected strongly to interference by the nascent national government in the independent state's internal affairs. The legislature issued a "remonstrance" on December 14, 1779:6
Maryland's resistance postponed adoption of the Articles of Confederation for three years.
New York led the land cession process by example on January 17, 1780. It abandoned its flimsy claim that treaties with the Six Nations had conveyed the land claims of the Iroquois, obtained by right of conquest, to territory southwest of Lake Ontario. New York authorized its delegates to the Continental Congress to transfer the state's rights (if any...) to lands outside of its established boundaries.
Congress then finessed the issue over national vs. state authority by simply asking the other states to mimic New York.
Virginia's legislature relinquished its land claims on January 2, 1781. It imposed conditions that required the Congress to:7
Those conditions would validate property ownership of Virginia speculators, but eliminate the potential value of various land companies chartered outside of Virginia. In particular, those conditions blocked land claims of the Transylvania colony proposed by Judge Richard Henderson, and ensured Virginia's claim of authority over Kentucky.
Virginia included in its January 2, 1781 action that:8
Maryland finally signed the Articles of Confederation on March 1, 1781, and the 13 colonies officially became one nation.
A spur for Maryland's action was a series of raids by the British navy and privateers throughout the Chesapeake region in 1780. When Maryland asked the French to provide ships to block the raids, the French responded with a suggestion that Maryland should first ratify the Articles of Confederation. It did so slightly more than six months before Lord Cornwallis surrendered at Yorktown.
Virginia renewed its cession offer on October 20, 1783, without requiring a guarantee of the state borders. That finessed the previous requirement that the other states acknowledge Virginia's authority over Kentucky. Congress accepted Virginia's new cession offer on March 1, 1784.9
Massachusetts and Connecticut, like Virginia, had charters that established land claims extending into the Ohio Country
Source: Bureau of Land Management, A History of the Rectangular Survey System
A remaining complication: the Congress rejected the Virginia conditions as established on January 2, 1781. Virginia's land cessions were not accepted immediately. The Treaty of Paris between the United States and England in 1783 established the Mississippi River as the western edge of the new nation, but debate regarding state land claims west of the Ohio River continued until 1784.
In the two years of further discussion, the other states sought to gain control of the Virginia lands between the Proclamation Line of 1763 and the Ohio River. Only one succeeded; Connecticut managed to obtain rights to a "Western Reserve" in what is now Ohio.
In March 1784, Thomas Jefferson wrote to George Washington on the urgency of Virginia improving its transportation network westward from the Potomac River to the Ohio River, so Virginia rather than New York would capture the future trade between the "Northwest" and ports on the Atlantic coast. In that letter, he justified why Virginia should retain western lands to the Ohio River, especially the Kanawha River watershed, but allow new states to be formed from lands beyond the Ohio River:10
Virginians played a major role in shaping the management of the new Northwest Territory. Through the Land Ordinance of 1785, Thomas Jefferson proposed the system by which lands would be surveyed before sale, using what evolved into the Public Land Survey System. He would play an even greater role in acquiring Federal territory west of the Mississippi River, through the Louisiana Purchase.
The Northwest Land Ordinance of 1785 defined how the new public domain of the national government would be sold, and protected one remaining claim of Virginia to lands northwest of the Ohio River. Virginia had set aside lands in Kentucky for soldiers and sailors who had served in the state or national forces during the American Revolution - one incentive to enlist or remain in the military was the potential value of the land grants. Virginia was concerned that the designated Kentucky lands on the Cumberland River, between the Green and Tennessee rivers, would not be sufficient to redeem all bounties issued for enlisting.
Just in case, Virginia's cession to the Congress identified a 4.2 million acre Virginia Military Reserve in Ohio, where those who had served in Virginia's military forces during the American Revolution (and those who had purchased the land rights from the soldiers and officers...) could claim their property:11
one of the first maps issued after the end of the American Revolution finessed the western boundaries of Virginia and Pennsylvania by omitting them completely
Source: Library of Congress, The United States according to the definitive treaty of peace signed at Paris Sept. 3d. 1783 (William McMurray, 1784) | http://www.virginiaplaces.org/boundaries/cessions.html |
4.1875 | Have a read about two's complement; it's not just an operation, it's the representation of signed numbers.
I'll give you a start
char x = 0xFF;
On most systems, char is signed by default (although this is not guaranteed). 0xFF represented in binary is 1111 1111. The first bit is set to '1' in the signed character; this implies that the value is in fact a negative value. Implementing two's complement, you will find that the variable 'x' is equal to negative 1.
This takes care of the explanation for one of the cases. What do you think happens when you assign a negative signed integer to an unsigned integer?
This post has been edited by jjl: 18 January 2013 - 10:51 PM
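To make the two cases above concrete, here is a small, self-contained C program (a sketch; as the replies below point out, the char case is implementation-defined, so the first result is only what a typical two's complement machine with an 8-bit char prints):

#include <stdio.h>

int main(void)
{
    /* Case 1: out-of-range constant assigned to a (typically signed) char.
       On a two's complement implementation with CHAR_BIT == 8, the bit
       pattern 1111 1111 reads back as -1. The C standard leaves this
       conversion implementation-defined. */
    char x = 0xFF;
    printf("%d\n", (int)x);            /* typically prints -1 */

    /* Case 2: negative signed value assigned to an unsigned type.
       This conversion is fully defined: the value is reduced modulo
       UINT_MAX + 1, so -1 becomes 4294967295 with a 32-bit unsigned int. */
    int s = -1;
    unsigned int u = (unsigned int)s;
    printf("%u\n", u);

    return 0;
}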
"Try it and see" is a dangerous philosophy that leads to unportable code. This code invokes implementation-defined behaviour as indicated by §184.108.40.206 of the C standard. Consider if the implementation were to decide to raise a signal indicating that the code above has performed a computational error, and the signal handler were to return. This code would, for that implementation, be in the realms of undefined behaviour.
@jjl: Have you ever heard of ones' complement or sign and magnitude?
Furthermore, this code depends upon CHAR_BIT. There are existing systems where CHAR_BIT is 16 and a signed char can store values in the range -32767 to 32767. On such systems, char x = 0xFF; would result in x storing the positive value 255. This has nothing to do with signed representation.
The output for that program is entirely implementation defined.
"Try it and see" is a dangerous philosophy that leads to unportable code.
If someone is going to post some code (especially something basic) and ask what it does without having made any apparent effort, I think running it is a valid way to figure it out. Once the result is known, explanation is a good next step.
There are several things wrong with the code provided. A couple of these problems actually lead to undefined behaviour. The use of the "%x" format specifier in the printf() is one example of undefined behaviour. The "%x" format specifier is only valid for an unsigned int value, and the C standard states that using a specifier that doesn't match the type is undefined behaviour. From the section of the C standard describing the fprintf function:
If a conversion specification is invalid, the behavior is undefined.282) If any argument is not the correct type for the corresponding conversion specification, the behavior is undefined.
Since the OP stated that a char is a signed char, assigning a value of 0xFF will produce undefined behaviour, if this char is also an 8 bit value, since overflow of a signed integral value is undefined.
A decent compiler should reject the code as provided: since no include files were used, printf() is implicitly declared and its arguments cannot be checked against a prototype, and the compiler should also be able to detect the problem with the signed char overflow. So in my opinion telling the OP to try to compile this program is a valid response to the code dump.
By the way here are the errors and warnings my compiler produced:
main.c|2|error: return type defaults to ‘int’|
main.c||In function ‘main’:|
main.c|4|warning: overflow in implicit constant conversion [-Woverflow]|
main.c|8|error: implicit declaration of function ‘printf’ [-Wimplicit-function-declaration]|
main.c|8|warning: incompatible implicit declaration of built-in function ‘printf’ [enabled by default]|
||=== Build finished: 2 errors, 2 warnings ===| | http://www.dreamincode.net/forums/topic/307946-how-to-store-signed-char-into-an-int/page__pid__1786178__st__0 |
4.28125 | Excel is a powerful spreadsheet program, and knowing some basic formulas can make manipulating your data much more manageable. One of the most useful functions is the multiplication function. This guide will walk you through the several different ways to multiply values in Excel.
Multiplying Two or More Numbers in a Cell
1. Create your formula. All formulas in Excel start with the = symbol. After the = sign, enter the numbers you want to multiply, separated by the * sign. After entering the numbers, press the Enter key and the answer will replace your formula in the cell. For example:
- The cell you entered the formula in should display 24.
- If you enter an "x" instead of an "*", Excel will attempt to correct it for you.
2. Adjust your formula. Even though the result is displayed in the cell, the formula remains in the formula bar above the worksheet. You can adjust the numbers however you’d like and the new result will be displayed in the cell.
3. Multiply multiple numbers. You can add multiple numbers to the formula. Simply add the numbers with the appropriate symbols. For example:
- The cell you entered the formula in should display 6.
Multiply Different Cells
1. Input your data. Make sure that you have the correct data in the correct cells. The formulas will use whatever number is in the cells that are referenced. Here's a sample layout:
2. Multiply two different cells. This is the most basic of the cell multiplication formulas. Click on the cell that you want the result displayed in. For our example, we'll multiply the first cell by the fourth. The formula would look like:
- The cell that you enter the formula in should display 35.
- The result will update automatically if you change any of the numbers in the reference cells. For example, if A1 was changed to 20, the result frame would change to 140.
3. Multiply a range of cells. You can multiply any connected range of cells by using the PRODUCT formula. Click on the cell that you want the result displayed in. Start the PRODUCT formula by typing:
- Select your range. After starting the formula, you can either select the range by dragging a box with your mouse, or by entering it manually. Enter the starting cell and the ending cell, separated by a ":". For our example:
- The cell that you entered the formula in should display 2100.
4. Multiply a range of numbers together by another number. You can adjust the PRODUCT formula to multiply the entire range, and then multiply that figure by another number. To do this, enter the range formula as above, and add the other number separated by a ",". For our example:
- This will take the original output, and multiply it by 2. The cell that you entered the formula in should display 4200 (a worked layout follows below).
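The worked formulas were trimmed from the original page, so here is one hypothetical layout consistent with all of the results quoted above (the cell values are assumptions for illustration):

    A1: 5    A2: 6    A3: 10    A4: 7

    =A1*A4              displays 35
    =PRODUCT(A1:A4)     displays 2100  (5 x 6 x 10 x 7)
    =PRODUCT(A1:A4,2)   displays 4200  (the range product, doubled)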
Multiply a Range of Numbers by a Number
1. Enter the number you want to multiply by. Type the number you want to multiply by into a blank cell. Right-click on it and select Copy.
2. Select your number range. Place each number in a separate cell. Once you have entered all of your numbers, select the cells. If the cells are not located next to each other, you can hold Ctrl and click on each individual cell that you want to select.
3. Multiply by the copied number. Once the cells are selected, right-click on any of the highlighted cells and click Paste Special… This will open the Paste Special menu. In the Operation section, click the Multiply option. Press the OK button.
- All of the cells that were highlighted will be changed to the result of the multiplication operation. Note that the formula will not be in the formula bar; this operation changes the numbers directly.
- When using the PRODUCT formula to calculate the product of a range, you can select more than just one column or row. For example, your range could be =PRODUCT(A1:D8). This will multiply all of the values of the cells in the rectangle defined by the range (A1-A8, B1-B8, C1-C8, D1-D8).
| http://www.wikihow.com/Multiply-in-Excel
4.15625 | A carbon price is a cost applied to carbon pollution to encourage polluters to reduce the amount of greenhouse gas they emit into the atmosphere. Economists widely agree that introducing a carbon price is the single most effective way for countries to reduce their emissions.
Climate change is considered a market failure by economists, because it imposes huge costs and risks on future generations who will suffer the consequences of climate change, without these costs and risks normally being reflected in market prices. To overcome this market failure, they argue, we need to internalise the costs of future environmental damage by putting a price on the thing that causes it – namely carbon emissions.
A carbon price not only has the effect of encouraging lower-carbon behaviour (eg using a bike rather than driving a car), but also raises money that can be used in part to finance a clean-up of "dirty" activities (eg investment in research into fuel cells to help cars pollute less). With a carbon price in place, the costs of stopping climate change are distributed across generations rather than being borne overwhelmingly by future generations.
There are two main ways to establish a carbon price. First, a government can levy a carbon tax on the distribution, sale or use of fossil fuels, based on their carbon content. This has the effect of increasing the cost of those fuels and the goods or services created with them, encouraging business and people to switch to greener production and consumption. Typically the government will decide how to use the revenue, though in one version – the so-called fee-and-dividend model – the tax revenues are distributed in their entirety directly back to the population.
The second approach is a quota system called cap-and-trade. In this model, the total allowable emissions in a country or region are set in advance ("capped"). Permits to pollute are created for the allowable emissions budget and either allocated or auctioned to companies. The companies can trade permits between one another, introducing a market for pollution that should ensure that the carbon savings are made as cheaply as possible.
To serve its purpose, the carbon price set by a tax or cap-and-trade scheme must be sufficiently high to encourage polluters to change behaviour and reduce pollution in accordance with national targets. For example, the UK has a target to reduce carbon emissions by 80% by 2050, compared with 1990 levels, with various intermediate targets along the way. The government's independent adviser, the Committee on Climate Change, estimates that a carbon price of £30 per tonne of carbon dioxide in 2020 and £70 in 2030 would be required to meet these goals.
Currently, many large UK companies pay a price for the carbon they emit through the EU's emissions trading scheme. However, the price of carbon through the scheme is considered by many economists to be too low to help the UK to meet its targets, so the Treasury plans to make all companies covered by the scheme pay a minimum of £16 per tonne of carbon emitted from April 2013.
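As a back-of-the-envelope illustration of what that £16-per-tonne floor means for a covered company (the emitter and its tonnage below are hypothetical), here is a Python sketch:

    def carbon_cost(tonnes_co2, price_per_tonne=16.0):
        # UK floor price of 16 pounds per tonne from April 2013, as stated above
        return tonnes_co2 * price_per_tonne

    print(carbon_cost(250_000))  # a 250,000-tonne emitter would owe 4,000,000 pounds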
Ideally, there should be a uniform carbon price across the world, reflecting the fact that a tonne of carbon dioxide does the same amount of damage over time wherever it is emitted. Uniform pricing would also remove the risk that polluting businesses flee to so-called "pollution havens"' – countries where a lack of environmental regulation enables them to continue to pollute unrestrained. At the moment, carbon pricing is far from uniform but a growing number of countries and regions have, or plan to have, carbon pricing schemes in place, whether through cap-and-trade or carbon taxes. These include the European Union, Australia, South Korea, South Africa, parts of China and California.
• This article was written by Alex Bowen of the Grantham Research Institute on Climate Change and the Environment at LSE in collaboration with the Guardian
This editorial is free to reproduce under Creative Commons
This post by The Guardian is licensed under a Creative Commons Attribution-No Derivative Works 2.0 UK: England & Wales License.
Based on a work at theguardian.com | http://www.theguardian.com/environment/2012/jul/16/carbon-price-tax-cap |
4.53125 | Central cylindrical projection
The Central cylindrical projection is a cylindrical map projection. This is achieved by projecting, from the center of the Earth (hence perpendicularly to the surface), the Earth's surface onto a cylinder tangent to the equator. The cylinder is then cut along one of the projected meridians and unrolled into a flat map.
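In formula terms, projecting from the centre of a sphere of radius R onto a cylinder tangent at the equator gives x = R*lambda and y = R*tan(phi), where lambda is the longitude from the central meridian and phi is the latitude. A minimal Python sketch (spherical Earth assumed):

    import math

    def central_cylindrical(lon_deg, lat_deg, R=6371.0):
        # x grows linearly with longitude; y grows as tan(latitude), so the
        # poles (+/-90 degrees) map to infinity - the source of the extreme
        # distortion described below.
        lam = math.radians(lon_deg)
        phi = math.radians(lat_deg)
        return R * lam, R * math.tan(phi)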
The distortion in the regions beyond the equator is so pronounced (much worse than in the Mercator projection, which is sometimes erroneously presented as the central cylindrical) that the central cylindrical is not frequently used as a practical projection.
It is not known who first developed the projection, but it appeared with other new cylindrical projections in the 19th century, and regularly finds its way into textbooks (chiefly to illustrate that this is not the way Mercator is constructed).
This projection has prominent use in panoramic photography where it is usually called “cylindrical projection”. It can present a full 360° panorama and preserves vertical lines; also, unlike equirectangular and Mercator, it preserves scale along vertical objects such as buildings, which is important for architectural scenes.
| https://en.wikipedia.org/wiki/Central_cylindric_projection
4.40625 | * Open House is scheduled for this Thursday, September 24th from 6-7pm. The children have been working hard and would love to show off all their work to you.
Reading/Language Arts: Unit 1 Week 3 Communities
The theme this week is communities.
Our essential question is: How do people from different cultures contribute to a community?
Comprehension Strategy: Ask and Answer Questions means as they read, good readers ask questions about the story and look for answers in the text.
Comprehension Skill: Sequence: Sequence is the order in which a story’s events happen: the beginning, middle, and end. Clue words: first, next, then, and finally.
Genre: Narrative Nonfiction: Autobiography - tells the story of a person’s life, tells events in chronological order, and is written by the author about the author’s own life.
Compound Words- two smaller words that make one word. The smaller words tell you the meaning of the compound word.
Good Writers have a clear beginning, middle and end. We will write a “How To” on popping popcorn first and then carry it over to a paragraph on how to carve a pumpkin.
Spelling: Our spelling words have the final e sound.
During math this week, our class will model division as equal sharing. We started this lesson on Friday with the book The Doorbell Rang. The class will learn that one way to divide is to find the number in each group. This can be done by equal sharing. Our next lesson of the week will show how division and subtraction are related and use models, such as a number line, to show this. We will check our progress after this to make sure we understand the concepts so far.
Our exploration of rocks continues this week. Students will explore 12 rocks and act as scientists, grouping the rocks according to how they believe they should be grouped. When this process is complete, they will learn that there are three types of rocks: igneous, sedimentary, and metamorphic. The class will also complete the rock cycle wheel and sing a song! | http://staff.bbhcsd.org/kucharskie/page/2/
4.1875 | Dr. Martin Luther King, Jr. was one of the greatest leaders of our time, and I believe it's important to honor him and recognize the impact that he had on America. However, as a classroom teacher I struggled to find appropriate teaching materials for elementary students on this topic. Most resources were either too vague to be effective or too detailed and overwhelming for 4th and 5th graders. My students were old enough to start learning about racial inequality and the civil rights movement, and they needed to have opportunities to explore and discuss these concepts.
So I was excited to discover that BrainPOP.com has a wonderful free video called Dr. Martin Luther King, Jr. that's perfect for upper elementary students! It's just 4 minutes long and features two lovable animated characters, Tim and Moby. Somehow Tim and Moby manage to explain the important events in Dr. King's life and his impact on the civil rights movement in a way that's very easy to understand.
BrainPOP.com has a number of resources to go along with the video like an online quiz and several activity pages. But I wanted to create a few more items to include cooperative learning lessons, extended vocabulary practice, and discussion cards. I must admit that I got a bit carried away! I ended up creating a 13-page packet of supplementary materials! The packet is aligned with CCSS standards for informational text for grades 3 through 6, although the lessons might take a bit of adapting if used with 3rd graders. The discussion activities are also aligned with speaking and listening standards.
During the month of January, you can download this teaching packet for free by clicking on the image, which will take you to my Seasonal Page on Teaching Resources. You can find loads of other seasonal activities there, too! If you don't find it there, sign up for my email newsletter, Candler's Classroom Connections, and look for the link to Laura's Best Freebies; you'll find this packet on the private page of freebies for subscribers.
If you like this free lesson, you'll enjoy my January Activities Mini Pack which has a character map activity that involves listening to a story and recording details about Dr. King's life. I hope you enjoy these activities, and please follow me on TpT to be sure you receive notifications when I add new freebies!
Show N Tell Tuesday! She selected 5 freebies to feature and then opened the blog post up to other freebies through a link up. So far there are 16 more freebies for Dr. Martin Luther King, Jr. Be sure to check it out! | http://corkboardconnections.blogspot.com/2012/01/honoring-dr-martin-luther-king-jr.html |
4.09375 | Greenhouse gases are generally any gas molecules or compounds in the atmosphere which interact with outgoing infrared radiation from the Earth's surface and thus contribute to the Greenhouse Effect by warming the atmosphere. The major constituents of the atmosphere such as nitrogen (N2) and oxygen (O2) do not interact with infrared radiation and, thus, are not greenhouse gases. The major greenhouse gas in the atmosphere is water vapour (H2O), but it has a short lifetime in the atmosphere as part of the natural water cycle. The human emissions of water vapour are not considered to have altered its effect. Hence, it is not regarded as an anthropogenic greenhouse gas. The six major types of anthropogenic greenhouse gases (listed in Annex A of the Kyoto Protocol) are: carbon dioxide (CO2); methane (CH4); nitrous oxide (N2O); hydrofluorocarbons (HFCs); perfluorocarbons (PFCs) and sulphur hexafluoride (SF6).
The Greenhouse Effect is the name given to the natural properties of the atmosphere that allow incoming solar energy to pass through the atmosphere without warming it, while outgoing infrared radiation warms the atmosphere. The natural Greenhouse Effect is essential for life on the Earth and keeps the average temperature about 33 °C warmer than if there were no greenhouse gases and clouds in the atmosphere. Under ideal conditions the global energy balance is in equilibrium, i.e. 342 W/m² of incoming solar radiation is balanced by 235 W/m² of outgoing long-wave radiation and 107 W/m² of reflected solar radiation. Disruption of this system causes global warming or cooling.
Emissions of different greenhouse gases have different effects on climate change due to the difference in the parts of the infrared spectrum that they interact with and also due to the duration of their presence in the atmosphere. Some gases persist in the atmosphere for extremely long periods of time and continue to have an effect on it. For example, while the bulk of CO2 from fossil fuels will be removed in several centuries, about 25% will continue to affect the atmosphere for many thousand years.
Measurement of Greenhouse Gases
To account for the different effects of greenhouse gases in a single measurement, the Kyoto Protocol uses a unit known as "carbon dioxide equivalent" (CO2-e or CO2-eq). This term is used in two different ways to provide a single metric, both for greenhouse gas emissions (in tonnes of CO2-e) and for the greenhouse gas concentration in the atmosphere (in ppm CO2-e). When used in the context of greenhouse gas emissions, it refers to the amount of CO2 that would give the same warming effect as the greenhouse gas or greenhouse gases being emitted, over a given timeframe. Generally, 100 years is the standard timeframe that is used.
The amount of a greenhouse gas can be converted to carbon dioxide equivalents by multiplying it by its Global Warming Potential. The Global Warming Potential (GWP) of each greenhouse gas is different, depending upon the chemical and physical properties of the molecules. The Intergovernmental Panel on Climate Change has set the Global Warming Potential of CO2 as 1. In contrast, the GWP of methane (CH4) over a 100-year timeframe is 21; therefore, emission of 1 tonne of methane (CH4) equals 21 tonnes of CO2-e over 100 years.
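A minimal sketch of that conversion in Python (the GWP values are the 100-year figures used under the Kyoto Protocol, from the IPCC Second Assessment Report; treat the table as illustrative):

    GWP_100 = {"CO2": 1, "CH4": 21, "N2O": 310, "SF6": 23900}

    def co2_equivalent(tonnes, gas):
        # Multiply an emission by its 100-year Global Warming Potential
        return tonnes * GWP_100[gas]

    print(co2_equivalent(1, "CH4"))  # 1 tonne of methane -> 21 tonnes CO2-e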
Simplifying GHG Emissions
In reality, it is very difficult to conceptualize precisely what a "tonne of carbon dioxide" or other greenhouse gas is, as many people are not familiar with the nature of greenhouse gases in a technical sense and they are invisible to human eyes. For example, CO2 is a colourless and odourless gas, which cannot be perceived through direct senses. Furthermore, since the volume occupied by a gas varies with temperature and pressure, at sea level and 25 °C, one tonne (1000 kg) of pure CO2 occupies 556.2 m³ of space, which is equivalent to an invisible cube slightly larger than 8 m x 8 m x 8 m.
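That 556 m³ figure can be reproduced with the ideal gas law, PV = nRT, assuming 25 °C and 1 atm (a rough check, not from the original article):

    MOLAR_MASS_CO2 = 44.01             # g/mol
    R_GAS = 0.082057                   # L*atm/(mol*K)

    moles = 1_000_000 / MOLAR_MASS_CO2          # one tonne of CO2 in moles
    volume_m3 = moles * R_GAS * 298.15 / 1000   # litres converted to cubic metres
    print(volume_m3)  # about 556; the cube root is roughly 8.2 m per side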
In an attempt to turn greenhouse gases into something visual, tangible and measurable for the public, the Victorian government has used the analogy of "black balloons" escaping into the atmosphere as a visual image for greenhouse gases in a major campaign for energy efficiency to reduce greenhouse gas emissions. The advertisements show electrical appliances producing black balloons, each of which represents 50 g of greenhouse gas, floating up from houses into the sky.
| http://www.cleantechloops.com/greenhouse-gases/
4.03125 | Types of Viral Disorders
Categorizing viral infections by the organ system most commonly affected (eg, lungs, GI tract, skin, liver, CNS, mucous membranes) can be clinically useful, although certain viral disorders (eg, mumps) are hard to categorize. Many specific viruses and the disorders they cause are also discussed elsewhere in The Manual.
The most common viral infections are probably URIs. Respiratory infections are more likely to cause severe symptoms in infants, the elderly, and patients with a lung or heart disorder.
Respiratory viruses include the epidemic influenza viruses (A and B), H5N1 and H7N9 avian influenza A viruses, parainfluenza viruses 1 through 4, adenoviruses, respiratory syncytial virus A and B, human metapneumovirus, and rhinoviruses (see Some Respiratory Viruses and see Respiratory Viruses). In 2012, a novel coronavirus, Middle East respiratory syndrome coronavirus (MERS-CoV—see Middle East Respiratory Syndrome (MERS)), appeared in Saudi Arabia; it can cause severe acute respiratory illness and is sometimes fatal. Respiratory viruses are typically spread from person to person by contact with infected respiratory droplets.
Some Respiratory Viruses
Gastroenteritis is usually caused by viruses (see Gastroenteritis) and transmitted from person to person by the fecal-oral route. The age group primarily affected depends on the virus:
Local epidemics may occur in children, particularly during colder months.
The main symptoms are vomiting and diarrhea.
No specific treatment is recommended, but supportive care, particularly rehydration, is important.
A rotavirus vaccine that is effective against most pathogenic strains is part of the recommended infant vaccination schedule (see Table: Recommended Immunization Schedule for Ages 0–6 yr). Hand washing and good sanitation measures can help prevent spread.
Some viruses cause only skin lesions (as in molluscum contagiosum and warts—see Viral Skin Diseases); others also cause systemic manifestations or lesions elsewhere in the body (see Some Exanthematous Viruses). Transmission is typically from person to person; alphaviruses have a mosquito vector.
Some Exanthematous Viruses
At least 5 specific viruses (hepatitis A, B, C, D, and E viruses) can cause hepatitis; each causes a specific type of hepatitis (see Viral Hepatitis and see Hepatitis). Hepatitis D virus can infect only when hepatitis B is present. Transmission is from person to person by contact with infected blood or body secretions or by the fecal-oral route for hepatitis A and E.
Other viruses can affect the liver as part of their disease process. Common examples are cytomegalovirus, Epstein-Barr virus, and yellow fever virus. Less common examples are echovirus, coxsackievirus, and herpes simplex, rubeola, rubella, and varicella viruses.
Most cases of encephalitis are caused by viruses (see Some Neurologic Viruses and Brain Infections). Many of these viruses are transmitted to humans by blood-eating arthropods, mainly mosquitoes and ticks (see Arboviridae, Arenaviridae, and Filoviridae); these viruses are called arboviruses (arthropod-borne viruses). For such infections, prevention includes avoiding mosquito and tick bites.
Some Neurologic Viruses
Certain viruses cause fever and a bleeding tendency (see Some Viruses That Cause Hemorrhagic Fever and Arboviridae, Arenaviridae, and Filoviridae). Transmission may involve mosquitoes, ticks, or contact with infected animals (eg, rodents, monkeys, bats) and people. Prevention involves avoiding the means of transmission.
Some Viruses That Cause Hemorrhagic Fever
Some viruses cause skin or mucosal lesions that recur and may become chronic (see Some Viruses That Cause Recurrent or Chronic Skin or Mucosal Lesions). Mucocutaneous infections are the most common type of herpes simplex virus infection (see Herpes Simplex Virus (HSV) Infections). Human papillomavirus causes warts (see Warts); some subtypes cause anogenital and oropharyngeal cancer (see Genital Warts and Cervical Cancer). Transmission is by person-to-person contact.
Some Viruses That Cause Recurrent or Chronic Skin or Mucosal Lesions
Enteroviruses, which include coxsackieviruses and echoviruses (see Enteroviruses), can cause various multisystem syndromes, as can cytomegaloviruses (see Some Viruses That Cause Multisystem Disease and Cytomegalovirus (CMV) Infection). Transmission is by the fecal-oral route.
Some Viruses That Cause Multisystem Disease
Some viruses cause nonspecific symptoms, including fever, malaise, headaches, and myalgia (see Some Viruses That Cause Nonspecific Acute Febrile Illness and see Table: Arbovirus, Arenavirus, and Filovirus Diseases). Transmission is usually by an insect or arthropod vector.
Rift Valley fever rarely progresses to ocular disorders, meningoencephalitis, or a hemorrhagic form (which has a 50% mortality rate).
Some Viruses That Cause Nonspecific Acute Febrile Illness
| http://www.merckmanuals.com/professional/infectious-diseases/viruses/types-of-viral-disorders
4.25 | In order to grasp the concept of molar mass calculations it is important to understand the molar unit. The mole, also called mol, is the basic unit of measurement in chemistry. By definition, in modern chemistry, one mole represents the number of carbon atoms in exactly 12 grams of the carbon-12 isotope. Remember that carbon-12 has an atomic mass of 12 (six neutrons and six protons).
One mole of anything, however, contains 6.0221367E23 of that object. This is known as Avogadro's number.
Obviously it would be impossible to count out 6.0221367E23 atoms. Remember, however, that 1 mole of carbon-12 = 12 grams = 6.0221367E23 atoms. It has been established that 1 mole of any element = the atomic mass of that element expressed in grams. Since magnesium has an atomic mass of 24, one mole of magnesium weighs 24 grams and contains 6.0221367E23 atoms of magnesium. A mole of any molecule = the molecular mass of that molecule expressed in grams. In order to determine the weight of one mole of bananas, one would have to get an average weight of a banana and multiply that by 6.0221367E23; then we could weigh out that weight of bananas and presto, we would have a mole of bananas. Of course, nobody would ever do that. It just demonstrates that a mole of anything = 6.0221367E23 and we can measure out a mole of something by counting it or by weighing it out. Since atoms are too small to count, we must weigh out a mole of atoms.
Molar mass is a unit that enables scientists to calculate the weight of any chemical substance, be it an element or a compound. Molar mass is the sum of all of the atomic masses in a formula. Once one determines the molar mass of a substance, it will be easy to measure out one mole of that substance.
To calculate the molar mass of a substance, complete the following steps (we will use sulfuric acid, H2SO4, as an example):
In this example the results have been rounded off to the correct number of decimal places. (Since the atomic mass average of sulfur given above only has 3 decimal places, accuracy cannot be determined beyond that point.)
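The H2SO4 calculation can also be written as a short Python sketch (the atomic masses below are typical periodic-table values at the precision used here):

    ATOMIC_MASS = {"H": 1.008, "S": 32.066, "O": 15.999}  # g/mol

    def molar_mass(composition):
        # Multiply each element's atomic mass by its subscript and sum
        return sum(ATOMIC_MASS[el] * n for el, n in composition.items())

    print(molar_mass({"H": 2, "S": 1, "O": 4}))  # H2SO4 -> about 98.08 g/mol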
These calculations are necessary before one can determine the molarity or normality of a solution, and for many other formulas in stoichiometry (the quantitative relationships between chemical substances in a chemical equation).
Formulas can have a max of two brackets open at the same time, and the molecule of crystallization must be placed last.
If you need to cite this page, you can copy this text:
Roberta Barbalace, Kenneth Barbalace. Molar Mass Calculations and Molecular Weight Calculator. EnvironmentalChemistry.com. 1996. Accessed on-line: 2/13/2016 | http://environmentalchemistry.com/yogi/reference/molar.html |
4.3125 | Gastroenteritis is a condition that causes irritation and inflammation of the stomach and intestines (the gastrointestinal tract).
The most common symptoms of gastroenteritis are
Many people also refer to gastroenteritis as "stomach flu." This can sometimes be confusing because influenza (flu) symptoms include:
- muscle aches and pains, and
- respiratory symptoms, but influenza does not involve the gastrointestinal tract.
The term stomach flu presumes a viral infection, even though there may be other causes of infection.
Viral infections are the most common cause of gastroenteritis; but bacteria, parasites, and food-borne illnesses (such as from shellfish that has been contaminated by sewage or from consuming raw or undercooked shellfish from contaminated water) can also be the offending agents. Many people who experience vomiting and diarrhea that develops from these types of infections or irritations think they have "food poisoning," when they actually may have a food-borne illness.
Travelers to foreign countries may experience "traveler's diarrhea" from contaminated food and unclean water.
The severity of infectious gastroenteritis depends on the immune system's ability to resist the infection. Electrolytes (these include essential chemicals like sodium, potassium and chloride) may be lost in vomit and diarrhea fluid.
Most people recover easily from a short episode of vomiting and diarrhea by drinking clear fluids to replace the fluid that was lost and then gradually progressing to a normal diet. But for others, especially infants and the elderly, the loss of bodily fluid with gastroenteritis can cause dehydration, which can be a life-threatening illness unless it is treated and fluids in the body are replaced.
The most recent data from the CDC show that deaths from gastroenteritis have increased dramatically. In 2007, 17,000 people died from gastroenteritis; overwhelmingly, these people were older, and the most common infections were Clostridium difficile and norovirus.
Gastroenteritis has many causes. Viruses and bacteria are the most common.
Viruses and bacteria can be contagious and can spread through the consumption of contaminated food or water. In up to 50% of diarrheal outbreaks, no specific agent is found. The infection can spread from person to person because of improper handwashing following a bowel movement or handling a soiled diaper.
Gastroenteritis caused by viruses may last one to two days. However, some bacterial cases can continue for months.
| http://www.emedicinehealth.com/gastroenteritis/article_em.htm
4.09375 | Chloroplasts, the parts of plant cells responsible for photosynthesis, are green because they contain the pigment chlorophyll, which absorbs the red and blue wavelengths of light and reflects back the green wavelengths. Chlorophyll absorbs particular colors of light to provide the right amount of energy for photosynthesis to take place. Photosynthesis is the process by which plants use the energy from the sun to convert carbon dioxide from the air into carbohydrates for food.
Chloroplasts are found in cells located in the leaves of plants. The chlorophyll in the chloroplasts gives the leaves their distinct green coloring. Chloroplasts develop fully when they are exposed to light.
Each chloroplast has a double layer of membranes that protects its structures. Stroma, a thick, enzyme-rich fluid, fills the area enclosed by the membranes. Layered structures called grana, which contain chlorophyll, are located throughout the stroma.
When a chloroplast is exposed to light, photosynthesis starts. The light is absorbed by the chlorophyll and converted to chemical energy in the grana. Then enzymes in the stroma begin a series of reactions which use the chemical energy to convert carbon dioxide molecules into carbohydrate molecules. The plant uses the carbohydrates for growth and respiration. Extra carbohydrates are stored for later use.
In plant cells, chloroplasts perform photosynthesis, a process that converts light energy from the sun into chemical energy in the form of glucose. Plants can later use this stored chemical energy to carry out activities integral to life, such as growth and reproduction.
Chloroplasts are the organelles found in plant cells that contain chlorophyll and undergo photosynthesis. The chlorophyll pigments found in the chloroplasts are the main pigments utilized in photosynthetic reactions.
Onion cells lack chloroplasts because the onion is part of the plant that is not involved in photosynthesis. The part of the plant eaten by humans is called the bulb, and it resides at the base of the plant. The bulb’s primary purpose is energy storage and holding the flower for the second growing season. Growing near the ground, the bulb is in poor position to collect sunlight.
In oxygenic photosynthesis, cyanobacteria and the chloroplasts of plants and algae turn carbon dioxide, water and photons into carbohydrates and free oxygen. Only some of this oxygen is used by the organism; the rest is released into the atmosphere. | http://www.ask.com/science/chloroplasts-green-8c9401bf3dc89c3d
4.25 | Nine instructive, fun chemistry puzzles that supplement topics related to the STRUCTURE of the ATOM, ELEMENTS and their NAMES and SYMBOLS, PERIODIC TABLE arrangement and use, and BOHR Models are in this file.
Your students will learn while enjoying
Ten instructive, fun chemistry puzzles and worksheets that supplement topics related to COMPOUNDS, EMPIRICAL FORMULAS, IONIC ATTTRACTION/BONDING, NAMING COMPOUNDS, AVOGADRO'S NUMBER, NUCLEAR CHEMISTRY and more are here.
Graphic organizers for
Fun puzzles that teach and familiarize students with terminology, in attractive formats.
Included: Solids, Gasses, and Liquids, Graham's and Boyle's Law, Carbon and Hydrocarbon, Alkali and pH.
Students love these puzzles. Answer keys provided for
I use this PowerPoint series for my Unit I- Matter and Change- in my high school chemistry class, but this could also be used for other science classes. Within the 35 slides, the concept of matter is introduced and further explained in three parts: | https://www.teacherspayteachers.com/Store/Science-By-Verellen |
4.0625 | TODAY, we will look at Section C which makes up a substantial part of the SPM 1119 English Paper Two. This section consists of two parts – Reading Comprehension and Summary Writing, which carry 10 and 15 marks respectively.
The reading comprehension questions aim to test your understanding of the passage as well as vocabulary. Among the skills tested are recognising general and specific ideas, finding important details and guessing meaning from context.
Guidelines for comprehension
1. Read the whole passage through once to get a general idea of what the passage is about. Do not worry if you come across unfamiliar words. Sometimes, it is not necessary to understand every word you read.
2. Read the passage a second time, if necessary. The second reading helps you take in the details and improve your understanding.
3. Read the questions carefully. Use cue words in the questions to help you answer the questions. These can be the “wh” words (what, when, where, why, who, whose, how) and action verbs (identify, find, list).
4. Questions sometimes contain words found in the passage. Use these words to help you identify the part of the passage where the answer can be found.
5. You can lift clauses or sentences from the passage to answer questions. You do not have to use your own words unless you are told to do so. Moreover, there is a danger in paraphrasing – you might alter/distort the meaning expressed in the passage.
6. For questions on vocabulary, if you are asked for a word, then give only ONE word and nothing else. Make sure you spell the word correctly. If you are asked for a phrase, then give the relevant phrase.
7. Some questions require you to use your own words and you must do so.
8. Do pay attention to the tense used in the questions when formulating your answers.
Pitfalls to avoid
1. Do not give more than the required information. Sometimes, students copy chunks from a text, giving two or more sentences. This only highlights their weakness – failure to understand the question and/or text.
2. Do not give two or more answers to a question. Some students write down all the possible answers to a question just to be on the safe side.
3. Do not waste time paraphrasing answers unless you are asked to do so.
Many students are concerned about summary writing for several reasons: they are unable to identify information relevant to the answer and are unable to put the information together into a coherent paragraph. Weak students have an additional problem to grapple with – language. While these concerns are genuine, there is no reason to fret as these problems can be easily overcome with proper guidance and help from teachers.
Let me remind you that summary writing is not a writing skill. It is largely a reading skill (you are required to select relevant information in the text) with a bit of writing thrown in (you have to string the points together into a unified text).
The task is made easier for you as you do not need to summarise the whole text, only certain aspects (usually two). Therefore, it is crucial that you read the question carefully and consider what information is relevant.
Remember, you need to identify at least 10 points (for content). So do not worry too much about paraphrasing. Focus on getting marks for content, not language.
Summary writing involves specific skills such as the following:
Selection – This means choosing information that is relevant to your answer. Information that is relevant to your answer depends on the aspect(s) of the text you are to summarise.
Condensation – This means reducing the length of the given information while preserving the important points. This can be done by omitting unimportant details, or using single words to replace phrases or clauses.
Reorganisation or rearrangement – This means taking the given information and arranging it in a different way.
Paraphrasing or restatement – This means saying something in a different way, without changing the meaning.
Guidelines for summary writing:
1. Read the question carefully. Ask yourself: “What am I required to summarise”.
2. Mark the first and last lines of the passage you are asked to refer to.
3. Then select information that is relevant to your answer. To do this, underline the relevant lines or ideas as you read the text. Always ask yourself: “Is this ...?” (For the summary below, you would ask: “Is this what Yunus did to help the poor? Is this an improvement in the lives of the women?”).
4. Look through the lines/ideas you have underlined.
5. Summarise these ideas, using condensation, reorganisation or paraphrasing skills.
6. If you cannot paraphrase ideas, see if there are words in the text that you can replace.
7. Begin the summary with the 10 words given and remember that the three dots after the tenth word mean you have to complete the sentence with some relevant information from the text.
8. Organise the ideas/points in the manner in which they are found in the text.
9. Adhere to the word limit. Points made beyond the word limit will not earn you any marks, while a summary that falls far short of the limit means you lack content.
10. Pay attention to the tense (and sometimes pronoun) used in the given 10 words.
11. Write the summary in one paragraph.
Pitfalls to avoid
1. Do not include information not in the text.
2. Do not include your own ideas or opinions.
3. Do not spend too much time paraphrasing as you might end up losing marks for content unless you can do so without altering/distorting meaning.
4. Do not repeat ideas. Sometimes, an idea is repeated in the text and you may not notice it as it may have been paraphrased.
5. Do not include material from other lines in the text.
Let’s take a look at a sample reading text.
Study the passage below and see how the questions that follow have been answered. The answers to the comprehension questions have been put in bold in the passage while those to the summary have been underlined.
1 Dr Muhammad Yunus, a banker and economist, was awarded the Nobel
Peace Prize in 2006 for his efforts to improve the economic and social
conditions of the disadvantaged poor in Bangladesh. Hailed as a rare
visionary, Yunus, who leads a simple life, wants to eradicate poverty from
the world. 5
2 Dr Yunus’ fight against poverty began during the famine of 1974 which killed
1.5 million Bangladeshis. As a professor of economics at Chittagong
University, he was teaching his students that the longer you work the more
you earn. Yet, this economic theory did not seem to work in Bangladesh. He
was dumbfounded to learn that people were starving despite working 12
hours every day of the week. 10
3 With the help of his students he set out to learn why these people were living
in poverty. In the village of Jobra, near Chittagong University he came across
women who made bamboo furniture. These women had to borrow money
from moneylenders to buy the raw materials needed to make bamboo stools. They were also forced to sell these stools to the moneylenders to repay
them. Their profit of 0.50 Bangladeshi taka was barely enough for them to
support their families. 15
4 Dr Yunus discovered that these people were at the mercy of moneylenders who
charged high interest rates for loans given out. They had no choice but to turn to moneylenders because traditional banks refused to give them small
loans at affordable interest rates. Moreover, the banks considered these
people repayment risks. 20
5 This prompted him to set up Grameen Bank in 1976 which gives out small
loans or microcredit to destitute Bangladeshis. Loans as little as US$30 are
given to very poor people to start their own businesses. Grameen has certain
conditions – borrowers must be women. 96% of Grameen’s borrowers
are women. Yunus discovered that women were more careful and
responsible about their loans as 99% of them usually repay their loans. 25
6 Another brainchild of Dr Yunus’ is the Grameen Phone or Village Phone. With
the Village Phone, the rural population of Bangladesh are now able to enjoy
phone connectivity. Besides, this is another project which provides rural
women with business opportunities. It works on the same principles as
Grameen Bank, where rural women are given small loans to buy cellular
phones so that they can set up “public call centres” at their homes. The
women then use the money they earn to settle their loans. 35
7 Dr Yunus’ ideas have saved not only the poor from death but also given a new
strength to women. Before Grameen, many Bangladeshi women were
viewed as useless and a burden by their fathers and husbands. This largely
stems from the traditional view that the man is the sole breadwinner in a
family. With no means to earn money, some of these women especially
widows were forced to beg. Now, with microcredit, the women have proven
to the men that they too are capable of taking care of their families and
supporting them financially. Their self-esteem has also improved as they are
now active financial contributors to the family. With more money in hand
these women are able to provide better nutrition for their children. Grameen
is also responsible for the improved social status of women as men seem to
show them more respect. 45
8 Today Grameen also provides education and housing loans. Financing is also
available for irrigation projects and other economic activities. But its main
principle remains – helping the poor. Despite the success of Grameen, Dr Yunus
is still not satisfied with the changes he has brought about. There is a lot
more work that needs to be done to improve the living conditions of the poor.
Dr Yunus hopes to use part of the US$1.4 million
award he received to set up a company to produce low-cost, high-nutrition
food for the poor. The other plan he has is to set up an eye hospital
for the poor in Bangladesh. 50
Source: adapted from http://en.wikipedia.org/wiki/Grameen_Bank and
1. From paragraph 1,
(a). why was Dr Yunus given the Nobel Peace Prize?
(b). what is Dr Yunus’ aim in life?
2. (a). From paragraph 2, what is “this economic theory”?
(b). From paragraph 3, what did the women do with the money they borrowed from the moneylenders?
3. From paragraph 4, give two reasons why poor Bangladeshis could not obtain loans from traditional banks.
4. From paragraph 5,
(a). why are 96% of Grameen’s borrowers women?
(b). find a word which has the same meaning as “poor”.
5. (a). From paragraph 7, how has Grameen changed men’s treatment of women?
(b). From paragraph 8, provide evidence that proves that Yunus is a selfless man.
Dr Muhammad Yunus is truly a selfless person who has dedicated his life to helping poor people.
Write a summary about:
· what Dr Yunus has done to help the poor and
· how his ideas have helped change the lives of women.
Your summary must
· be in continuous writing
· not be longer than 130 words, including the 10 words given below.
· use material from lines 24 to 49
Credit will be given for use of own words but care must be taken not to change the original meaning.
Begin your summary as follows:
Dr Yunus’ mission to help the poor improve their lives began …
What Dr Yunus has done to help the poor
· set up Grameen Bank
· which provides small loans to poor Bangladeshis
· to help them start their own businesses
· set up Grameen Phone
· rural population of Bangladesh are now able to enjoy phone connectivity
· provides rural women with business opportunities
How his ideas have helped change the lives of women
· the women have proven to the men that they too are capable of taking care of their families and supporting them financially
· their self-esteem has also improved
· they are now active financial contributors to the family
· with more money in hand they are able to provide better nutrition for their children
· improved social status of women as men show them more respect
Dr Yunus’ mission to help the poor improve their lives began when he set up Grameen Bank. Grameen provides small loans to poor Bangladeshis to enable them to start their own businesses. Besides Grameen Bank, Dr Yunus also set up Grameen Phone, which enables the rural population of Bangladesh to enjoy phone connectivity. This venture also provides rural women with business opportunities. Dr Yunus’ ideas have helped change the lives of women for the better. They are now capable of taking care of their families and supporting them financially. Their self-esteem has improved as they now take an active role contributing financially. With more financial control, they are able to provide better nutrition for their children. Most importantly, men show them more respect now than they did earlier. (126 words)
Note: This summary has deliberately not been paraphrased to show you that you can actually lift ideas/sentences from the passage to write a coherent piece.